US20110261224A1 - Digital camera and storage medium for image signal processing for white balance control - Google Patents

Digital camera and storage medium for image signal processing for white balance control

Info

Publication number
US20110261224A1
Authority
US
United States
Prior art keywords
image
processing
data
white balance
circuit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/067,811
Inventor
Masahiro Suzuki
Michihiro Tamune
Zhe-Hong Chen
Masahiro Juen
Yutaka Tsuda
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nikon Corp
Original Assignee
Nikon Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP10183921A external-priority patent/JP2000023085A/en
Priority claimed from JP10183920A external-priority patent/JP2000023184A/en
Priority claimed from JP10183918A external-priority patent/JP2000023083A/en
Priority claimed from JP10183919A external-priority patent/JP2000023084A/en
Priority claimed from JP23732198A external-priority patent/JP4182566B2/en
Priority claimed from JP21329999A external-priority patent/JP4281161B2/en
Application filed by Nikon Corp filed Critical Nikon Corp
Priority to US13/067,811 priority Critical patent/US20110261224A1/en
Publication of US20110261224A1 publication Critical patent/US20110261224A1/en
Priority to US13/848,424 priority patent/US8878956B2/en
Abandoned legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N5/772Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2101/00Still video cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/775Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television receiver
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/78Television signal recording using magnetic recording
    • H04N5/781Television signal recording using magnetic recording on disks or drums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/907Television signal recording using static stores, e.g. storage tubes or semiconductor memories
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/7921Processing of colour television signals in connection with recording for more than one processing mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
    • H04N9/8042Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
    • H04N9/8047Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction using transform coding

Definitions

  • the present invention relates to a digital camera that stores an image of a subject in memory as electronically compressed image data, and to a storage medium that stores an image signal processing program. Furthermore, the present invention relates to a carrier wave that is encoded to transmit a control program for white balance adjustment of image data. It also relates to an electronic camera that allows selection to be made between recording of irreversible image data and recording of raw data.
  • Electronic still cameras in the known art include a type provided with a viewfinder device to which a subject image having passed through a taking lens is guided by a quick return mirror, an image-capturing device such as a CCD that is provided rearward of the quick return mirror to capture the subject image and output image data, an image processing circuit that performs image processing such as white balance adjustment and gamma correction on the image data output by the image-capturing device, a compression circuit that compresses the data which have undergone image processing through a method such as JPEG and stores the data in a storage medium such as a flash memory, and a monitor that displays the data having undergone the image processing.
  • a viewfinder device to which a subject image having passed through a taking lens is guided by a quick return mirror
  • an image-capturing device such as a CCD that is provided rearward of the quick return mirror to capture the subject image and output image data
  • an image processing circuit that performs image processing such as white balance and gamma correction on the image data output
  • parameters such as the R gain and the B gain for white balance adjustment or the gradation curve for gamma correction are calculated based upon a preset algorithm.
  • the image data are converted to 16×8 sets of brightness data Y and 8×8 sets of Cr and Cb color difference data for JPEG compression.
  • the image-capturing device in such an electronic still camera in the prior art structured as described above presents the following problems.
  • Both the image pre-treatment such as white balance or gamma correction and the image post-treatment, in which the data that have undergone the image pre-treatment are formatted for the JPEG compression, are performed in units of individual lines in correspondence to the read performed at the CCD. Because of this, in a high image quality electronic still camera with the number of pixels at the CCD exceeding two million, the capacity of the line buffer memory employed for pipeline operation and the like, is bound to be very large, resulting in the camera becoming expensive. This problem may be explained as follows.
  • N×M sets of image data corresponding to one screen output by the image-capturing element are output in point sequence in units of individual lines.
  • a line buffer memory supporting four lines is required if the filtering processing is to be performed in sets of 5×5. In other words, the processing can be performed only when image data corresponding to four lines have been accumulated in the memory.
  • Such a line buffer memory supporting four lines is required for each of the various types of processing such as filtering processing and interpolation processing.
  • if a line buffer memory that supports four lines is provided at a 1-chip processing IC for each of the various types of processing required, such as the filtering processing and the interpolation processing described above, the ratio of the area occupied by the memory increases, which leads to an increase in the number of gates at the 1-chip processing IC, resulting in higher cost.
  • the cost will be especially high.
  • if the line buffer memory is provided outside the 1-chip processing IC, twenty 10-bit input/output pins, for instance, will be required. This means that 20 input/output pins will be necessary for each line buffer memory, resulting in an increase in the package size of the 1-chip processing IC.
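The buffer requirement described above can be illustrated with a short sketch. The following Python fragment (the function name and the generator-based interface are illustrative placeholders, not taken from the patent) shows why a 5×5 filter applied to line-sequential CCD output needs a buffer holding four previous lines before any output can be produced.

```python
import numpy as np

def filter_5x5_line_sequential(lines, kernel):
    """Apply a 5x5 filter to an image delivered one line at a time.

    Only when a fifth line has arrived can the filter output for the
    centre line be computed, which is why line-sequential processing
    forces a four-line buffer per filtering stage.
    """
    buf = []                                   # lines read so far (at most five)
    for line in lines:
        buf.append(np.asarray(line, dtype=np.float64))
        if len(buf) < 5:
            continue                           # not enough vertical context yet
        window = np.stack(buf)                 # shape: 5 x width
        width = window.shape[1]
        out = np.empty(width - 4)
        for x in range(2, width - 2):
            out[x - 2] = np.sum(window[:, x - 2:x + 3] * kernel)
        yield out                              # filtered values for the centre line
        buf.pop(0)                             # discard the oldest line
```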
  • the interpolation processing for the (R-G) signal and the (B-G) signal, the matrix processing through which a Y signal, a Cr signal and a Cb signal are generated using the (R-G) signal, the (B-G) signal and the G signal, and the LPF processing through which low frequency signals are extracted from the Y signal, the Cr signal and the Cb signal are performed in time sequence to format the data for JPEG compression and to suppress false colors and color moire.
  • a single primary color type CCD, two CCDs (one for G and the other for R/B) or three CCDs (one each for R, G and B) are employed.
  • since an RGB color filter is provided at the front surface of each pixel at the CCD, an R signal, a G signal or a B signal is missing at any given pixel.
  • interpolation is performed for pixels without a G signal component by using the G signals of pixels that have been actually obtained to generate G signals for all the pixels, and interpolation in regard to the (R-G) signal and the (B-G) signals is likewise performed.
  • the same principle applies when using two CCDs, as well.
  • the first type of data, i.e., irreversibly compressed data, are advantageous in that, since the code volume is relatively small, a large number of images can be stored in an external recording medium such as a memory card. In addition, they are recorded in a general-purpose format, which allows data decoded by using a common image viewing software program or the like to be printed or displayed directly.
  • the second type of data, i.e., raw data, are image data faithful to the output signal from the image-capturing device.
  • a data recording format of raw data facilitates external processing. Since raw data, which undergo very little irreversible gradation conversion or data compression, contain a large volume of information such as a large number of quantization bits, they have a wide dynamic range as image information. Thus, they provide an advantage in that they can be processed in an ideal manner without the tendency to lose fine gradation components. For this reason, highly advanced data processing and higher quality are required of this type of raw data.
  • the data format of raw data is particularly suited for printing and design applications.
  • an electronic camera requires a greater length of time for image processing compared to cameras using a silver halide film.
  • it is crucial to minimize the length of time required for image processing.
  • a raw data read/write operation performed via an image memory is always necessary. This tends to lead to a delay in the signal processing performed on irreversibly compressed data by a length of time corresponding to the length of time spent on the raw data read/write.
  • processing circuits that perform relatively complex processing are concentrated in a processing unit at a stage preceding the stage for gamma control operation. Since raw data with a large number of quantization bits are handled in this state at these processing circuits, the circuit structures of the processing circuits tend to be complex and there is also a problem of a greater length of time required for signal processing.
  • a first object of the present invention is to provide a digital camera that does not necessitate any increase in the capacity of the buffer memory and thus achieves a reduction in cost even when the number of pixels is great.
  • a second object of the present invention is to provide a storage medium that stores a program for implementing signal processing which achieves a reduction in the required capacity of the buffer memory even when processing image data for which image-capturing has been performed using an image-capturing device having a large number of pixels.
  • a third object of the present invention is to provide a digital camera that achieves a reduction in the length of time required for data formatting or processing through which false colors or moire is prevented even when the number of pixels is large.
  • a fourth object of the present invention is to provide a storage medium that stores a program for implementing signal processing in which data formatting and processing for preventing false colors and moire can be performed within a short period of time even when handling image data for which image-capturing has been performed using an image-capturing device with a great number of pixels.
  • a fifth object of the present invention is to provide a digital camera that suppresses the color-cast phenomenon occurring due to an error manifesting following the white balance adjustment performed by an external sensor to a satisfactory degree.
  • a sixth object of the present invention is to provide a storage medium that stores a program for implementing signal processing through which the color-cast phenomenon occurring due to an error manifesting after the white balance adjustment performed by an external sensor can be suppressed to a satisfactory degree.
  • the digital camera comprises an image-capturing device that captures a subject image having passed through a taking lens and outputs image data, a recording processing circuit that performs recording processing on the image data and an image processing circuit that performs a pre-treatment (including gamma correction and white balance correction) on the image data corresponding to N lines × M rows output by the image-capturing device, in units of individual lines in line sequence, and then performs format processing (including interpolation processing, LPF processing, BPF processing and color difference signal calculation processing) that corresponds to the type of recording performed at the recording processing circuit on the image data having undergone the pre-treatment, in units of individual blocks corresponding to n lines × m rows (N>n, M>m) in block sequence.
  • a pre-treatment includes gamma correction and white balance correction
  • format processing includes interpolation processing, LPF processing, BPF processing and color difference signal calculation processing
  • the image processing performed in this digital camera may be implemented on a computer.
  • the program stored in a storage medium for this purpose implements signal processing including format processing through which the image data of an image captured at an image-capturing device are formatted for recording, various types of pre-treatment that are implemented prior to the format processing and recording processing through which the image data having undergone format processing are recorded, with signal processing during the pre-treatment performed in units of individual lines in line sequence on image data corresponding to N lines × M rows and signal processing during the format processing performed in units of individual blocks corresponding to n lines × m rows (N>n, M>m) in block sequence on the image data having undergone the pre-treatment.
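As a rough illustration of the processing order described in the preceding paragraphs, the following Python sketch applies a line-sequential pre-treatment followed by block-sequential format processing. The function and parameter names (process_image, pretreat_line, format_block) are illustrative placeholders, not terms from the patent.

```python
import numpy as np

def process_image(raw, pretreat_line, format_block, n=20, m=20):
    """Line-sequence pre-treatment, then block-sequence format processing.

    `raw` is the N x M sensor output; `pretreat_line` stands for the
    per-line operations (white balance, gamma, ...), and `format_block`
    stands for the per-block operations (interpolation, LPF, colour
    difference calculation, ...).
    """
    # Pre-treatment in units of individual lines, in line sequence.
    pretreated = np.stack([pretreat_line(row) for row in raw])
    # Format processing in units of n x m blocks, in block sequence.
    blocks = []
    for top in range(0, pretreated.shape[0], n):
        for left in range(0, pretreated.shape[1], m):
            blocks.append(format_block(pretreated[top:top + n, left:left + m]))
    return blocks
```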
  • the digital camera according to the present invention may comprise an image-capturing device that captures a subject image having passed through a taking lens and outputs image data, a recording processing circuit that performs recording processing on the image data and an image processing circuit that, with the image data output by the image-capturing device input as data corresponding to n lines × m rows, calculates a color difference signal based upon the image data thus input, performs interpolation processing and low pass filtering processing at once on the color difference signal using filter coefficients for interpolation/low pass filtering and then generates a formatted signal by performing matrix processing corresponding to the type of recording implemented at the recording processing circuit.
  • the image processing performed in this digital camera may be implemented on a computer.
  • the program stored in a storage medium for this purpose executes format processing that formats the image data of an image captured at an image-capturing device for recording, in which color difference signals corresponding to n lines × m rows are calculated based upon the image data that are input, interpolation processing and low pass filtering processing are executed at once on the color difference signals corresponding to n lines × m rows using filter coefficients for interpolation/low pass filtering and then a formatted signal is generated through matrix processing, and also executes recording processing through which the image data having undergone the format processing are recorded.
  • in the digital camera, which may comprise an image-capturing device that captures a subject image having passed through a taking lens and outputs image data, an image processing circuit that performs image processing including data format processing appropriate for data compression on the image data output by the image-capturing device and a compression circuit that compresses image data output by the image processing circuit, the image processing circuit engages in median processing on image data corresponding to n×m pixel areas to execute the format processing.
  • the image processing performed in this digital camera may be implemented on a computer.
  • the program stored in a storage medium for this purpose implements signal processing including format processing through which the image data of an image captured at the image-capturing device are formatted for compression, various types of signal processing that are implemented prior to the format processing and compression processing through which image data having undergone the format processing are compressed, with median processing performed on image data corresponding to n×m pixel areas during the format processing.
  • the digital camera according to the present invention may comprise an image-capturing device that captures a subject image that passes through a taking lens and outputs image data and an image processing circuit that executes image processing on the image data output by the image-capturing device, in which median processing is implemented on (n-i)×(m-j) sets of image data extracted from image data corresponding to an n×m pixel area.
  • the image processing performed in this digital camera may be implemented on a computer.
  • the program stored in a storage medium for this purpose implements a specific type of image processing on the image data of an image captured at the image-capturing device, in which median processing is executed on (n-i)×(m-j) sets of image data extracted from image data corresponding to an n×m pixel area.
  • the digital camera according to the present invention may comprise an image-capturing device that captures a subject image that passes through a taking lens and outputs image data, a white balance adjustment circuit that performs white balance adjustment on the image data output by the image-capturing device, a white balance fine adjustment coefficient calculation circuit that calculates white balance fine adjustment coefficients based upon image data having undergone the white balance adjustment output by the white balance adjustment circuit, and a white balance fine adjustment circuit that performs white balance fine adjustment using the white balance fine adjustment coefficients on image data having undergone the white balance adjustment output by the white balance adjustment circuit.
  • the image processing implemented in this digital camera may be executed by a computer.
  • the program stored in a storage medium for this purpose implements white balance adjustment processing in which white balance adjustment is performed on the image data of an image-captured at an image-capturing device, white balance fine adjustment coefficient calculation processing in which white balance fine adjustment coefficients are calculated using image data having undergone the white balance adjustment through the white balance adjustment processing and white balance fine adjustment processing in which white balance fine adjustment is performed using the white balance fine adjustment coefficients on the image data having undergone the white balance adjustment.
  • the white balance fine adjustment coefficients are calculated based upon the average values calculated for the R, B and G signals in the image data having undergone the white balance adjustment. Alternatively, they may be calculated based upon the histograms of the brightness levels calculated for the R, B and G signals of the image data having undergone the white balance adjustment.
  • the digital camera according to the present invention may comprise an image-capturing device that captures a subject image that passes through a taking lens and outputs image data, a white balance adjustment circuit that performs white balance adjustment on the image data output by the image-capturing device, an image area selection apparatus that selects one image area among a preset plurality of image areas, a white balance fine adjustment coefficient calculation circuit that calculates white balance fine adjustment coefficients using image data within an area set in relation with the one image area selected by the image area selection apparatus, among the image data having undergone the white balance adjustment output by the white balance adjustment circuit, and a white balance fine adjustment circuit that performs white balance fine adjustment using the white balance fine adjustment coefficients calculated at the white balance fine adjustment coefficient calculation circuit.
  • the white balance fine adjustment coefficients are calculated by selecting image data in an image area related to the focal point detection area selected by the focal point detection area selection apparatus.
  • the image processing performed in the digital camera may be executed on a computer.
  • the program stored in a storage medium for this purpose implements white balance adjustment processing in which white balance adjustment is performed on an image captured at the image-capturing device, image area selection processing in which one of a preset plurality of image areas is selected, white balance fine adjustment coefficient calculation processing in which white balance fine adjustment coefficients are calculated using image data within an area set in relation to the one image area selected through the image area selection processing from image data having undergone white balance adjustment through the white balance adjustment processing and white balance fine adjustment processing in which white balance fine adjustment is performed on the image data having undergone the white balance adjustment using the white balance fine adjustment coefficients.
  • Another object of the present invention is to provide a carrier wave encoded to transmit a control program for white balance adjustment on image data.
  • the control program includes instructions for: white balance adjustment processing in which white balance adjustment is performed on image data of an image captured at an image-capturing device; image area selection processing in which an image area is selected from a preset plurality of image areas; white balance fine adjustment coefficient calculation processing in which white balance fine adjustment coefficients are calculated using image data within an area set in relation to the image area selected through the image area selection processing; and white balance fine adjustment processing in which white balance fine adjustment is performed on image data having undergone white balance adjustment using the white balance fine adjustment coefficients.
  • Another object of the present invention is to provide an electronic camera that is capable of reducing the length of signal processing time while allowing selection to be made between recording irreversible image data and recording raw data.
  • the electronic camera comprises an image-capturing device, a first signal processing unit that performs, at least, A/D conversion on an image signal generated by the image-capturing device to convert the signal to digital image data, a second signal processing unit that performs irreversible signal processing on the image data resulting from a conversion performed at the first signal processing unit, an image memory capable of temporarily storing the image data and an operation control unit that dynamically selects signal paths between the two signal processing units in correspondence to the operating mode set to (1) or (2) below.
  • (1) a fast mode in which a sequence of signal processing is continuously executed by providing an output from the first signal processing unit to the second signal processing unit and causing the two signal processing units to engage in synchronous operation.
  • (2) an original image mode in which an output from the first signal processing unit is stored in the image memory, image data read out from the image memory are provided to the second signal processing unit and the two signal processing units are each made to operate with their own timing.
  • the “storage area in the image memory provided to store the output from the first signal processing unit in the original image mode” can be utilized by the operation control unit as a buffer area where image data undergoing processing are temporarily saved in the fast mode.
  • the operation control unit accepts an external operation indicating whether or not raw data, i.e., image data before undergoing irreversible signal processing at the second signal processing unit, are required, selects and executes the fast mode if the external operation indicates that no raw data are required, and selects and executes the original image mode if the external operation indicates that raw data are required to output the raw data present in the image memory to the outside or to store the raw data present in the image memory at a recording medium.
  • it is preferable that the operation clock at the second signal processing unit be set faster than the operation clock at the first signal processing unit by the operation control unit.
  • the second signal processing unit may be a unit that engages in, at least, either “irreversible gradation conversion” or “irreversible pixel thinning.”
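As a rough sketch of the two operating modes just described, the following Python fragment shows how an operation control unit might route data between the two signal processing units. All names (capture, first_unit, second_unit, image_memory) and the object interfaces are illustrative assumptions, not the patent's implementation.

```python
def capture(first_unit, second_unit, image_memory, raw_requested):
    """Select the signal path according to whether raw data are required.

    Fast mode: the first unit's output is fed straight to the second
    unit and the two run synchronously.  Original image mode: the
    A/D-converted data are parked in the image memory first, so the raw
    data survive, and the second unit reads them back with its own timing.
    """
    digital = first_unit.convert()               # at least A/D conversion
    if not raw_requested:
        # (1) Fast mode: continuous, synchronous pipeline.
        return second_unit.process(digital), None
    # (2) Original image mode: keep the raw data available.
    image_memory.write(digital)                  # raw data preserved here
    raw = image_memory.read()
    return second_unit.process(raw), raw         # irreversible result + raw data
```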
  • FIG. 1 illustrates the structure of an embodiment of a single lens reflex electronic still camera
  • FIG. 2 is a block diagram of an embodiment of the signal processing system in the single lens reflex electronic still camera
  • FIG. 3 is a block diagram illustrating the circuit that performs line processing in the signal processing system shown in FIG. 2 ;
  • FIG. 4 is a block diagram illustrating the circuit that performs block processing in the signal processing system shown in FIG. 2 ;
  • FIG. 5 illustrates the color filter array
  • FIG. 6 shows an example of focal point detection area positional arrangement
  • FIG. 7 illustrates the focal point detection device
  • FIGS. 8A-8C illustrate histograms of R, G and B brightness
  • FIG. 9 illustrates the details of processing performed at the G interpolation circuit
  • FIG. 10 illustrates the details of the processing performed at the band-pass filter
  • FIG. 11 illustrates the details of the processing performed at the low pass filter
  • FIG. 12 illustrates the details of the processing performed at the color difference signal generation circuit
  • FIG. 13 illustrates an example of data processed at the interpolation/LPF circuit
  • FIG. 14 illustrates the details of the processing performed at the interpolation/LPF circuit
  • FIG. 15 illustrates the details of the processing performed at the median circuit
  • FIG. 16 is a flowchart of a program started up by the half-press switch
  • FIG. 17 is a block diagram of the JPEG format processing achieved through line processing instead of block processing.
  • FIG. 18 is a block diagram of a configuration that allows image processing to be performed by taking raw image data into a personal computer.
  • FIG. 19 is a block diagram illustrating the structure of the electronic camera in another embodiment of the present invention.
  • FIG. 20 illustrates the signal path through which signals travel when the electronic camera in FIG. 19 is set in the fast mode
  • FIG. 21 illustrates the signal path through which signals travel when the electronic camera in FIG. 19 is set in the original image mode.
  • the single lens reflex electronic still camera in this embodiment is provided with a camera main body 70 , a viewfinder device 80 which is attached to or detached from the camera main body 70 and an interchangeable lens 90 internally provided with a taking lens 91 and an aperture 92 , that is attached to or detached from the camera main body 70 .
  • Subject light passes through the interchangeable lens 90 to enter the camera main body 70 and is guided to the viewfinder device 80 by a quick return mirror 71 which is at the position indicated by the dotted line before a release to form an image at a viewfinder mat 81 and also to form an image at a focal point detection device 36 .
  • the subject image is further guided to an ocular lens 83 by a pentaprism 82 .
  • the quick return mirror 71 rotates to the position indicated by the solid line and the subject light forms an image on an image-capturing device 73 via a shutter 72.
  • Prior to the release, the subject image enters a white balance sensor 86 through a prism 84 and an image-forming lens 85 so that the color temperature of the subject image is detected.
  • FIG. 2 is a circuit block diagram of the embodiment.
  • a half-press signal and a full-press signal from a half-press switch 22 and a full-press switch 23 respectively both interlocking with a release button are input to a CPU 21 .
  • the focal point detection device 36 detects the focal adjustment state at the taking lens 91 in response to a command issued by the CPU 21 and drives the taking lens 91 to the in-focus position so that the subject light entering the interchangeable lens 90 forms an image on the image-capturing device 73.
  • the focal point detection device 36 detects the state of focal adjustment for a focal point detection area at the center of the photographic image plane and each of the four focal point detection areas set to the left, to the right, above and below the central focal point detection area, and drives the taking lens 91 to the in-focus position based upon the focal adjustment status detected from the focal point detection area that has been selected based upon a preset algorithm.
  • the drive of a CCD 26 of the image-capturing device 73 is controlled via a timing generator 24 and a driver 25 .
  • the timing generator 24 controls the operating timing of an analog processing circuit 27 and an A/D conversion circuit 28 .
  • a white balance detection processing circuit 35 starts driving in response to a signal provided by the CPU 21 .
  • the quick return mirror 71 rotates upward, the subject light from the interchangeable lens 90 forms an image on the photosensitive surface of the CCD 26 and the signal charge that corresponds to the brightness of the subject image is stored at the CCD 26 .
  • the signal charge thus stored at the CCD 26 is caused to be swept out by the driver 25 and is input to the analog signal processing circuit 27 that includes an AGC circuit and a CDS circuit.
  • analog processing such as gain control and noise removal is performed on an analog image signal at the analog signal processing circuit 27
  • the signal is converted to a digital signal at the A/D conversion circuit 28 .
  • the signal achieved through the digital conversion is supplied to an image processing circuit 29 which may be constituted as, for instance, an ASIC, where the signal undergoes an image pre-treatment including white balance adjustment, profile compensation and gamma control.
  • the white balance detection processing circuit 35 includes a white balance sensor 35 A (the white balance sensor 86 in FIG. 1 ) constituted as a color temperature sensor, an A/D conversion circuit 35 B which converts the analog signal output by the white balance sensor 35 A to a digital signal and a CPU 35 C that generates a white balance adjustment signal based upon a digital color temperature signal.
  • the white balance sensor 35 A which may be constituted of, for instance, a plurality of photoelectric conversion devices each demonstrating sensitivity to red color R, blue color B or green color G, receives the optical image of the entire photographic field.
  • the photosensitive area of the CCD may be divided into 16 areas with a plurality of elements demonstrating sensitivity to R, G and B arrayed in each area.
  • the CPU 35 C calculates the R gain and the B gain based upon the outputs from the plurality of photoelectric conversion devices. These gains are transferred to a specific register at the CPU 21 and are stored there.
  • the image data that have undergone the image pre-treatment further undergo format processing (image post-treatment) for JPEG compression and then the image data are temporarily stored in a buffer memory 30 .
  • the image data stored in the buffer memory 30 are processed into display image data at a display image generation circuit 31 and are displayed on an external monitor 32 such as an LCD as the results of photographing.
  • the image data stored in the buffer memory 30 undergo data compression at a specific rate through the JPEG method at a compression circuit 33 and are recorded in a storage medium (PC card) 34 such as a flash memory.
  • PC card storage medium
  • FIGS. 3 and 4 are block diagrams illustrating the details of the image processing circuit 29 .
  • FIG. 3 shows a line processing circuit 100 that performs signal processing on the image data provided by the CCD 26 in units of individual lines, which undertakes the image pre-treatment described above.
  • FIG. 4 illustrates a block processing circuit 200 that performs signal processing on image data having undergone the signal processing at the line processing circuit 100 , in units of blocks corresponding to 20×20 pixel areas, 16×16 pixel areas, 12×12 pixel areas or 8×8 pixel areas, which undertakes the image post-treatment described above.
  • although the image processing circuit 29 is actually realized in software by employing a plurality of processors, it is explained as hardware in this specification to facilitate the explanation.
  • the line processing circuit 100 in FIG. 3 performs various types of signal processing that are to be detailed later on 12-bit R, G and B signals output by the A/D conversion circuit 28 and is provided with a defect correction circuit 101 , a digital clamp circuit 102 , a gain circuit 103 , a white balance circuit 104 , a black level circuit 105 , a gamma correction circuit 106 and an average value/histogram calculation circuit 107 .
  • the defect correction circuit 101 corrects the data output from a pixel with a defect (specified in advance with its address set in a register) in the output of the CCD 26 in units of individual lines in pixel sequence.
  • the digital clamp circuit 102 subtracts the weighted average of a plurality of sets of pixel data that are used as so-called optical black from each set of pixel data in a given line in the output from the CCD 26 in units of individual lines in pixel sequence.
  • the gain circuit 103 uniformly applies a specific gain to each of the R, G and B signals output by the CCD 26 in units of individual lines in pixel sequence, implements inconsistency correction with regard to the sensitivity of the CCD 26 for the G signal and also implements inconsistency correction with regard to the sensitivity ratio of the CCD 26 for the R and B signals.
  • the white balance circuit 104 multiplies the R signal and the B signal in the output from the CCD 26 by the R gain and the B gain which constitute the white balance adjustment coefficients set in advance and stored in the register at the CPU 21 as explained earlier, in units of individual lines in pixel sequence. According to the present invention, as is to be explained later, gain for white balance fine adjustment is calculated based upon the image data corrected at the white balance circuit 104 to perform fine adjustment of the white balance.
  • the black level circuit 105 adds a value set in advance and stored in the register at the CPU 21 to the R, G and B signals in the output from the CCD 26 in units of individual lines in pixel sequence.
  • the gamma correction circuit 106 performs gamma correction on the output from the CCD 26 in units of individual lines in pixel sequence by using a gradation look-up table. It is to be noted that through the gamma correction, the 12-bit R, G and B signals are each converted to 8-bit data.
  • the average value/histogram calculation circuit 107 extracts image data corresponding to, for instance, a 512×512 pixel area centered on the area selected as the focal point detection area, from among the image data corresponding to the entire area that have undergone the gamma correction, and calculates a gain RF-gain for white balance fine adjustment of the R signal and a gain BF-gain for white balance fine adjustment of the B signal using, for instance, the following formulae (1) and (2).
  • the gains RF-gain and BF-gain are stored in the register. For instance, if color filters are provided on the 512×512 pixel area as illustrated in FIG. 5, the average values of the R signal, the G signal and the B signal may be calculated using formulae (3)-(5), to calculate the gains RF-gain and BF-gain for white balance fine adjustment using the ratio of the G signal average value Gave and the R signal average value Rave and the ratio of the G signal average value Gave and the B signal average value Bave, as indicated in formulae (1) and (2).
  • Gave = Gsum/(number of G pixels) (4)
  • the gradation average values of the R signals, the G signals and the B signals in the image data are determined, which has proved through experience to improve the results of white balance adjustment (the overall white balance).
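The gain calculation described above can be sketched as follows. The direction of the ratios (Gave/Rave and Gave/Bave) and the names fine_wb_gains, block and cfa are assumptions made for illustration; the text only states that the gains are derived from the ratios of the per-colour averages in formulae (1)-(5).

```python
import numpy as np

def fine_wb_gains(block, cfa):
    """Per-colour averages over the selected 512 x 512 area, then the
    fine adjustment gains as ratios of those averages.

    `block` holds the white-balance-adjusted pixel values and `cfa`
    holds the colour letter ('R', 'G' or 'B') of each pixel in the
    Bayer array; both arrays have the same shape.
    """
    block = np.asarray(block, dtype=np.float64)
    cfa = np.asarray(cfa)
    r_ave = block[cfa == 'R'].mean()      # formula (3): Rsum / number of R pixels
    g_ave = block[cfa == 'G'].mean()      # formula (4): Gsum / number of G pixels
    b_ave = block[cfa == 'B'].mean()      # formula (5): Bsum / number of B pixels
    rf_gain = g_ave / r_ave               # formula (1), ratio direction assumed
    bf_gain = g_ave / b_ave               # formula (2), ratio direction assumed
    return rf_gain, bf_gain
```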
  • FIG. 6 illustrates an example of a positional arrangement of focal point detection areas.
  • an area AC located at the center of the image-capturing image plane, an area AR to the right viewed by the photographer, an area AL to the left, an area AU on the upper side and an area AD on the lower side are provided.
  • One of these areas is selected based upon a preset algorithm and image data corresponding to a 512×512 pixel area with the selected area located at the center are extracted.
  • the gain RF-gain for white balance fine adjustment of the R signal and the gain BF-gain for white balance fine adjustment of the B signal are calculated as described earlier.
  • the focal point detection device 36 comprises an infrared light blocking filter 700 , a visual field mask 900 , a field lens 300 , an opening mask 400 , image reforming lenses 501 and 502 , an image sensor 310 and the like.
  • An area 800 is the exit pupil of a taking lens 91 (see FIG. 1 ).
  • in areas 801 and 802, the images achieved by reverse-projecting the opening portions 401 and 402 bored in the opening mask 400 onto the area 800 by using the field lens 300 are present. It is to be noted that in FIG. 7, the infrared light blocking filter 700 may be located either on the right side or the left side of the field mask 900.
  • the light fluxes entering via the areas 801 and 802 form a focal point on an image-capturing device equalizing surface 600 , then travel through the infrared light blocking filter 700 , the field mask 900 , the field lens 300 , the opening portions 401 and 402 and the image reforming lenses 501 and 502 and form an image on image sensor arrays 310 a and 310 b.
  • the pair of subject images formed on the image sensor arrays 310 a and 310 b move close to each other in a so-called front-focus state, in which a sharp image of the subject is formed by the taking lens 91 further frontward (toward the subject) relative to the image-capturing device equalizing surface 600 , whereas they move further away from each other in a so-called rear-focus state, in which a sharp image of the subject is formed further rearward relative to the image-capturing device equalizing surface 600 .
  • the subject images formed on the image sensor arrays 310 a and 310 b are away from each other by a specific distance, a sharp image of the subject is located on the image-capturing device equalizing surface 600 .
  • the focal adjustment status at the taking lens 91 can be calculated by converting the pair of subject images to electrical signals through photoelectric conversion performed at the image sensor arrays 310 a and 310 b and determining the relative distance between the pair of subject images through arithmetic processing on the signals.
  • This focal adjustment status is calculated as the quantity of misalignment that indicates the direction in which and the distance over which the position of a sharp image formed by the taking lens 91 is located relative to the image-capturing device equalizing surface 600 .
  • the area in which the images on the image sensor arrays 310 a and 310 b which are projected in reverse by the image reforming lenses 501 and 502 overlap each other in the vicinity of the image-capturing device equalizing surface 600 corresponds to a focal point detection area. Through this method, the focal point is detected for each of the five areas within the photographic image plane.
  • the focal point detection device 36 makes a decision as to which area is to be selected for acquisition of focal point information during the actual image-capturing operation after focal points have been detected for the individual areas as explained above. For instance, the area in which the subject closest to the camera is captured may be selected from among the plurality of areas. Then, the focal point detection data are utilized in a focus matching operation while image-capturing is in progress. In addition, the image data corresponding to a 512×512 pixel area with the selected focal point detection area located at the center are extracted from the output signal from the white balance sensor 35 A. Based upon the image data thus extracted, the gain RF-gain for white balance fine adjustment of the R signal and the gain BF-gain for white balance fine adjustment of the B signal are calculated.
  • the gains RF-gain and BF-gain for white balance fine adjustment may be calculated as described below, based upon histograms of the brightness levels of the R, G and B signals calculated at the average value/histogram calculation circuit 107 .
  • the average value/histogram calculation circuit 107 calculates histograms of the brightness levels of the R, G and B signals. In other words, it obtains histograms as illustrated in FIGS. 8A-8C by calculating the quantities corresponding to individual brightness levels for the various colors.
  • a 95% level value is the brightness level at which the cumulative number of dots (pixels) reaches 95% of the entire number of dots for that signal, e.g., 95% of the entire number of G-signal dots.
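The histogram-based alternative can be sketched as below. The text above does not spell out the exact formulae, so using the 95% level values in the same ratio form as formulae (1) and (2) is an assumption, and the function and parameter names are illustrative.

```python
import numpy as np

def fine_wb_gains_from_histograms(r_vals, g_vals, b_vals):
    """Fine adjustment gains derived from the 95% brightness levels.

    For each colour, the 95% level is the brightness below which 95% of
    that colour's pixels fall (cf. FIGS. 8A-8C); the gains are then
    taken here, by assumption, as ratios of those levels.
    """
    r95 = np.percentile(np.asarray(r_vals, dtype=np.float64), 95)
    g95 = np.percentile(np.asarray(g_vals, dtype=np.float64), 95)
    b95 = np.percentile(np.asarray(b_vals, dtype=np.float64), 95)
    rf_gain = g95 / r95        # assumed analogue of formula (1)
    bf_gain = g95 / b95        # assumed analogue of formula (2)
    return rf_gain, bf_gain
```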
  • the block processing circuit 200 in FIG. 4 , which is constituted of a white balance fine adjustment circuit 210 and an interpolation/profile processing circuit 220 , engages in various types of signal processing in units of n×m sets of pixel data, i.e., in blocks.
  • the white balance fine adjustment circuit 210 performs white balance fine adjustment on R signals and B signals that have undergone the processing performed at the gamma correction circuit 106 and are stored in the buffer memory 30 , by multiplying the R and B signals in each 20×20 pixel area block with the gains RF-gain and BF-gain for white balance fine adjustment calculated at the average value/histogram calculation circuit 107 .
  • the interpolation/profile processing circuit 220 is provided with a G interpolation circuit 221 , a band pass filter (BPF) 222 , a clip circuit 223 , a gain circuit 224 , a low pass filter (LPF) 225 , a color difference signal generation circuit 226 , an interpolation/low pass filter (LPF) circuit 228 , a matrix circuit 229 , an adder 230 and a median circuit 232 .
  • the interpolation/profile processing circuit 220 performs format processing for JPEG data compression for individual data blocks corresponding to 20×20 pixel areas in the image data having undergone white balance fine adjustment to generate Y signals corresponding to 8×8 pixel areas and Cb signals and Cr signals each corresponding to 8×8 pixel areas.
  • a brightness signal Y contains a brightness signal Y 1 indicating the low frequency component of the G signal and a profile extraction signal Y 2 corresponding to the high frequency component of the G signal, as will be explained later.
  • Block signals corresponding to 20×20 pixel areas output from the white balance fine adjustment circuit 210 are input to the G interpolation circuit 221 , where the G component of each pixel corresponding to an R signal or a B signal is calculated through an interpolation operation for the data corresponding to the central 16×16 pixel area.
  • the G component at the vacant lattice point (the pixel at line 3 , row 3 , where a B signal is obtained) at the center of data D 51 representing a 5×5 pixel area data block (line 1 , row 1 through line 5 , row 5 ) is calculated for input data D 20 corresponding to 20×20 pixel areas.
  • This value is used as a substitute for the G component of the pixel (encircled B) at line 3 , row 3 in output data D 16 corresponding to 16 ⁇ 16 pixel areas.
  • the G component at the vacant lattice point (the pixel at line 4 , row 4 , where an R signal is obtained) at the center of data D 52 representing a 5×5 pixel area block (line 2 , row 2 through line 6 , row 6 ) is likewise calculated for the input data D 20 corresponding to the 20×20 pixel areas, and this value is used as the G component of the pixel (encircled R) at line 4 , row 4 in the output data D 16 corresponding to 16×16 pixel areas.
  • G interpolation processing is implemented for all the vacant lattice points in the 16×16 pixel area so that the output data D 16 are obtained.
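A compact sketch of this G interpolation step is given below. The passage does not fix the interpolation weights, so the simple average of the four nearest G neighbours is an illustrative choice, and the names g_interpolate, block20 and cfa20 are placeholders.

```python
import numpy as np

def g_interpolate(block20, cfa20):
    """Fill in G at every vacant lattice point of the central 16 x 16 area.

    `block20` is a 20 x 20 block of pixel values and `cfa20` gives each
    pixel's colour ('R', 'G' or 'B'); each output pixel is derived from
    the 5 x 5 window centred on it, as in FIG. 9's description.
    """
    block20 = np.asarray(block20, dtype=np.float64)
    out16 = np.empty((16, 16))
    for y in range(16):
        for x in range(16):
            cy, cx = y + 2, x + 2                 # centre of the 5 x 5 window
            if cfa20[cy][cx] == 'G':
                out16[y, x] = block20[cy, cx]     # real G sample, kept as-is
            else:
                neigh = [(cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)]
                g_vals = [block20[j, i] for j, i in neigh if cfa20[j][i] == 'G']
                out16[y, x] = np.mean(g_vals)     # interpolated G (illustrative weights)
    return out16
```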
  • the band pass filter 222 extracts the intermediate frequency component (a frequency component that is high enough to allow extraction of the subject profile and is referred to as the high frequency component for convenience) in the G signal in the 12×12 pixel area block output by the G interpolation circuit 221 .
  • BPF output data are obtained by multiplying data corresponding to a 5×5 pixel area D 5 (line 5 , row 5 through line 9 , row 9 ) with band pass filter coefficients in input data D 12 corresponding to the 12×12 pixel areas, and the value of the BPF output data is used as a substitute for data (bold letter G) at line 7 , row 7 in the output data D 8 corresponding to an 8×8 pixel area block.
  • all the pixel data in the 8×8 pixel area block are converted to G data that have undergone BPF, to generate output data D 8 .
  • the clip circuit 223 clips and cuts each set of data D 8 corresponding to an 8×8 pixel area block output by the band pass filter 222 at a preset level.
  • the gain circuit 224 multiplies the output from the clip circuit 223 with a preset gain.
  • the low pass filter 225 extracts the low frequency component in the G signals in the 12×12 pixel area block output by the G interpolation circuit 221 .
  • LPF output data are obtained by multiplying the 5×5 pixel area data D 5 (line 5 , row 5 through line 9 , row 9 ) in the input data D 12 corresponding to the 12×12 pixel areas with low pass filter coefficients, and the value of the LPF output data is substituted for data at line 7 , row 7 (hatched area) in the output data D 8 corresponding to the 8×8 pixel area block.
  • all the pixel data corresponding to the 8×8 pixel area block are replaced with G data that have undergone LPF, to generate output data D 8 .
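Both the BPF and the LPF steps amount to sweeping a 5×5 coefficient set over every position where it fits inside the 12×12 G block, which yields an 8×8 output. The sketch below assumes generic coefficient values, since the actual band-pass and low-pass coefficients are not reproduced in this passage; the name filter_block is a placeholder.

```python
import numpy as np

def filter_block(block12, coeff5):
    """Apply a 5 x 5 filter (BPF or LPF coefficients) to a 12 x 12 block.

    The 5 x 5 window is placed at every position where it fits inside
    the 12 x 12 block, so the valid output is exactly 8 x 8.
    """
    block12 = np.asarray(block12, dtype=np.float64)
    coeff5 = np.asarray(coeff5, dtype=np.float64)
    out8 = np.empty((8, 8))
    for y in range(8):
        for x in range(8):
            out8[y, x] = np.sum(block12[y:y + 5, x:x + 5] * coeff5)
    return out8
```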
  • the color difference signal generation circuit 226 generates intermediate data D 16 - 3 that contain (B-G) signals and (R-G) signals based upon RGB signal input data D 16 - 1 corresponding to a 16×16 pixel area block, which are the output from the white balance fine adjustment circuit 210 and G signal input data D 16 - 2 corresponding to the 16×16 pixel area block, which are the output from the G interpolation circuit 221 .
  • it separates the intermediate data D 16 - 3 into (B-G) color difference signal output data D 16 - 4 and (R-G) color difference signal output data D 16 - 5 .
  • 8-bit (B-G) signals and (R-G) signals corresponding to 16×16 pixel areas are input to the interpolation/LPF circuit 228 from the color difference signal generation circuit 226 . The interpolation/LPF circuit performs an interpolation calculation to obtain (B-G) signals and (R-G) signals in units of 5×5 pixel area blocks and, at the same time, performs low pass filtering processing to extract a low band signal, and it outputs the resulting (B-G) signals and (R-G) signals corresponding to the 12×12 pixel areas to the Cb and Cr matrix portions of the matrix circuit 229 . In addition, it outputs (B-G) signals and (R-G) signals corresponding to 8×8 pixel areas to the Y matrix portion of the matrix circuit 229 .
  • kc1 through kc9 and Ktr-g each represents a coefficient.
  • the following restriction is imposed in regard to the filter coefficient.
  • the explanation here is given in one-dimensional terms for purposes of simplification. Let us now consider a situation in which an actual sample point is present in N cycles among interpolated sample points, e.g., a, a, b, b, a, a, b, b (a represents an actual sample point and b represents a sample point to be interpolated).
  • an actual sample point is present in four cycles.
  • the sample points are to be interpolated using an odd-degree symmetrical digital filter of degree (2n+1), where (2n+1) is larger than N.
  • since the sample points after the interpolation, too, must be uniform if the actual sample points are uniform, the following restrictions in regard to the filter coefficients are applied.
  • i represents an integer equal to or greater than 0 for which the filter coefficient index remains equal to or less than 2n+1, and k represents an integer equal to or greater than 0 and smaller than n.
  • for the (R-G) signals in input data D 16 corresponding to 16×16 pixel areas, (R-G) data D 5 corresponding to a 5×5 pixel area block (line 3 , row 3 through line 7 , row 7 ) are multiplied with the interpolation/LPF filter coefficients to calculate (R-G) data representing the central position (at line 5 , row 5 ), and these (R-G) data are used as a substitute for data in output data D 12 corresponding to a 12×12 pixel area block.
  • the interpolation/LPF processing is performed on all the pixel data corresponding to the 12×12 pixel area block as far as the (R-G) signals are concerned so that output data D 12 are obtained. Similar processing is performed for the (B-G) signals, as well, to generate output data corresponding to the 12×12 pixel area block.
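The combined interpolation/low-pass step can be sketched as a single 5×5 pass over each 16×16 colour difference block, which produces the 12×12 block handed to the Cb/Cr matrix portions. The coefficient values are not given in this passage, so coeff5 below is a placeholder, as is the function name.

```python
import numpy as np

def interp_lpf(cd16, coeff5):
    """One pass of interpolation plus low-pass filtering on a 16 x 16
    colour difference block ((R-G) or (B-G)), yielding a 12 x 12 block.

    Doing both operations with a single coefficient set is the point of
    the interpolation/LPF circuit 228 described above.
    """
    cd16 = np.asarray(cd16, dtype=np.float64)
    coeff5 = np.asarray(coeff5, dtype=np.float64)
    out12 = np.empty((12, 12))
    for y in range(12):
        for x in range(12):
            out12[y, x] = np.sum(cd16[y:y + 5, x:x + 5] * coeff5)
    return out12
```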
  • the matrix circuit 229 is constituted of the Y matrix portion, the Cb matrix portion and the Cr matrix portion.
  • the Y matrix portion, to which (B-G) signals and (R-G) signals corresponding to the 8×8 pixel area block are input from the interpolation/LPF circuit 228 and G signals corresponding to the 8×8 pixel area block are input from the low pass filter 225 , generates brightness signals Y 1 each having a low frequency component corresponding to an 8×8 pixel area through the following formula (7).
  • Y 1( i,j ) = [ Mkg × G( i,j ) + Mkr 1 × (R-G)( i,j ) + Mkb 1 × (B-G)( i,j ) ] (7)
  • Mkg, Mkr1 and Mkb1 each represents a matrix coefficient.
  • the Cb matrix portion and the Cr matrix portion, to which (B-G) signals and (R-G) signals corresponding to the 12×12 pixel area block are respectively input from the interpolation/LPF circuit 228 , generate Cb signals and Cr signals corresponding to the 12×12 pixel area block through the following formulae (8) and (9).
  • Mkr2, Mkr3, Mkb2 and Mkb3 each represents a matrix coefficient.
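Formula (7) and the Cb/Cr generation can be sketched as follows. Formulae (8) and (9) themselves are not reproduced in the passage above, so the linear form used for Cb and Cr here (each colour difference block scaled by its own pair of coefficients) is an assumption consistent with the named coefficients Mkr2, Mkr3, Mkb2 and Mkb3, not the patent's exact expression.

```python
import numpy as np

def y_matrix(g8, rg8, bg8, Mkg, Mkr1, Mkb1):
    """Formula (7): low-frequency brightness Y1 from 8 x 8 G, (R-G), (B-G) blocks."""
    return Mkg * np.asarray(g8) + Mkr1 * np.asarray(rg8) + Mkb1 * np.asarray(bg8)

def cbcr_matrix(rg12, bg12, Mkr2, Mkr3, Mkb2, Mkb3):
    """Assumed form of formulae (8) and (9) for the 12 x 12 colour difference blocks."""
    rg12 = np.asarray(rg12, dtype=np.float64)
    bg12 = np.asarray(bg12, dtype=np.float64)
    cb = Mkb2 * bg12 + Mkb3 * rg12      # formula (8), form assumed
    cr = Mkr2 * rg12 + Mkr3 * bg12      # formula (9), form assumed
    return cb, cr
```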
  • the adder 230 adds together the brightness signal Y 1 with the low frequency component corresponding to one of the 8×8 pixel areas output by the matrix circuit 229 and a profile extraction signal Y 2 with the high frequency component corresponding to the 8×8 pixel areas output by the gain circuit 224 .
  • the profile extraction signal Y 2 output by the gain circuit 224 is obtained by extracting only the high-frequency component in the G signal in a 16×16 pixel area having undergone the G interpolation, i.e., by extracting the profile.
  • the brightness/profile extraction signals Y (Y 1 +Y 2 ) for the entire image are calculated.
  • the results of the addition are stored in the buffer memory 30 .
  • the median circuit 232 , to which Cb signals and Cr signals corresponding to 12×12 pixel areas output by the matrix circuit 229 are input, engages in median processing which is performed by using 9 points, i.e., 3×3 pixels contained in the 5×5 pixel area block, to output Cr signals and Cb signals corresponding to 8×8 pixels.
  • median filtering processing is performed on 9 sets of data (indicated by X) corresponding to 3×3 pixels and contained in data D 3 - 5 corresponding to the 5×5 pixel areas (line 5 , row 5 through line 9 , row 9 ) in the 12×12 pixel data D 12 (indicated by the black dots) as illustrated in FIG. 15 .
  • the 9 sets of data are sorted in ascending order or descending order and the central value is used as median processing data.
  • the median processing data thus obtained are used as a substitute for data corresponding to line 7 , row 7 in the output data D 8 corresponding to 8×8 pixels.
  • output data D 8 corresponding to the 8×8 pixels are generated for both the Cb signals and the Cr signals.
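A sketch of this median step is given below. Which nine of the 25 points in each 5×5 window are used follows FIG. 15, which is not reproduced here; taking the 3×3 points at every other row and column of the window is therefore an assumption, and median_block is an illustrative name.

```python
import numpy as np

def median_block(cd12):
    """Median filtering of a 12 x 12 Cb (or Cr) block down to 8 x 8.

    For each 5 x 5 window position, nine samples on a 3 x 3 grid inside
    the window are sorted and the central value becomes the output.
    """
    cd12 = np.asarray(cd12, dtype=np.float64)
    out8 = np.empty((8, 8))
    for y in range(8):
        for x in range(8):
            window = cd12[y:y + 5, x:x + 5]
            nine = window[::2, ::2]          # assumed 3 x 3 selection of points
            out8[y, x] = np.median(nine)     # central value after sorting
    return out8
```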
  • the output data D 8 with the Cb signals and the Cr signals are stored in the buffer memory 30 .
  • the JPEG compression circuit 33 repeats the following process until the entire image is compressed: from the input data corresponding to each 20×20 pixel area block input to the block processing circuit 200 , a single unit of YCrCb signals formatted to correspond to 8×8 pixels for the JPEG compression method is extracted, based upon the Y signals corresponding to 8×8 pixels generated by the adder circuit 230 and the Cr signals and the Cb signals corresponding to the 8×8 pixels generated by the median circuit 232 , and the extracted data are compressed through the procedure in the known art.
  • the compressed image data are stored in the PC card 34 via the CPU 21 .
  • In step S20A, the focal point detection device 36 detects the focal adjustment status for each focal point detection area. If it is decided in step S20B that the full-press switch 23 has been operated, the quick return mirror swings upward, and the program that implements the photographing sequence in FIG. 16 is executed.
  • In step S21, each pixel at the CCD 26 stores a light-reception signal, and when the storage is completed, the electrical charges stored at all the pixels are sequentially read out.
  • In step S22, the image data that have been read out undergo the processing performed at the analog signal processing circuit 27 and are then converted to digital image data at the A/D conversion circuit 28 to be input to the image processing circuit 29.
  • In step S23, processing such as white balance adjustment, gamma gradation control and JPEG formatting processing is performed at the image processing circuit 29.
  • In step S24, the image data having undergone the image processing are temporarily stored in the buffer memory 30.
  • In step S25, the image data are read from the buffer memory 30 and the data are compressed at the JPEG compression circuit 33.
  • In step S26, the compressed image data are stored in the PC card 34.
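  • The sequence of steps S21 to S26 amounts to the simple pipeline sketched below; each argument is a placeholder standing in for one of the circuits named above, and only the calling order is meant to be informative.

```python
def photographing_sequence(ccd, analog_proc, adc, image_proc, buffer_mem, jpeg, pc_card):
    """Sketch of the photographing sequence of FIG. 16 (steps S21 to S26).

    The objects passed in are hypothetical stand-ins for the CCD 26, the
    analog signal processing circuit 27, the A/D conversion circuit 28, the
    image processing circuit 29, the buffer memory 30, the JPEG compression
    circuit 33 and the PC card 34; only the order of the calls matters here.
    """
    charges = ccd.read_out()                               # S21: read out the stored charges
    digital = adc.convert(analog_proc.process(charges))    # S22: analog processing, then A/D conversion
    processed = image_proc.process(digital)                # S23: white balance, gamma control, JPEG formatting
    buffer_mem.store(processed)                            # S24: temporary storage in the buffer memory
    compressed = jpeg.compress(buffer_mem.read())          # S25: JPEG compression
    pc_card.store(compressed)                              # S26: record on the PC card
    return compressed
```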
  • If the JPEG format processing were achieved through line processing instead of block processing, buffer memories BM1 to BM4, each corresponding to four lines, would be required for the G interpolation processing, the BPF processing, the interpolation/LPF processing and the median processing circuit, as illustrated in FIG. 17, which would obviously result in an increase in the circuit scale.
  • Since the processing is realized in hardware, a reduction in size and cost is achieved. In addition, since line processing, in which pipeline arithmetic operation is executed in units of individual pixels and in units of individual lines, is implemented, the pipeline arithmetic operation can be performed as quickly as in the prior art.
  • the white balance fine adjustment can be implemented based upon image data containing the area, so that the occurrence of a color-cast image can be prevented.
  • Since the interpolation/LPF circuit 228 performs the interpolation calculation for the (B-G) signals and the (R-G) signals and, at the same time, performs low pass filtering processing to extract the low frequency components, the length of time required for the processing is reduced compared to a method in which the signals are processed in the order of interpolation processing, matrix processing and LPF processing to suppress false colors and color moire.
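  • The time saving claimed above comes from folding interpolation and low pass filtering into a single pass over the color difference samples. The sketch below illustrates the idea with one normalised convolution whose kernel both fills in missing samples and attenuates high frequencies; the kernel values are illustrative and are not the coefficients actually used by the interpolation/LPF circuit 228.

```python
import numpy as np

KERNEL = np.array([[1.0, 2.0, 1.0],
                   [2.0, 4.0, 2.0],
                   [1.0, 2.0, 1.0]])   # illustrative smoothing kernel

def interpolate_and_lowpass(cd_sparse, mask):
    """One-pass interpolation plus low pass filtering of a color difference plane.

    cd_sparse : (R-G) or (B-G) samples, zero where no sample exists
    mask      : 1.0 where a sample exists, 0.0 elsewhere
    The normalised convolution spreads existing samples into the gaps
    (interpolation) while averaging neighbours (low pass filtering).
    """
    h, w = cd_sparse.shape
    pad_v = np.pad(cd_sparse, 1, mode="edge")
    pad_m = np.pad(mask, 1, mode="edge")
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            num += KERNEL[di, dj] * pad_v[di:di + h, dj:dj + w]
            den += KERNEL[di, dj] * pad_m[di:di + h, dj:dj + w]
    return np.where(den > 0, num / np.maximum(den, 1e-9), 0.0)

# Example: (R-G) samples available only on a checkerboard of a 12x12 block.
mask = (np.indices((12, 12)).sum(axis=0) % 2 == 0).astype(float)
rg12 = interpolate_and_lowpass(np.random.rand(12, 12) * mask, mask)
```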
  • the line processing circuit 100 or the block processing circuit 200 may be realized in the form of software by storing an image processing program in a storage medium such as a CD ROM or a floppy disk which can be utilized when performing image processing on a personal computer.
  • image data that have undergone image-capture at the CCD and digitization should be stored in a large-capacity image data storage medium, and with this storage medium set in a personal computer to take in the image data, the line processing or the block processing described earlier should be performed using the image processing program.
  • the output data from the black level circuit 105 in FIG. 3 may be stored as raw data at the PC card 34 so that image processing can be performed on the raw data by setting the PC card 34 in the personal computer.
  • FIG. 18 is a block diagram illustrating a configuration for using a personal computer to perform image processing as described above and to store the data in a storage device.
  • Raw data of an image which has been captured in advance (output data from the black level circuit 105 , for instance) are taken into a hard disk device 92 via an I/F circuit 91 .
  • a program for implementing the image processing described above via the I/F circuit 91 is stored in the hard disk device 92 .
  • the program may be stored in any of a variety of storage media, and by setting such a storage medium in a drive (not shown), the program is taken into the hard disk device 92.
  • a program may be downloaded via the internet.
  • Image processing as described above is performed by the personal computer 93 in FIG. 18 so that the image can be displayed on a monitor 94 or can be printed out by a printer 95 .
  • Compressed image data are stored in the hard disk device 92 .
  • When performing image processing on a personal computer as described above, the program should be structured so that, if the image data stored in the image data storage medium have already undergone white balance adjustment, only white balance fine adjustment processing is to be performed.
  • information in regard to the focal point detection area utilized for the focus matching operation of the taking lens among the preset plurality of focal point detection areas should also be stored in the image data storage medium, so that the information can be utilized when selecting data corresponding to an image area related to the focal point detection area during the image processing performed on the personal computer.
  • If the image data stored in the image data storage medium have not yet undergone white balance adjustment, the program should be structured so that both the white balance adjustment processing and the white balance fine adjustment processing are implemented.
  • the image-capturing data from the CCD, the color temperature information with respect to the subject detected at the white balance sensor 86 ( 35 A) and the information with respect to the focal point detection area described above should also be stored in the image data storage medium so that the white balance adjustment processing and the white balance fine adjustment processing can be performed based upon these data.
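  • On a personal computer, the program structure described above reduces to a simple branch: if the stored image data have already been white balance adjusted, only the fine adjustment is applied; otherwise both adjustments are applied, using the color temperature and focal point detection area information stored with the data. The record fields, the centred crop and the gain formulae in the sketch below are placeholders and assumptions, not a defined file format or the patent's formulae.

```python
import numpy as np

def fine_gains_from_rgb(rgb_area):
    """White balance fine adjustment gains from channel averages over an area
    (the G/R and G/B ratio form is an assumption)."""
    r_ave, g_ave, b_ave = rgb_area.reshape(-1, 3).mean(axis=0)
    return g_ave / r_ave, g_ave / b_ave

def process_stored_image(record):
    """Sketch of the PC-side white balance branch described in the text.

    `record` is assumed to bundle an H x W x 3 image array with the metadata
    the text says should be stored with it; all field names are hypothetical.
    """
    rgb = record["image_data"].astype(float)
    if not record.get("white_balance_applied", False):
        # Raw image-capturing data: full white balance adjustment first,
        # with gains assumed to be derived from the stored color temperature.
        r_gain, b_gain = record["wb_gains"]
        rgb[..., 0] *= r_gain
        rgb[..., 2] *= b_gain
    # Fine adjustment from the area related to the stored focal point
    # detection area (a centred crop is used here purely as a stand-in).
    h, w, _ = rgb.shape
    area = rgb[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    rf_gain, bf_gain = fine_gains_from_rgb(area)
    rgb[..., 0] *= rf_gain
    rgb[..., 2] *= bf_gain
    return rgb
```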
  • the invention further includes, as another aspect, the control program (described above) that can be executed by the controller (e.g., a computer) to control the image processing apparatus as described above.
  • the control program can be implemented in an application specific integrated circuit (ASIC).
  • the control program can be transmitted by a carrier wave over a communications network such as, for example, the World Wide Web, and/or transmitted in a wireless fashion, for example, by radio waves or by infrared waves.
  • the control program can also be transmitted by a carrier wave from a remote storage facility to a local control unit.
  • the local control unit interacts with the remote storage facility to transfer all or part of the program, as needed, for execution by the local unit.
  • the control program can be fixed in a computer-readable recording medium such as, for example, a CD-ROM, a computer hard drive, RAM, or other types of memories that are readily removable or intended to remain fixed within the computer.
  • the present invention may be adopted in an electronic still camera which does not allow lens exchange or in a digital video camera that is capable of taking in dynamic images as well.
  • the present invention may be adopted when other compression methods are used.
  • the other compression methods referred to here include compression achieved through the TIFF method, compression achieved through the Fractal method and compression achieved through the MPEG method.
  • the format processing as mentioned in this specification is not restricted to the format processing performed prior to the various types of compression processing described above, and may include non-compression TIFF format processing as well.
  • circuit structures in the embodiments explained above merely represent examples and the circuit structure may assume the following modes, for instance.
  • the algorithm used for this selection process is not restricted to this example.
  • the photographer may manually select one focal point detection area among the five focal point detection areas.
  • white balance fine adjustment coefficients may be calculated using image data for a specific area with an area corresponding to a photometric area selected from a plurality of photometric areas located at its center.
  • an area may be specified using a touch sensor on, for instance, a monitor screen, so that white balance fine adjustment coefficients are calculated for the image data within a specific area defined based upon the image data within the specified area to perform white balance fine adjustment using the white balance fine adjustment coefficients on the next image data sampled.
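  • Whichever of the sources above supplies the reference position (an automatically selected focal point detection area, a manually selected detection area, a selected photometric area or a touch-specified point), the fine adjustment only needs a window of image data defined around that position. The sketch below shows that mapping; the window size and coordinates are illustrative.

```python
def reference_window(center_xy, image_size, window=512):
    """Return the row and column ranges of a window centred on the chosen point.

    center_xy  : (x, y) of the selected focal point detection area, photometric
                 area or touch coordinate (the source does not matter here)
    image_size : (height, width) of the image
    window     : side length of the square area used for the fine adjustment
    """
    cx, cy = center_xy
    h, w = image_size
    half = window // 2
    x0 = min(max(cx - half, 0), max(w - window, 0))
    y0 = min(max(cy - half, 0), max(h - window, 0))
    return (y0, min(y0 + window, h)), (x0, min(x0 + window, w))

# Example: a 512 x 512 window around the central detection area of a 1600 x 1200 image.
rows, cols = reference_window((800, 600), (1200, 1600))
```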
  • the raw data are constituted of the 8-bit RGB data transmitted from a stage preceding the gamma control circuit 106 in FIG. 3 , e.g., the white balance circuit 104 or the black level circuit 105 , to the buffer memory 30 .
  • the irreversibly compressed data are obtained by compressing the raw data through the JPEG method, using the brightness Y data and the color difference Cr and Cb data output by the block processing circuit 200.
  • FIG. 19 is a structural block diagram of an electronic camera 310 that is capable of recording data in the two different data formats described above in another embodiment.
  • a taking lens 91 is mounted at the electronic camera 310 .
  • a light-receiving surface of an image-capturing device 311 is placed in the image space of the taking lens 91 .
  • a timing generator 312 supplies a control pulse for controlling the storage, the discharge, the readout and the like of the electrical charges to the image-capturing device 311.
  • Image data output by the image-capturing device 311 are input to an image signal processor 314 via an A/D conversion unit 313 .
  • the timing generator 312 supplies the A/D conversion unit 313 and the image signal processor 314 with an operation clock OA.
  • the functions of the image signal processor 314 are achieved by adopting a configuration constituted of a signal level correction unit 315, a white balance adjustment unit 316, a gamma control unit 317, a color interpolation unit 318, a color difference conversion unit 319, a JPEG compression unit 320 and a mode control unit 321.
  • Image data output by the image signal processor 314 are input to a CPU 322 .
  • the CPU 322 transmits setting information for the operation mode to the mode control unit 321 in the image signal processor 314 and also supplies the image signal processor 314 with two operation clocks OB and OC.
  • An image memory 323 for temporarily storing image data is provided in the electronic camera 310 .
  • the image signal processor 314 and the CPU 322 access the image memory 323 via their own separate data buses.
  • a monitor 325 for displaying monitor images, which is connected to the CPU 322 via a monitor display circuit 324, is provided at the electronic camera 310.
  • the electronic camera 310 is provided with a card interface 326 connected to the CPU 322 , which is detachably mounted with a memory card 327 , a data terminal 329 through which data are exchanged with an external apparatus, an interface 328 that connects the data terminal 329 to the CPU 322 and an operating member 330 that includes a mode setting button through which various switch outputs are input to the CPU 322 .
  • Information indicating whether or not raw data are required is input through the mode setting button 330 . If raw data are not required, a fast mode, which is to be detailed later, is set, whereas if raw data are required, an original image mode is set. In the fast mode, JPEG compressed data are recorded, and in the original image mode, raw data are output as well as recording the JPEG compressed data.
  • the user selects whether or not raw data are required by operating the mode setting button 330 .
  • the information thus set is transmitted to the mode control unit 321 via the CPU 322 .
  • the mode control unit 321 selects the signal path corresponding to the fast mode for the image signal processor 314 .
  • FIG. 20 schematically illustrates the fast mode signal path.
  • the mode control unit 321 connects the A/D conversion unit 313 , the signal level correction unit 315 , the white balance adjustment unit 316 and the gamma control unit 317 so as to form a pipeline. Then, the mode control unit 321 supplies the operation clock OA from the timing generator 312 to the signal level correction unit 315 , the white balance adjustment unit 316 and the gamma control unit 317 to set these processing units to engage in synchronous operations.
  • When image data are output by the image-capturing device 311, the image data undergo linear quantization at the A/D conversion unit 313 to be converted to 12- to 16-bit digitized image data.
  • the digitized image data undergo clamp correction and gain control at the signal level correction unit 315 , then undergo white balance adjustment at the white balance adjustment unit 316 and are sequentially output in units of single lines to the gamma control unit 317 .
  • the gamma control unit 317 performs gamma control on the image data and also outputs data achieved by reducing the number of quantization bits of the image data to approximately 8 bits.
  • the sequence of signal processing operations performed up to this point is identical to that performed by the line processing circuit 100 in FIG. 3 , and is executed in real time in units of individual pixels for each line in synchronization with the operation clock OA supplied by the timing generator 312 .
  • the output from the gamma control unit 317 (non-linear processed data of approximately 8 bits) is temporarily stored in a storage area 323 A in the image memory 323 .
  • the mode control unit 321 allocates a “raw data storage area 323 C,” which is used in the original image mode to be detailed later, as part of the storage area 323 A to increase the storage capacity of the storage area 323 A.
  • As a result, the electronic camera 310 is able to start signal processing of the next frame without having to wait for the signal processing of the preceding frame to be completed.
  • the color interpolation unit 318 reads out the image data from the storage area 323 A in units of predetermined blocks to execute color interpolation processing on the image data through local pixel calculation and to calculate the three color components, i.e., R, G and B for each of the pixels.
  • the color difference conversion unit 319 sequentially converts the R, G and B components to color difference data constituted of a brightness Y and color differences Cr and Cb.
  • the processing performed by the color interpolation unit 318 and the color difference conversion unit 319 is identical to that performed by the block processing circuit 200 in FIG. 4 and is executed in conformance to the operation clock OB supplied by the CPU 322 .
  • the color difference data (Y, Cb, Cr) resulting from the conversion described above are temporarily stored in a storage area 323 B in the image memory 323 .
  • the monitor display circuit 324 reads out the color difference data (Y, Cb, Cr) in the storage area 323 B via the CPU 322 and displays the captured image on the monitor 325 .
  • the JPEG compression unit 320 reads out the color difference data (Y, Cb, Cr) from the storage area 323 B and executes irreversible image compression (DCT conversion, quantization, coding) in synchronization with the operation clock OB.
  • the image data resulting from the irreversible compression are recorded in the memory card 327 via the CPU 322 and the card interface 326 .
  • the CPU 322 may directly read out the color difference data (Y, Cb, Cr) from the storage area 323 B to record them in the memory card 327 via the card interface 326 .
  • the processing in the fast mode is completed when the operation described above ends.
  • the mode control unit 321 selects the signal path corresponding to the original image mode for the image signal processor 314 .
  • FIG. 21 schematically illustrates the original image mode signal path.
  • the mode control unit 321 sets the signal path so that the output from the white balance adjustment unit 316 is provided to the gamma control unit 317 via the image memory 323 . Then, the mode control unit 321 supplies the signal level correction unit 315 and the white balance adjustment unit 316 with the operation clock OA from the timing generator 312 . In addition, the mode control unit 321 switches the operation clock for the gamma control unit 317 to the operation clock OC, which is faster than the operation clock OA.
  • When image data are output by the image-capturing device 311, the image data undergo linear quantization at the A/D conversion unit 313 to be converted to 12- to 16-bit digitized image data.
  • the digitized image data sequentially undergo processing at the signal level correction unit 315 and the white balance adjustment unit 316, and then are temporarily stored in the storage area 323C in the image memory 323 as approximately 12- to 16-bit raw data.
  • the gamma control unit 317 performs gamma control while reading out the raw data from the storage area 323 C in synchronization with the fast operation clock OC and outputs the processed data as non-linear processed data of approximately 8 bits.
  • the approximately 8-bit non-linear processed data are temporarily stored in the storage area 323 A in the image memory 323 .
  • the color interpolation unit 318 reads out the image data from the storage area 323 A in units of predetermined blocks to execute interpolation processing through local pixel calculation, and to calculate the three color components, i.e., R, G and B, for each pixel.
  • the color difference conversion unit 319 sequentially converts the R, G and B components to color difference data, which is constituted of a brightness Y and color differences Cr and Cb.
  • the color difference data (Y, Cb, Cr) resulting from the conversion described above are temporarily stored in the storage area 323 B in the image memory 323 .
  • the JPEG compression unit 320 executes image compression (DCT conversion, quantization, coding) in synchronization with the operation clock OB while reading out the color difference data (Y, Cb, Cr) as necessary from the storage area 323B.
  • the image data achieved through the irreversible compression are then recorded in the memory card 327 via the CPU 322 and the card interface 326 .
  • the CPU 322 may directly read out the color difference data (Y, Cb, Cr) from the storage area 323 B to record the color difference data in the memory card 327 via the card interface 326 .
  • the raw data remain intact in the storage area 323 C.
  • the CPU 322 reads out the raw data and outputs the raw data thus read out through the data terminal 329 via the interface 328 .
  • the raw data are transferred and stored into an external storage medium or the like connected to the data terminal 329 .
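  • The difference between the two modes can be summarised in a few lines of control flow: in the fast mode the level correction, white balance and gamma stages run as one synchronous pipeline into storage area 323A, while in the original image mode the white-balanced data are parked in storage area 323C as raw data and the gamma stage reads them back at its own faster clock. The sketch below captures only this control flow; the three processing functions are trivial placeholders, not the actual circuits.

```python
import numpy as np

def level_correct(x):
    return x                                  # clamp correction / gain control (placeholder)

def white_balance(x):
    return x                                  # white balance adjustment (placeholder)

def gamma_8bit(x):
    return (x // 16).astype(np.uint8)         # 12-bit to roughly 8-bit (placeholder curve)

def process_frame(adc_output, mode, image_memory):
    """Control-flow sketch of the fast mode and the original image mode.

    adc_output   : digitized image data from the A/D conversion unit 313
    mode         : "fast" or "original"
    image_memory : dict standing in for storage areas 323A and 323C
    """
    if mode == "fast":
        # Pipeline synchronised to clock OA: level correction -> WB -> gamma.
        image_memory["323A"] = gamma_8bit(white_balance(level_correct(adc_output)))
    else:
        # Original image mode: keep the pre-gamma output in area 323C as raw data ...
        image_memory["323C"] = white_balance(level_correct(adc_output))
        # ... then gamma control re-reads it in synchronization with the faster clock OC.
        image_memory["323A"] = gamma_8bit(image_memory["323C"])
    # Downstream processing (color interpolation, YCbCr conversion, JPEG) is common to both modes.
    return image_memory["323A"]

frame = np.random.randint(0, 4096, (4, 6), dtype=np.uint16)   # toy 12-bit frame
memory = {}
jpeg_input = process_frame(frame, "original", memory)
```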
  • the mode control unit 321 dynamically selects the signal path within the image signal processor 314 .
  • the raw data are stored in the storage area 323 C in the original image mode so that they can be utilized later.
  • the storage area 323 C in an idling state is effectively utilized as part of the storage area 323 A in the fast mode.
  • This makes it possible to use the storage area 323A, which has a larger capacity, as a retreat area during signal processing, so that the photographing enabling intervals in the fast mode (in particular, during continuous photographing) can be greatly reduced.
  • When the mode control unit 321 selects the signal path for the original image mode, it also switches the operation clock of the gamma control unit 317 from the operation clock OA to the faster operation clock OC.
  • the length of time required for gamma control processing is also minimized in the original image mode.
  • the CPU 322 may store raw data, either in the original state or in a reversibly compressed state, in a recording medium such as a PC card mounted at the electronic camera.
  • While the storage area 323C in an idling state is effectively utilized as part of the storage area 323A in the fast mode, so that the storage area 323A can be used as a buffer area for non-linear processed data of approximately 8 bits in the embodiment described above, the present invention is not limited to this example.
  • the storage area 323C, which is left in an idling state in the fast mode, may be utilized as a retreat area for image data undergoing processing, e.g., compressed image data yet to be recorded in the memory card and image data in the process of being compressed.
  • a photographing operation for the next frame can be started without having to wait for the completion of the image data processing.
  • While the output from the white balance adjustment unit 316 is recorded in the area 323C of the image memory 323 as raw data in the original image mode in the embodiment described above, the present invention is not limited to this example.
  • As long as image data have not undergone any irreversible signal processing such as a reduction of the number of quantization bits, gradation conversion or pixel thinning, they can be used as raw data faithful to the original image. Consequently, an output from the A/D conversion unit 313, an output from the signal level correction unit 315, a signal manifesting immediately after black level correction or the like may be recorded in the image memory 323 as raw data, instead.
  • While raw data are extracted from the stage preceding the gamma control unit 317, which performs irreversible gradation conversion, in the embodiment described above, the present invention is not limited to this example.
  • Raw data may instead be extracted from a stage preceding whichever signal processing unit performs the irreversible processing.
  • In that case, the data retreat area is substantially enlarged to achieve a further reduction in the photographing enabling intervals in the fast mode.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)

Abstract

A CCD captures a subject image having passed through a taking lens and an image processing circuit performs various types of image pre-treatment including gamma correction and white balance on image data corresponding to N lines×M rows output by the CCD. The image processing circuit also performs format processing on the data. The data are then compressed at a compression circuit. The white balance adjustment and the like are implemented in line sequence at a line processing circuit which engages in signal processing in pixel sequence in units of individual lines in the output from the CCD. The image data having undergone the pre-treatment are then subjected to format processing prior to JPEG compression, at a block processing circuit that engages in signal processing in units of individual blocks each corresponding to n lines×m rows (N>n, M>m). In other words, the signal processing is performed in block sequence.

Description

  • This is a Continuation of U.S. Ser. No. 11/819,666 filed Jun. 28, 2007, which is a Division of U.S. patent application Ser. No. 09/497,482 filed Feb. 4, 2000 (now U.S. Pat. No. 7,253,836), which is a Continuation-In-Part of U.S. patent application Ser. No. 09/342,512 filed Jun. 29, 1999 (now abandoned). The disclosure of each of the prior applications is hereby incorporated by reference herein in its entirety.
  • INCORPORATION BY REFERENCE
  • The disclosures of the following priority applications are herein incorporated by reference:
    • Japanese Patent Application No. 10-183918, filed Jun. 30, 1998
    • Japanese Patent Application No. 10-183919, filed Jun. 30, 1998
    • Japanese Patent Application No. 10-183920, filed Jun. 30, 1998
    • Japanese Patent Application No. 10-183921, filed Jun. 30, 1998
    • Japanese Patent Application No. 10-237321, filed Aug. 24, 1998
    • Japanese Patent Application No. 11-213299, filed Jul. 28, 1999
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to a digital camera that stores in memory a subject as image data that are electronically compressed, and it also relates to a storage medium that stores an image signal processing program. Furthermore, the present invention relates to a carrier wave that is encoded to transmit a control program for white balance adjustment on image data. It also relates to an electronic camera that allows selection to be made between recording of irreversible image data and recording of raw data.
  • 2. Description of the Related Art
  • Electronic still cameras in the known art include the type provided with a viewfinder device to which a subject image having passed through a taking lens is guided by a quick return mirror, an image-capturing device such as a CCD that is provided rearward of the quick return mirror to capture the subject image and output image data, an image processing circuit that performs image processing such as white balance and gamma correction on the image data output by the image-capturing device, a compression circuit that compresses the data which have undergone image processing through a method such as JPEG and stores the data in a storage medium such as a flash memory, and a monitor that displays the data having undergone the image processing. In the image processing circuit, parameters such as the R gain and the B gain for white balance adjustment or the gradation curve for gamma correction are calculated based upon a preset algorithm. In addition, the image data are converted to 16×8 sets of brightness data Y and 8×8 sets of Cr and Cb color difference data for JPEG compression.
  • The image-capturing device in such an electronic still camera in the prior art structured as described above presents the following problems.
  • (1) Both the image pre-treatment such as white balance or gamma correction and the image post-treatment, in which the data that have undergone the image pre-treatment are formatted for the JPEG compression, are performed in units of individual lines in correspondence with the readout performed at the CCD. Because of this, in a high image quality electronic still camera with the number of pixels at the CCD exceeding two million, the capacity of the line buffer memory employed for pipeline operation and the like is bound to be very large, resulting in the camera becoming expensive. This problem may be explained as follows.
  • When performing signal processing on the output from a solid image-capturing element, N×M sets of image data corresponding to one screen output by the image-capturing element are output in point sequence in units of individual lines. Thus, when performing signal processing including pixel interpolation processing and filtering processing, a line buffer memory supporting four lines, for instance, is required if the filtering processing is to be performed in sets of 5×5. In other words, the processing can be performed only when image data corresponding to four lines have been accumulated in the memory. Such a line buffer memory supporting four lines is required for each of the various types of processing such as filtering processing and interpolation processing.
  • If a line buffer memory that supports four lines is provided at a 1-chip processing IC for each of the various types of processing required, such as the filtering processing and the interpolation processing described above, the ratio of the area occupied by the memory increases, which leads to an increase in the number of gates at the 1-chip processing IC, resulting in higher cost. In particular, in a high resolution type image-capturing element having more than two million pixels with a large number of pixels per line, the cost will be especially high. In addition, if the line buffer memory is provided outside the 1-chip processing IC, twenty 10-bit input/output pins, for instance, will be required. This means that 20 input/output pins will be necessary for each line buffer memory to result in an increase in the package size of the 1-chip processing IC.
  • (2) In the image-capturing device in an electronic still camera in the prior art, the interpolation processing for an (R-G) signal and a (B-G) signal, the matrix processing through which a Y signal, a Cr signal and a Cb signal are generated using the (R-G) signal, the (B-G) signal and the G signal, and the LPF processing through which low frequency signals are extracted from the Y signal, the Cr signal and the Cb signal are performed in time sequence to format the data for JPEG compression and to suppress the occurrence of false colors and color moire. As a result, particularly in the case of a high image quality electronic still camera with the number of pixels at the CCD exceeding two million, the length of time required for the processing is bound to be long, resulting in poor operability.
  • (3) In the image-capturing device in an electronic still camera in the prior art, a single primary color type CCD, two CCDs (one for G and the other for R/B) or three CCDs (one each for R, G and B) are employed. When using a single CCD, since an RGB color filter is provided at the front surface of each pixel at the CCD, an R signal, a G signal or a B signal is missing from a given pixel. Thus, interpolation is performed for pixels without a G signal component by using the G signals of pixels that have been actually obtained to generate G signals for all the pixels, and interpolation in regard to the (R-G) signal and the (B-G) signals is likewise performed. The same principle applies when using two CCDs, as well.
  • However, depending upon the nature of the image that has been captured or the characteristics of the low pass filter employed, false colors or moire may occur after the interpolation processing, which results in a great degree of degradation in the image quality. While the Cr signal and the Cb signal among the Y signal, the Cr signal and the Cb signal described above created using the R, G and B signals are processed through the low pass filter to suppress false colors or moire in the prior art, this means does not achieve satisfactory results in a high image quality electronic still camera having more than two million pixels at the CCD.
  • (4) Since the white balance adjustment is achieved using predetermined white balance adjustment coefficients in the image-capturing device in an electronic still camera in the prior art, there is the likelihood of a color-cast image being generated if the white balance adjustment coefficients are set erroneously. This problem tends to occur more readily in a high image quality electronic still camera with the number of pixels at the CCD exceeding two million.
  • Electronic cameras in which selection between the two different data formats described below can be performed when recording image data obtained through image-capturing have been known in the prior art.
  • (1) Irreversible compressed data obtained through JPEG or the like that have undergone a sequence of image processing
    (2) Raw data output by the image-capturing device
  • The first type of data, i.e., irreversible compressed data are advantageous in that since the code volume is relatively small, a large number of images can be stored in an external recording medium such as a memory card. In addition, they are recorded in a general-purpose format which allows data decoded by using a common image viewing software program or the like to be printed or displayed directly.
  • The second type of data, i.e., raw data are image data faithful to the output signal from the image-capturing device. A data recording format of raw data facilitates external processing. Since raw data, which undergo very little irreversible gradation conversion or data compression, contain a large volume of information such as the number of quantization bits, they have a wide dynamic range as image information. Thus, they provide an advantage in that they can be processed in an ideal manner without the tendency to lose fine gradation components. For this reason, highly advanced data processing and a higher quality are required in this type of raw data. The data format of raw data is particularly suited for printing and design applications.
  • Normally, an electronic camera requires a greater length of time for image processing compared to cameras using a silver halide film. In order to achieve a degree of operability in an electronic camera comparable to that of cameras using a silver halide film, it is crucial to minimize the length of time required for image processing. However, in an electronic camera in the prior art, a raw data read/write operation performed via an image memory is always necessary. This tends to lead to a delay occurring in signal processing performed on irreversible compressed data by a length of time corresponding to the length of time spent on the raw data read/write.
  • In addition, processing circuits that perform relatively complex processing, a prime example of which is pixel value matrix operation, are concentrated in a processing unit at a stage preceding the stage for gamma control operation. Since raw data with a large number of quantization bits are handled in this state at these processing circuits, the circuit structures of the processing circuits tend to be complex and there is also a problem of a greater length of time required for signal processing.
  • SUMMARY OF THE INVENTION
  • A first object of the present invention is to provide a digital camera that does not necessitate any increase in the capacity of the buffer memory and thus achieves a reduction in cost even when the number of pixels is great.
  • A second object of the present invention is to provide a storage medium that stores a program for implementing signal processing which achieves a reduction in the required capacity of the buffer memory even when processing image data for which image-capturing has been performed using an image-capturing device having a large number of pixels.
  • A third object of the present invention is to provide a digital camera that achieves a reduction in the length of time required for data formatting or processing through which false colors or moire is prevented even when the number of pixels is large.
  • A fourth object of the present invention is to provide a storage medium that stores a program for implementing signal processing in which data formatting and processing for preventing false colors and moire can be performed within a short period of time even when handling image data for which image-capturing has been performed using an image-capturing device with a great number of pixels.
  • A fifth object of the present invention is to provide a digital camera that suppresses the color-cast phenomenon occurring due to an error manifesting following the white balance adjustment performed by an external sensor to a satisfactory degree.
  • A sixth object of the present invention is to provide a storage medium that stores a program for implementing signal processing through which the color-cast phenomenon occurring due to an error manifesting after the white balance adjustment performed by an external sensor can be suppressed to a satisfactory degree.
  • The digital camera according to the present invention comprises an image-capturing device that captures a subject image having passed through a taking lens and outputs image data, a recording processing circuit that performs recording processing on the image data and an image processing circuit that performs a pre-treatment (includes gamma correction and white balance correction) on the image data corresponding to N lines×M rows output by the image-capturing device in units of individual lines in line sequence and then performs format processing (includes interpolation processing, LPF processing, BPF processing and color difference signal calculation processing) that corresponds to the type of recording performed at the recording processing circuit on the image data having undergone the pre-treatment in units of individual blocks corresponding to n lines×m rows (N>n, M>m) in block sequence.
  • The image processing performed in this digital camera may be implemented on a computer. The program stored in a storage medium for this purpose implements signal processing including format processing through which the image data of an image captured at an image-capturing device are formatted for recording, various types of pre-treatment that are implemented prior to the format processing and recording processing through which the image data having undergone format processing are recorded, with signal processing during the pre-treatment performed in units of individual lines in line sequence on image data corresponding to N lines×M rows and signal processing during the format processing performed in units of individual blocks corresponding to n lines×m rows (N>n, M>m) in block sequence on the image data having undergone the pre-treatment.
  • Alternatively, the digital camera according to the present invention may comprise an image-capturing device that captures a subject image having passed through a taking lens and outputs image data, a recording processing circuit that performs recording processing on the image data and an image processing circuit that, with the image data output by the image-capturing device input as data corresponding to n lines×m rows, calculates a color difference signal based upon the image data thus input, performs interpolation processing and low pass filtering processing at once on the color difference signal using filter coefficients for interpolation/low pass filtering and then generates a formatted signal by performing matrix processing corresponding to the type of recording implemented at the recording processing circuit.
  • The image processing performed in this digital camera may be implemented on a computer. The program stored in a storage medium for this purpose executes format processing that formats the image data of an image captured at an image-capturing device for recording, in which color difference signals corresponding to n lines×m rows are calculated based upon the image data that are input, interpolation processing and low pass filtering processing are executed at once on the color difference signals corresponding to n lines×m rows using filter coefficients for interpolation/low pass filtering and then a formatted signal is generated through matrix processing and recording processing through which the image data having undergone the format processing are recorded.
  • Furthermore, in the digital camera according to the present invention, which may comprise an image-capturing device that captures a subject image having passed through a taking lens and outputs image data, an image processing circuit that performs image processing including data format processing appropriate for data compression on the image data output by the image-capturing device and a compression circuit that compresses image data output by the image processing circuit, the image processing circuit engages in median processing on image data corresponding to n×m pixel areas to execute the format processing.
  • The image processing performed in this digital camera may be implemented on a computer. The program stored in a storage medium for this purpose implements signal processing including format processing through which the image data of an image captured at the image-capturing device are formatted for compression, various types of signal processing that are implemented prior to the format processing and compression processing through which image data having undergone the format processing are compressed, with median processing performed on image data corresponding to n×m pixel areas during the format processing.
  • By extracting (n-i)×(m-j) sets of image data among the image data corresponding to the n×m pixel areas and performing median processing on them, the length of time required for the median processing can be reduced.
  • Moreover, the digital camera according to the present invention may comprise an image-capturing device that captures a subject image having passed through a taking lens and outputs image data and an image processing circuit that executes image processing on the image data output by the image-capturing device, in which median processing is implemented on (n-i)×(m-j) sets of image data extracted from image data corresponding to an n×m pixel area.
  • The image processing performed in this digital camera may be implemented on a computer. The program stored in a storage medium for this purpose implements a specific type of image processing on the image data of an image captured at the image-capturing device, in which median processing is executed on (n-i)×(m-j) sets of image data extracted from image data corresponding to an n×m pixel area.
  • The digital camera according to the present invention may comprise an image-capturing device that captures a subject image that passes through a taking lens and outputs image data, a white balance adjustment circuit that performs white balance adjustment on the image data output by the image-capturing device, a white balance fine adjustment coefficient calculation circuit that calculates white balance fine adjustment coefficients based upon image data having undergone the white balance adjustment output by the white balance adjustment circuit, and a white balance fine adjustment circuit that performs white balance fine adjustment using the white balance fine adjustment coefficients on image data having undergone the white balance adjustment output by the white balance adjustment circuit.
  • The image processing implemented in this digital camera may be executed by a computer. The program stored in a storage medium for this purpose implements white balance adjustment processing in which white balance adjustment is performed on the image data of an image-captured at an image-capturing device, white balance fine adjustment coefficient calculation processing in which white balance fine adjustment coefficients are calculated using image data having undergone the white balance adjustment through the white balance adjustment processing and white balance fine adjustment processing in which white balance fine adjustment is performed using the white balance fine adjustment coefficients on the image data having undergone the white balance adjustment.
  • The white balance fine adjustment coefficients are calculated based upon the average values calculated for the R, B and G signals in the image data having undergone the white balance adjustment. Alternatively, they may be calculated based upon the histograms of the brightness levels calculated for the R, B and G signals of the image data having undergone the white balance adjustment.
  • In addition, the digital camera according to the present invention may comprise an image-capturing device that captures a subject image that passes through a taking lens and outputs image data, a white balance adjustment circuit that performs white balance adjustment on the image data output by the image-capturing device, an image area selection apparatus that selects one image area among a preset plurality of image areas, a white balance fine adjustment coefficient calculation circuit that calculates white balance fine adjustment coefficients using image data within an area set in relation with the one image area selected by the image area selection apparatus, among the image data having undergone the white balance adjustment output by the white balance adjustment circuit, and a white balance fine adjustment circuit that performs white balance fine adjustment using the white balance fine adjustment coefficients calculated at the white balance fine adjustment coefficient calculation circuit.
  • If the digital camera is provided with a focal point detection device that detects the state of focal adjustment relative to the subject for each of a preset plurality of focal point detection areas and a focal point detection area selection apparatus that selects one of the plurality of focal point detection areas based upon focal adjustment statuses, the white balance fine adjustment coefficients are calculated by selecting image data in an image area related to the focal point detection area selected by the focal point detection area selection apparatus.
  • The image processing performed in the digital camera may be executed on a computer. The program stored in a storage medium for this purpose implements white balance adjustment processing in which white balance adjustment is performed on an image captured at the image-capturing device, image area selection processing in which one of a preset plurality of image areas is selected, white balance fine adjustment coefficient calculation processing in which white balance fine adjustment coefficients are calculated using image data within an area set in relation to the one image area selected through the image area selection processing from image data having undergone white balance adjustment through the white balance adjustment processing and white balance fine adjustment processing in which white balance fine adjustment is performed on the image data having undergone the white balance adjustment using the white balance fine adjustment coefficients.
  • Another object of the present invention is to provide a carrier wave encoded to transmit a control program for white balance adjustment on image data.
  • In a carrier wave encoded to transmit a control program for white balance adjustment on image data, the control program includes instructions for: white balance adjustment processing in which white balance adjustment is performed on image data of an image captured at an image-capturing device; image area selection processing in which an image area is selected from a preset plurality of image areas; white balance fine adjustment coefficient calculation processing in which white balance fine adjustment coefficients are calculated using image data within an area set in relation to the image area selected through the image area selection processing; and white balance fine adjustment processing in which white balance fine adjustment is performed on image data having undergone white balance adjustment using the white balance fine adjustment coefficients.
  • Another object of the present invention is to provide an electronic camera that is capable of reducing the length of signal processing time while allowing selection to be made between recording irreversible image data and recording raw data.
  • The electronic camera according to the present invention comprises an image-capturing device, a first signal processing unit that performs, at least, A/D conversion on an image signal generated by the image-capturing device to convert the signal to digital image data, a second signal processing unit that performs irreversible signal processing on the image data resulting from a conversion performed at the first signal processing unit, an image memory capable of temporarily storing the image data and an operation control unit that dynamically selects signal paths between the two signal processing units in correspondence to the operating mode set to (1) or (2) below.
  • (1) A fast mode, in which a sequence of signal processing is continuously executed by providing an output from the first signal processing unit to the second signal processing unit and causing the two signal processing units to engage in synchronous operation.
    (2) An original image mode, in which an output from the first signal processing unit is stored in the image memory, image data read out from the image memory are provided to the second signal processing unit and the two signal processing units are each made to operate with their own timing.
  • The “storage area in the image memory provided to store the output from the first signal processing unit in the original image mode” can be utilized by the operation control unit as a buffer area where image data undergoing processing are kept in retreat in the fast mode.
  • In addition, the operation control unit accepts an external operation indicating whether or not raw data, i.e., image data before undergoing irreversible signal processing at the second signal processing unit, are required, selects and executes the fast mode if the external operation indicates that no raw data are required, and selects and executes the original image mode if the external operation indicates that raw data are required to output the raw data present in the image memory to the outside or to store the raw data present in the image memory at a recording medium.
  • It is desirable that in the original image mode, the operation clock at the second signal processing unit be set faster than the operation clock at the first signal processing unit by the operation control unit.
  • The second signal processing unit may be a unit that engages in, at least, either “irreversible gradation conversion” or “irreversible pixel thinning.”
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates the structure of an embodiment of a single lens reflex electronic still camera;
  • FIG. 2 is a block diagram of an embodiment of the signal processing system in the single lens reflex electronic still camera;
  • FIG. 3 is a block diagram illustrating the circuit that performs line processing in the signal processing system shown in FIG. 2;
  • FIG. 4 is a block diagram illustrating the circuit that performs block processing in the signal processing system shown in FIG. 2;
  • FIG. 5 illustrates the color filter array;
  • FIG. 6 shows an example of focal point detection area positional arrangement;
  • FIG. 7 illustrates the focal point detection device;
  • FIGS. 8A-8C illustrate histograms of R, G and B brightness;
  • FIG. 9 illustrates the details of processing performed at the G interpolation circuit;
  • FIG. 10 illustrates the details of the processing performed at the band-pass filter;
  • FIG. 11 illustrates the details of the processing performed at the low pass filter;
  • FIG. 12 illustrates the details of the processing performed at the color difference signal generation circuit;
  • FIG. 13 illustrates an example of data processed at the interpolation/LPF circuit;
  • FIG. 14 illustrates the details of the processing performed at the interpolation/LPF circuit;
  • FIG. 15 illustrates the details of the processing performed at the median circuit;
  • FIG. 16 is a flowchart of a program started up by the half-press switch;
  • FIG. 17 is a block diagram of the JPEG format processing achieved through line processing instead of block processing; and
  • FIG. 18 is a block diagram of a configuration that allows image processing to be performed by taking raw image data into a personal computer.
  • FIG. 19 is a block diagram illustrating the structure of the electronic camera in another embodiment of the present invention;
  • FIG. 20 illustrates the signal path through which signals travel when the electronic camera in FIG. 19 is set in the fast mode; and
  • FIG. 21 illustrates the signal path through which signals travel when the electronic camera in FIG. 19 is set in the original image mode.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The following is an explanation of an embodiment of the present invention given in reference to the drawings. As illustrated in FIG. 1, the single lens reflex electronic still camera in this embodiment is provided with a camera main body 70, a viewfinder device 80 which is attached to or detached from the camera main body 70 and an interchangeable lens 90 internally provided with a taking lens 91 and an aperture 92, that is attached to or detached from the camera main body 70. Subject light passes through the interchangeable lens 90 to enter the camera main body 70 and is guided to the viewfinder device 80 by a quick return mirror 71 which is at the position indicated by the dotted line before a release to form an image at a viewfinder mat 81 and also to form an image at a focal point detection device 36. The subject image is further guided to an ocular lens 83 by a pentaprism 82. After a release, the quick return mirror 71 rotates to the position indicated by the solid line and the subject light forms an image on an image-capturing device 73 via a shutter 72. Prior to the release, the subject image enters a white balance sensor 86 through a prism 84 and an image-forming lens 85 so that the color temperature of the subject image is detected.
  • FIG. 2 is a circuit block diagram of the embodiment. A half-press signal and a full-press signal from a half-press switch 22 and a full-press switch 23 respectively both interlocking with a release button are input to a CPU 21. When the half-press switch 22 is operated and the half-press signal is input, the focal point detection device 36 detects the focal adjustment state at the taking lens 91 in response to a command issued by the CPU 21 and drives the taking lens 91 to the focus matching position so that the subject light entering the interchangeable lens 90 forms an image on the image-capturing device 73. As is to be explained in detail later, the focal point detection device 36 detects the state of focal adjustment for a focal point detection area at the center of the photographic image plane and each of the four focal point detection areas set to the left, to the right, above and below the central focal point detection area, and drives the taking lens 91 to the focus matching position based upon the focal adjustment status detected from the focal point detection area that has been selected based upon a preset algorithm. In addition, the drive of a CCD 26 of the image-capturing device 73 is controlled via a timing generator 24 and a driver 25. The timing generator 24 controls the operating timing of an analog processing circuit 27 and an A/D conversion circuit 28. Furthermore, a white balance detection processing circuit 35 starts driving in response to a signal provided by the CPU 21.
  • When the full-press switch 23 is turned on immediately after the half-press switch 22 is turned on, the quick return mirror 71 rotates upward, the subject light from the interchangeable lens 90 forms an image on the photosensitive surface of the CCD 26 and the signal charge that corresponds to the brightness of the subject image is stored at the CCD 26. The signal charge thus stored at the CCD 26 is caused to be swept out by the driver 25 and is input to the analog signal processing circuit 27 that includes an AGC circuit and a CDS circuit. After analog processing such as gain control and noise removal is performed on an analog image signal at the analog signal processing circuit 27, the signal is converted to a digital signal at the A/D conversion circuit 28. The signal achieved through the digital conversion is supplied to an image processing circuit 29 which may be constituted as, for instance, an ASIC, where the signal undergoes an image pre-treatment including white balance adjustment, profile compensation and gamma control.
  • The white balance detection processing circuit 35 includes a white balance sensor 35A (the white balance sensor 86 in FIG. 1) constituted as a color temperature sensor, an A/D conversion circuit 35B which converts the analog signal output by the white balance sensor 35A to a digital signal and a CPU 35C that generates a white balance adjustment signal based upon a digital color temperature signal. The white balance sensor 35A, which may be constituted of, for instance, a plurality of photoelectric conversion devices each demonstrating sensitivity to red color R, blue color B or green color G, receives the optical image of the entire photographic field. For instance, when the white balance sensor is constituted of a two-dimensional CCD provided over 24 rows×20 lines, the photosensitive area of the CCD may be divided into 16 areas with a plurality of elements demonstrating sensitivity to R, G and B arrayed in each area. The CPU 35C calculates the R gain and the B gain based upon the outputs from the plurality of photoelectric conversion devices. These gains are transferred to a specific register at the CPU 21 and are stored there.
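  • As a rough picture of the calculation performed by the CPU 35C, the sketch below averages the R, G and B outputs of the divided sensor areas and derives R and B gains that would equalise the channels. The gain formula (G average divided by the R or B average) is a common form assumed here; the patent does not spell out the actual algorithm.

```python
import numpy as np

def wb_gains_from_sensor(area_rgb):
    """Derive white balance R gain and B gain from the white balance sensor outputs.

    area_rgb : array of shape (n_areas, 3) holding the averaged R, G and B
               outputs of each divided sensor area (16 areas in the example above).
    """
    r_ave, g_ave, b_ave = np.asarray(area_rgb, dtype=float).mean(axis=0)
    return g_ave / r_ave, g_ave / b_ave       # assumed gain form

# Example: 16 synthetic sensor areas under a slightly reddish illuminant.
sensor_outputs = np.random.uniform(80.0, 120.0, (16, 3)) * np.array([1.2, 1.0, 0.9])
r_gain, b_gain = wb_gains_from_sensor(sensor_outputs)
```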
  • The image data that have undergone the image pre-treatment further undergo format processing (image post-treatment) for JPEG compression and then the image data are temporarily stored in a buffer memory 30.
  • The image data stored in the buffer memory 30 are processed into display image data at a display image generation circuit 31 and are displayed on an external monitor 32 such as an LCD as the results of photographing. In addition, the image data stored in the buffer memory 30 undergo data compression at a specific rate through the JPEG method at a compression circuit 33 and are recorded in a storage medium (PC card) 34 such as a flash memory.
  • FIGS. 3 and 4 are block diagrams illustrating the details of the image processing circuit 29. FIG. 3 shows a line processing circuit 100 that performs signal processing on the image data provided by the CCD 26 in units of individual lines, which undertakes the image pre-treatment described above. FIG. 4 illustrates a block processing circuit 200 that performs signal processing on image data having undergone the signal processing at the line processing circuit 100, in units of blocks corresponding to 20×20 pixel areas, 16×16 pixel areas, 12×12 pixel areas or 8×8 pixel areas, which undertakes the image post-treatment described above. It is to be noted that while the image processing circuit 29 is actually realized in software by employing a plurality of processors, it is explained as hardware in this specification to facilitate explanation.
  • The line processing circuit 100 in FIG. 3 performs various types of signal processing that are to be detailed later on 12-bit R, G and B signals output by the A/D conversion circuit 28 and is provided with a defect correction circuit 101, a digital clamp circuit 102, a gain circuit 103, a white balance circuit 104, a black level circuit 105, a gamma correction circuit 106 and an average value/histogram calculation circuit 107.
  • The defect correction circuit 101 corrects the data output from a pixel with a defect (specified in advance with its address set in a register) in the output of the CCD 26 in units of individual lines in pixel sequence. The digital clamp circuit 102 subtracts the weighted average of a plurality of sets of pixel data that are used as so-called optical black from each set of pixel data in a given line in the output from the CCD 26 in units of individual lines in pixel sequence. The gain circuit 103 uniformly applies a specific gain to each of the R, G and B signals output by the CCD 26 in units of individual lines in pixel sequence, implements inconsistency correction with regard to the sensitivity of the CCD 26 for the G signal and also implements inconsistency correction with regard to the sensitivity ratio of the CCD 26 for the R and B signals.
  • The white balance circuit 104 multiplies the R signal and the B signal in the output from the CCD 26 by the R gain and the B gain which constitute the white balance adjustment coefficients set in advance and stored in the register at the CPU 21 as explained earlier, in units of individual lines in pixel sequence. According to the present invention, as is to be explained later, gain for white balance fine adjustment is calculated based upon the image data corrected at the white balance circuit 104 to perform fine adjustment of the white balance. The black level circuit 105 adds a value set in advance and stored in the register at the CPU 21 to the R, G and B signals in the output from the CCD 26 in units of individual lines in pixel sequence. The gamma correction circuit 106 performs gamma correction on the output from the CCD 26 in units of individual lines in pixel sequence by using a gradation look-up table. It is to be noted that through the gamma correction, the 12-bit R, G and B signals are each converted to 8-bit data.
  • The average value/histogram calculation circuit 107 extracts, from the image data for the entire area that have undergone the gamma correction, image data corresponding to, for instance, a 512×512 pixel area centered on a focal point detection area, and calculates a gain RF-gain for white balance fine adjustment of the R signal and a gain BF-gain for white balance fine adjustment of the B signal using, for instance, the following formulae (1) and (2). The gains RF-gain and BF-gain are stored in the register. For instance, if color filters are provided over the 512×512 pixel area as illustrated in FIG. 5, the average values of the R signal, the G signal and the B signal may be calculated using formulae (3)˜(5), and the gains RF-gain and BF-gain for white balance fine adjustment calculated from the ratio of the G signal average value Gave to the R signal average value Rave and the ratio of the G signal average value Gave to the B signal average value Bave, as indicated in formulae (1) and (2).

  • RF-gain=Gave/Rave  (1)

  • BF-gain=Gave/Bave  (2)

  • Here, Rave=Rsum/number of R pixels  (3)

  • Gave=Gsum/number of G pixels  (4)

  • Bave=Bsum/number of B pixels  (5)
  • By adopting this averaging method, the gradation average values of the R signals, the G signals and the B signals in the image data are determined, which has been found through experience to improve the results of white balance adjustment (the overall white balance).
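  • As an illustration of the averaging method of formulae (1)˜(5), the sketch below computes the per-color gradation averages over an extracted region and derives the fine adjustment gains. It is only a minimal example: the Bayer-style color filter layout, the array names and the use of Python/NumPy are assumptions made for demonstration and are not taken from FIG. 5.

```python
import numpy as np

def wb_fine_gains_average(region, cfa):
    """Average-value method of formulae (1)-(5): per-colour gradation averages
    over the extracted region, then the fine adjustment gains."""
    r_ave = region[cfa == 'R'].mean()    # formula (3): Rsum / number of R pixels
    g_ave = region[cfa == 'G'].mean()    # formula (4): Gsum / number of G pixels
    b_ave = region[cfa == 'B'].mean()    # formula (5): Bsum / number of B pixels
    return g_ave / r_ave, g_ave / b_ave  # formulae (1) and (2): RF-gain, BF-gain

# Illustrative Bayer-style mosaic (layout assumed, not taken from FIG. 5).
cfa = np.empty((512, 512), dtype='<U1')
cfa[0::2, 0::2] = 'R'
cfa[0::2, 1::2] = 'G'
cfa[1::2, 0::2] = 'G'
cfa[1::2, 1::2] = 'B'
region = np.random.randint(0, 256, (512, 512)).astype(float)
rf_gain, bf_gain = wb_fine_gains_average(region, cfa)
print(rf_gain, bf_gain)
```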
  • FIG. 6 illustrates an example of a positional arrangement of focal point detection areas. In this embodiment, an area AC located at the center of the image-capturing image plane, an area AR to the right viewed by the photographer, an area AL to the left, an area AU on the upper side and an area AD on the lower side are provided. One of these areas is selected based upon a preset algorithm and image data corresponding to 512×512 areas with the selected area located at the center are extracted. Based upon the extracted image data, the gain RF-gain for white balance fine adjustment of the R signal and the gain BF-gain for white balance fine adjustment of the B signal are calculated as described earlier.
  • In reference to FIG. 7, the structure of the focal point detection device 36 and the principle of the focal point detection operation performed by the focal point detection device 36 are explained. The focal point detection device 36 comprises an infrared light blocking filter 700, a visual field mask 900, a field lens 300, an opening mask 400, image reforming lenses 501 and 502, an image sensor 310 and the like. An area 800 is the exit pupil of the taking lens 91 (see FIG. 1). In addition, areas 801 and 802 contain the images obtained by reverse-projecting the opening portions 401 and 402 bored in the opening mask 400 onto the area 800 by means of the field lens 300. It is to be noted that in FIG. 7, the infrared light blocking filter 700 may be located either on the right side or the left side of the field mask 900. The light fluxes entering via the areas 801 and 802 form a focal point on an image-capturing device equalizing surface 600, then travel through the infrared light blocking filter 700, the field mask 900, the field lens 300, the opening portions 401 and 402 and the image reforming lenses 501 and 502 and form an image on image sensor arrays 310 a and 310 b.
  • The pair of subject images formed on the image sensor arrays 310 a and 310 b move close to each other in a so-called front pin state in which a sharp image of the subject is formed by the taking lens 91 further frontward (toward the subject) relative to the image-capturing device equalizing surface 600, whereas they move further away from each other in a so-called rear pin state in which a sharp image of the subject is formed further rearward relative to the image-capturing device equalizing surface 600. In addition, when the subject images formed on the image sensor arrays 310 a and 310 b are away from each other by a specific distance, a sharp image of the subject is located on the image-capturing device equalizing surface 600. Thus, the focal adjustment status at the taking lens 91 can be calculated by converting the pair of subject images to electrical signals through photoelectric conversion performed at the image sensor arrays 310 a and 310 b and determining the relative distance between the pair of subject images through arithmetic processing on the signals. This focal adjustment status is calculated as the quantity of misalignment that indicates the direction in which and the distance over which the position of a sharp image formed by the taking lens 91 is located relative to the image-capturing device equalizing surface 600. In FIG. 7, the area in which the images on the image sensor arrays 310 a and 310 b which are projected in reverse by the image reforming lenses 501 and 502 overlap each other in the vicinity of the image-capturing device equalizing surface 600 corresponds to a focal point detection area. Through this method, the focal point is detected for each of the five areas within the photographic image plane.
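  • The relative distance between the pair of subject images is what the arithmetic processing mentioned above must determine. A minimal sketch of one common way to do this is shown below; the sum-of-absolute-differences search over candidate shifts (and the assumption that the two signals are NumPy float arrays) is an illustration only, since the text does not specify the actual arithmetic used by the focal point detection device 36.

```python
import numpy as np

def image_separation(sig_a, sig_b, max_shift=20):
    """Find the shift that minimises the mean absolute difference between the
    two sensor-array signals; this measures how far apart the pair of subject
    images currently are."""
    n = len(sig_a)
    best_shift, best_err = 0, float('inf')
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            diff = sig_a[s:] - sig_b[:n - s]
        else:
            diff = sig_a[:n + s] - sig_b[-s:]
        err = np.abs(diff).mean()
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# The misalignment quantity (direction and distance of the sharp image relative
# to the image-capturing device equalizing surface 600) then follows from how
# the measured separation compares with the in-focus separation.
```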
  • The focal point detection device 36 makes a decision as to which selection area is to be selected for acquisition of focal point information during the actual image-capturing operation after focal points have been detected for the individual areas as explained above. For instance, the area in which the subject closest to the camera is captured may be selected among the plurality of areas. Then, the focal point detection data are utilized in a focal matching operation while image-capturing is in progress. In addition, the image data corresponding to 512×512 areas with the selected focal point detection area located at the center are extracted from the output signal from the white balance sensor 35A. Based upon the image data thus extracted, the gain RF-gain for white balance fine adjustment of the R signal and the BF-gain for white balance fine adjustment of the B signal are calculated.
  • The gains RF-gain and BF-gain for white balance fine adjustment may be calculated as described below, based upon histograms of the brightness levels of the R, G and B signals calculated at the average value/histogram calculation circuit 107. The average value/histogram calculation circuit 107 calculates histograms of the brightness levels of the R, G and B signals. In other words, it obtains histograms as illustrated in FIGS. 8A-8C by counting the number of pixels at each brightness level for the various colors. In this process, assuming that the 95% level values of the individual colors, i.e., R, G and B, are, for instance, R=180, B=200 and G=190, the white balance fine adjustment gains are calculated as RF-gain=190/180 and BF-gain=190/200. It is to be noted that a 95% level value is the brightness level at which the cumulative number of dots or pixels reaches 95% of the total number of dots of that signal (for the G signal, 95% of the entire number of G-signal dots).
  • By adopting this histogram method, histograms that reflect the dispersion of the gradation distribution of the individual R, G and B signals in the image data are obtained, and by determining the white balance fine adjustment gains based upon the shapes of these histograms, the white balance can be adjusted with emphasis on a specific concentrated area (the area indicated by the white points), which has been found through experience to improve the results of white balance adjustment. It is to be noted that the averaging method and the histogram method may be used in combination.
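  • A minimal sketch of the histogram (95% level value) method follows, taking the 8-bit gamma-corrected values of each color as input. The cumulative-count implementation and the function names are assumptions; with the example values quoted above (R=180, G=190, B=200) it reproduces RF-gain=190/180 and BF-gain=190/200.

```python
import numpy as np

def level_95(channel, max_level=255):
    """Brightness level at which the cumulative pixel count first reaches 95%
    of that colour's total pixel count (the '95% level value')."""
    hist = np.bincount(channel.ravel(), minlength=max_level + 1)
    return int(np.searchsorted(np.cumsum(hist), 0.95 * channel.size))

def wb_fine_gains_histogram(r, g, b):
    r95, g95, b95 = level_95(r), level_95(g), level_95(b)
    return g95 / r95, g95 / b95   # RF-gain, BF-gain

# With the example values in the text (R=180, G=190, B=200) this yields
# RF-gain = 190/180 and BF-gain = 190/200.
```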
  • The block processing circuit 200 in FIG. 4, which is constituted of a white balance fine adjustment circuit 210 and an interpolation/profile processing circuit 220, engages in various types of signal processing in units of n×m sets of pixel data, i.e., in blocks. The white balance fine adjustment circuit 210 performs white balance fine adjustment on the R signals and B signals that have undergone the processing performed at the gamma correction circuit 106 and are stored in the buffer memory 30, by multiplying the R and B signals in each 20×20 pixel area block by the gains RF-gain and BF-gain for white balance fine adjustment calculated at the average value/histogram calculation circuit 107.
  • The interpolation/profile processing circuit 220 is provided with a G interpolation circuit 221, a band pass filter (BPF) 222, a clip circuit 223, a gain circuit 224, a low pass filter (LPF) 225, a color difference signal generation circuit 226, an interpolation/low pass filter (LPF) circuit 228, a matrix circuit 229, an adder 230 and a median circuit 232. The interpolation/profile processing circuit 220 performs format processing for JPEG data compression for individual data blocks corresponding to 20×20 pixel areas in the image data having undergone white balance fine adjustment to generate Y signals corresponding to 8×8 pixel areas and Cb signals and Cr signals each corresponding to 8×8 pixel areas. A brightness signal Y contains a brightness signal Y1 indicating the low frequency component of the G signal and a profile extraction signal Y2 corresponding to the high frequency component of the G signal, as will be explained later.
  • Block signals corresponding to 20×20 pixel areas output from the white balance adjustment circuit 210 are input to the G interpolation circuit 221 where the G component of each pixel area corresponding to an R signal or a B signal is calculated through an interpolation operation for the data corresponding to the central 16×16 pixel area. Namely, as illustrated in FIG. 9, the G component at the vacant lattice point (the pixel at line 3, row 3, where a B signal is obtained) at the center of D51 representing a 5×5 pixel area data block (line 1, row 1˜line 5, row 5) is calculated for input data D20 corresponding to 20×20 pixel areas. This value is used as a substitute for the G component of the pixel (encircled B) at line 3, row 3 in output data D16 corresponding to 16×16 pixel areas.
  • Next, the G component at the vacant lattice point (the pixel at line 4, row 4, where an R signal is obtained) at the center of data D52 representing a 5×5 pixel area block (line 2, row 2˜line 6, row 6) is calculated for the input data D20 corresponding to the 20×20 pixel areas, and this value is used as a substitute for the G component of the pixel (encircled R) at line 4, row 4 in the output data D16 corresponding to 16×16 pixel areas. By performing this processing repeatedly, G interpolation processing is implemented for all the vacant lattice points in the 16×16 pixel area so that the output data D16 are obtained. Then, while the output data D12 corresponding to 12×12 pixel areas within the output data D16 are output to the band pass filter 222 and the low pass filter 225, the output data D16 corresponding to the 16×16 pixel area are output to the color difference signal generation circuit 226.
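  • The G interpolation can be pictured with the following sketch, which fills in the G component at every vacant lattice point of the central 16×16 area of a 20×20 block. The four-neighbour average used here is only an assumed stand-in, since the text does not give the interpolation coefficients applied to the 5×5 data block; the array names are likewise illustrative.

```python
import numpy as np

def g_interpolate(block20, cfa20):
    """Fill in the G component at every R/B position of the central 16x16 area
    of a 20x20 block (the vacant lattice points); returns the 16x16 G plane."""
    g16 = block20[2:18, 2:18].astype(float).copy()
    for i in range(2, 18):
        for j in range(2, 18):
            if cfa20[i, j] != 'G':
                # In a Bayer-style mosaic the four direct neighbours of an
                # R- or B-sited pixel are all G-sited.
                g16[i - 2, j - 2] = (block20[i - 1, j] + block20[i + 1, j] +
                                     block20[i, j - 1] + block20[i, j + 1]) / 4.0
    return g16
```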
  • The band pass filter 222 extracts the intermediate frequency component (a frequency component that is high enough to allow extraction of the subject profile and is referred to as the high frequency component for convenience) in the G signal in the 12×12 pixel area block output by the G interpolation circuit 221. Namely, as illustrated in FIG. 10, BPF output data are obtained by multiplying the data corresponding to a 5×5 pixel area D5 (line 5, row 5˜line 9, row 9) in the input data D12 corresponding to the 12×12 pixel areas with band pass filter coefficients, and the value of the BPF output data is used as a substitute for the data (bold letter G) at line 7, row 7 in the output data D8 corresponding to an 8×8 pixel area block. By repeating this processing, all the pixel data in the 8×8 pixel area block are converted to G data that have undergone BPF, to generate the output data D8.
  • The clip circuit 223 clips and cuts each set of data D8 corresponding to an 8×8 pixel area block output by the band pass filter 222 at a preset level. The gain circuit 224 multiplies the output from the clip circuit 223 with a preset gain.
  • The low pass filter 225 extracts the low frequency component in the G signals in the 12×12 pixel area block output by the G interpolation circuit 221. Namely, as illustrated in FIG. 11, LPF output data are obtained by multiplying the 5×5 pixel area data D5 (line 5, row 5˜line 9, row 9) in the input data D12 corresponding to the 12×12 pixel areas with low pass filter coefficients, and the value of the LPF output data is used as a substitute for the data at line 7, row 7 (hatched area) in the output data D8 corresponding to the 8×8 pixel area block. By repeating this processing, all the pixel data in the 8×8 pixel area block are converted to G data that have undergone LPF, to generate the output data D8.
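  • Both the band pass filtering and the low pass filtering reduce a 12×12 G block to an 8×8 block by sliding a 5×5 coefficient window. A hedged sketch of that reduction is given below; the kernel values are left as parameters because the text does not specify them, and the clip circuit 223 and gain circuit 224 would further limit and scale the BPF output.

```python
import numpy as np

def filter_12_to_8(d12, kernel5):
    """Slide a 5x5 coefficient window over a 12x12 G block: the window centred
    on each interior position is multiplied by the coefficients and summed,
    giving an 8x8 output block."""
    out = np.zeros((8, 8))
    for i in range(8):
        for j in range(8):
            out[i, j] = np.sum(d12[i:i + 5, j:j + 5] * kernel5)
    return out

# The same loop serves both paths; only the 5x5 coefficients differ:
#   bpf_out = filter_12_to_8(d12, bpf_kernel)   # then clipped (223) and gained (224)
#   lpf_out = filter_12_to_8(d12, lpf_kernel)
```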
  • As illustrated in FIG. 12, the color difference signal generation circuit 226 generates intermediate data D16-3 that contain (B-G) signals and (R-G) signals based upon RGB signal input data D16-1 corresponding to a 16×16 pixel area block, which are the output from the white balance fine adjustment circuit 210 and G signal input data D16-2 corresponding to the 16×16 pixel area block, which are the output from the G interpolation circuit 221. In addition, it separates the intermediate data D16-3 into (B-G) color difference signal output data D16-4 and (R-G) color difference signal output data D16-5.
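  • A small sketch of the color difference generation follows: at each R- or B-sited pixel, the interpolated G value is subtracted from the original color sample. How the intermediate data D16-3 stores positions that carry no (R-G) or (B-G) sample is not detailed in the text, so the NaN placeholders below are an assumption.

```python
import numpy as np

def color_difference(rgb16, g16, cfa16):
    """At each R-sited pixel form (R-G), at each B-sited pixel form (B-G),
    using the interpolated G plane; other positions are left as NaN."""
    r_g = np.full((16, 16), np.nan)
    b_g = np.full((16, 16), np.nan)
    r_mask, b_mask = (cfa16 == 'R'), (cfa16 == 'B')
    r_g[r_mask] = rgb16[r_mask] - g16[r_mask]
    b_g[b_mask] = rgb16[b_mask] - g16[b_mask]
    return r_g, b_g
```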
  • The 8-bit (B-G) signals and (R-G) signals corresponding to 16×16 pixel areas are input to the interpolation/LPF circuit 228 from the color difference signal generation circuit 226. The interpolation/LPF circuit 228 performs interpolation calculation on the (B-G) signals and (R-G) signals in units of 5×5 pixel area blocks and, at the same time, performs low pass filtering processing to extract the low band signal, and it outputs the resulting (B-G) signals and (R-G) signals corresponding to the 12×12 pixel areas to the Cb and Cr matrix portions of the matrix circuit 229. In addition, it outputs (B-G) signals and (R-G) signals corresponding to 8×8 pixel areas to the Y matrix portion of the matrix circuit 229.
  • When the (R-G) data corresponding to a 5×5 pixel area block are as presented in FIG. 13, the interpolation calculation and the low pass filtering processing calculation described above are performed as expressed in the following formula (6).

  • Interp R-G(i,j)=
  • Here, kc1˜kc9 and Ktr-g each represents a coefficient. In general, when interpolation filtering and band-restriction LPF are implemented at the same time, the following restriction is imposed on the filter coefficients. The explanation here is given in one-dimensional terms for purposes of simplification. Let us now consider a situation in which an actual sample point is present in N cycles among interpolated sample points, e.g., a, a, b, b, a, a, b, b (a represents an actual sample point and b represents a sample point to be interpolated; in this example, an actual sample point is present in four cycles). When the sample points are to be interpolated using an odd-degree symmetrical digital filter of degree (2n+1) (where (2n+1) is larger than N), since the sample points after the interpolation, too, must be uniform if the actual sample points are uniform, the following restrictions are applied to the filter coefficients.
  • With C(k) representing the kth filter coefficient, the sums of coefficients in N sets of coefficients must be equal to one another, as expressed below.
  • 2ΣC(N×i)=Σ[C(N×i+1)+C(N×i+N−1)]= . . . =Σ[C(N×i+k)+C(N×i+N−k)]
  • Here, i represents an integer equal to or greater than 0 for which the filter coefficient index remains equal to or less than 2n+1, and k represents an integer equal to or greater than 0 and smaller than n.
  • In the case of two-dimensional processing, filters to which similar restrictions apply may be multiplied together in the horizontal direction and the vertical direction to constitute a two-dimensional filter. Since sample points are interpolated over 2-pixel cycles, as illustrated in FIGS. 5 and 13 in this embodiment, N=2, and the sum of the even-numbered filter coefficients and the sum of the odd-numbered filter coefficients must be equal to each other. Namely,

  • ΣC(2×i)=ΣC(2×i+1)
  • When a degree-5 symmetrical type filter as expressed in formula (6) above is employed in two-dimensional processing,

  • kc1+2×kc3+4×kc5+2×kc7+kc9=4×kc2+4×kc4+2×kc6+2×kc8
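  • The restriction can be checked numerically. The sketch below builds a one-dimensional degree-5 symmetrical filter whose even-indexed and odd-indexed coefficient sums are equal (the N=2 case) and confirms that a uniform set of actual sample points remains uniform after interpolation; the coefficient values are arbitrary illustrations, not the kc1˜kc9 of formula (6).

```python
import numpy as np

# Degree-5 symmetrical taps [c2, c1, c0, c1, c2]; for N = 2 the restriction is
# that the even-indexed and odd-indexed coefficient sums match: c0 + 2*c2 == 2*c1.
c0, c1, c2 = 0.5, 0.5, 0.25          # 0.5 + 2*0.25 == 2*0.5, restriction satisfied
taps = np.array([c2, c1, c0, c1, c2])

signal = np.zeros(20)
signal[::2] = 1.0                    # uniform actual samples, zeros to be interpolated

out = np.convolve(signal, taps, mode='same')
print(out[2:18])                     # away from the edges every sample is 1.0
```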
  • Now, an explanation is given on interpolation/LPF processing on (R-G) signals as an example in reference to FIG. 14. For the (R-G) signals in input data D16 corresponding to 16×16 pixel areas, (R-G) data D5 corresponding to a 5×5 pixel area block (line 3, row 3˜line 7, row 7) are multiplied with interpolation/LPF filter coefficients to calculate (R-G) data representing the central area (at line 5, row 5), and these (R-G) data are used as a substitute for data in output data D12 corresponding to a 12×12 pixel area block. By performing this processing repeatedly, the interpolation/LPF processing is performed on all the pixel data corresponding to the 12×12 pixel area block as far as the (R-G) signals are concerned so that output data D12 are obtained. Similar processing is performed for the (B-G) signals, as well, to generate output data corresponding to the 12×12 pixel area block.
  • The matrix circuit 229 is constituted of the Y matrix portion, the Cb matrix portion and the Cr matrix portion. The Y matrix portion, to which (B-G) signals and (R-G) signals corresponding to the 8×8 pixel area block are input from the interpolation/LPF circuit 228 and G signals corresponding to the 8×8 pixel area block are input from the low pass filter 225, generates brightness signals Y1 each having a low frequency component corresponding to an 8×8 pixel area through the following formula (7).

  • Y1(i,j)=[Mkg×G(i,j)+Mkr1×(R-G)(i,j)+Mkb1×(B-G)(i,j)]  (7)
  • Here, Mkg, Mkr1 and Mkb1 each represents a matrix coefficient.
  • The Cb matrix portion and the Cr matrix portion, to which (B-G) signals and (R-G) signals corresponding to the 12×12 pixel area block are respectively input from the interpolation/LPF circuit 228, generate Cb signals and Cr signals corresponding to the 12×12 pixel area block through the following formulae (8) and (9).

  • Cr(i,j)=Mkr2×(R-G)(i,j)+Mkb2×(B-G)(i,j)  (8)

  • Cb(i,j)=Mkr3×(R-G)(i,j)+Mkb3×(B-G)(i,j)  (9)
  • Here, Mkr2, Mkr3, Mkb2 and Mkb3 each represents a matrix coefficient.
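  • The matrix operations of formulae (7)˜(9) amount to the following sketch. The text does not give values for Mkg, Mkr1˜Mkr3 and Mkb1˜Mkb3, so the numbers below are assumptions, chosen only so that the result coincides with the familiar BT.601-style Y/Cb/Cr weighting when expressed in terms of G, (R-G) and (B-G).

```python
def matrix_convert(g, r_g, b_g):
    """Formulae (7)-(9) with illustrative (assumed) coefficient values."""
    mkg, mkr1, mkb1 = 1.0, 0.299, 0.114    # formula (7): Y1
    mkr2, mkb2 = 0.500, -0.081             # formula (8): Cr
    mkr3, mkb3 = -0.169, 0.500             # formula (9): Cb
    y1 = mkg * g + mkr1 * r_g + mkb1 * b_g
    cr = mkr2 * r_g + mkb2 * b_g
    cb = mkr3 * r_g + mkb3 * b_g
    return y1, cr, cb
```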
  • The adder 230 adds together the brightness signal Y1 with the low frequency component corresponding to one of the 8×8 pixel areas output by the matrix circuit 229 and a profile extraction signal Y2 with the high frequency component corresponding to the 8×8 pixel areas output by the gain circuit 224. The profile extraction signal Y2 output by the gain circuit 224 is obtained by extracting only the high-frequency component in the G signal in a 16×16 pixel area having undergone the G interpolation, i.e., by extracting the profile. As a result, by adding the brightness signal Y1 calculated through the formula (7) and the profile extraction signal Y2 calculated at the gain circuit 224, at the adder 230, the brightness/profile extraction signals Y (Y1+Y2) for the entire image are calculated. The results of the addition are stored in the buffer memory 30.
  • The median circuit 232, to which the Cb signals and Cr signals corresponding to 12×12 pixel areas output by the matrix circuit 229 are input, engages in median processing using 9 points, i.e., the 3×3 pixels contained in a 5×5 pixel area block, to output Cr signals and Cb signals corresponding to 8×8 pixels.
  • In the median processing in this embodiment, median filtering processing is performed on 9 sets of data (indicated by X) corresponding to 3×3 pixels and contained in data D3-5 corresponding to the 5×5 pixel areas (line 5, row 5˜line 9, row 9) in the 12×12 pixel data D12 (indicated by the black dots) as illustrated in FIG. 15. Namely, the 9 sets of data are sorted in ascending order or descending order and the central value is used as median processing data. Then, the median processing data thus obtained are used as a substitute for data corresponding to line 7, row 7 in the output data D8 corresponding to 8×8 pixels. By performing this arithmetic operation repeatedly, output data D8 corresponding to the 8×8 pixels are generated for both the Cb signals and the Cr signals. The output data D8 with the Cb signals and the Cr signals are stored in the buffer memory 30.
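  • A minimal sketch of this median processing is shown below: for each of the 8×8 output positions, the nine samples at alternate row and column positions of the surrounding 5×5 window are taken and their median kept. The array names and the NumPy implementation are assumptions.

```python
import numpy as np

def median_12_to_8(d12):
    """For each 8x8 output position take the nine samples at alternate row and
    column positions of the surrounding 5x5 window and keep their median."""
    out = np.zeros((8, 8))
    for i in range(8):
        for j in range(8):
            out[i, j] = np.median(d12[i:i + 5:2, j:j + 5:2])   # 3x3 of the 5x5
    return out

# Applied to both the Cb and the Cr block of 12x12 data output by the matrix circuit.
```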
  • The JPEG compression circuit 33 repeats the following process until the entire image is compressed: from the input data corresponding to each 20×20 pixel area block input to the block processing circuit 200, a single unit of YCrCb signals, formatted to correspond to 8×8 pixels so as to facilitate the JPEG compression method, is extracted based upon the Y signals corresponding to 8×8 pixels generated by the adder 230 and the Cr signals and Cb signals corresponding to 8×8 pixels generated by the median circuit 232, and the extracted data are compressed through the procedure in the known art. The compressed image data are stored in the PC card 34 via the CPU 21.
  • The operation of the electronic still camera structured as described above is now explained. When the half-press switch 22 is operated, the focal point detection device 36 detects the focal adjustment status for each focal point detection area in step S20A. If it is decided in step S20B that the full-press switch 23 has been operated, the quick return mirror swings upward, and the program that implements the photographing sequence in FIG. 16 is executed. In step S21, each pixel at the CCD 26 stores a light-reception signal and when the storage is completed, the electrical charges stored at all the pixels are sequentially read out. In step S22, the image data that have been read out undergo the processing performed at the analog signal processing circuit 27 and then are converted to digital image data at the A/D conversion circuit 28 to be input to the image processing circuit 29. Then, the operation proceeds to step S23 in which processing such as white balance adjustment, gamma gradation control and JPEG formatting processing is performed at the image processing circuit 29. When the image processing is completed, the operation proceeds to step S24 to temporarily store the image data having undergone the image processing in the buffer memory 30. In step S25, the image data are read from the buffer memory 30 and the data are compressed at the JPEG compression circuit 33. In step S26, the compressed image data are stored in the PC card 34.
  • The functions and advantages achieved through this embodiment are explained in further detail.
  • (1) The line processing circuit 100 illustrated in FIG. 3 performs the signal processing that can be implemented in units of individual pixels and in units of individual lines. Namely, the line processing circuit 100 processes the data output by the CCD 26 in pixel sequence, in units of lines. Then, the data having undergone the line processing are temporarily stored in the buffer memory 30, and the subsequent signal processing is performed at the block processing circuit 200 in units of individual n×m (n, m=20, 16, 12, 8) pixel blocks. Thus, the line buffer does not need to become large even in a high image quality electronic still camera having more than 2 million pixels. In other words, unlike in this embodiment, if the signal processing were not performed in units of blocks, buffer memories BM1˜BM4, each corresponding to four lines, would be required for the G interpolation processing, the BPF processing, the interpolation/LPF processing and the median processing, as illustrated in FIG. 17, which would obviously result in an increase in the circuit scale. When the processing is realized in hardware, a reduction in size and cost is achieved. In addition, since the processing that can be executed in units of individual pixels and individual lines is implemented as line processing rather than block processing, the pipeline arithmetic operation can be performed quickly, as in the prior art.
    (2) Since the gains RF-gain and BF-gain for white balance fine adjustment are calculated, as expressed in formulae (1) and (2), based upon an image having undergone white balance adjustment performed using the predetermined white balance adjustment coefficients (the R gain and the B gain), and white balance fine adjustment is then performed on that image data using the RF-gain and BF-gain, the occurrence of a color-cast image can be prevented even if the predetermined white balance adjustment coefficients are poorly adjusted.
    (3) Since the white balance fine adjustment coefficients are calculated using the image data in one area selected from the preset plurality of focal point detection areas, in each of which a subject is present, white balance fine adjustment for the main subject is enabled. In addition, even if defective white balance adjustment occurs due to an aberration of the lens at the periphery of the photographic image plane, the white balance fine adjustment can be implemented based upon image data containing that area, so that the occurrence of a color-cast image can be prevented.
    (4) Since the interpolation/LPF circuit 228 performs the interpolation calculation for the (B-G) signals and (R-G) signals and, at the same time, performs the low pass filtering processing that extracts the low frequency components, the length of time required for the processing is reduced compared to a method in which the signals are processed in the order of interpolation processing, matrix processing and LPF processing to suppress false colors and color moire. In addition, separate hardware for the two steps can be dispensed with and, since the total frequency response can be controlled in a single processing step, ease of control is achieved.
    (5) Since the median processing is performed on the Cr image data and the Cb image data corresponding to 8×8 pixels before performing JPEG compression, false colors and color moire can be suppressed more effectively within a shorter period of time compared to a method in the prior art in which false colors and color moire are suppressed entirely through low pass filtering. In addition, when generating Cr and Cb signals corresponding to 8×8 pixels through the JPEG compression format processing, median processing is performed on nine sets of data corresponding to 3×3 pixels at alternate positions in the horizontal direction and the vertical direction extracted from the 5×5 pixel area block for both the Cb signals and Cr signals, in the 12×12 pixel data having undergone the interpolation/LPF processing and the matrix processing. Therefore, the length of time required for the median processing can be reduced compared to the length of time required when performing median processing on all the 25 sets of data corresponding to 5×5 pixels.
  • While the explanation has been given on an electronic still camera in reference to the embodiments above, the line processing circuit 100 or the block processing circuit 200 may be realized in the form of software by storing an image processing program in a storage medium such as a CD ROM or a floppy disk which can be utilized when performing image processing on a personal computer. In this case, image data that have undergone image-capture at the CCD and digitization should be stored in a large-capacity image data storage medium, and with this storage medium set in a personal computer to take in the image data, the line processing or the block processing described earlier should be performed using the image processing program. For instance, the output data from the black level circuit 105 in FIG. 3 may be stored as raw data at the PC card 34 so that image processing can be performed on the raw data by setting the PC card 34 in the personal computer.
  • FIG. 18 is a block diagram illustrating a configuration for using a personal computer to perform image processing as described above and to store the data in a storage device. Raw data of an image which has been captured in advance (output data from the black level circuit 105, for instance) are taken into a hard disk device 92 via an I/F circuit 91. In addition, a program for implementing the image processing described above is stored in the hard disk device 92 via the I/F circuit 91. The program may be stored in any of a variety of storage media, and by setting such a storage medium in a driver (not shown), the program is taken into the hard disk device 92. Alternatively, by connecting the hard disk device 92 or a personal computer 93 to the internet via the I/F circuit 91, the program may be downloaded via the internet.
  • Image processing as described above is performed by the personal computer 93 in FIG. 18 so that the image can be displayed on a monitor 94 or can be printed out by a printer 95. Compressed image data are stored in the hard disk device 92.
  • When performing image processing on a personal computer as described above, the program should be structured so that if the image data stored in the image data storage medium have already undergone white balance adjustment, only white balance fine adjustment processing is to be performed. In this case, information in regard to the focal point detection area utilized for the focus matching operation of the taking lens among the preset plurality of focal point detection areas should be also stored in the image data storage medium, so that the information can be utilized when selecting data corresponding to an image area related to the focal point detection area during the image processing performed on the personal computer. If, on the other hand, the image data stored in the image data storage medium have not yet undergone white balance adjustment, the program should be structured so that both the white balance adjustment processing and the white balance fine adjustment processing are implemented. In this case, the image-capturing data from the CCD, the color temperature information with respect to the subject detected at the white balance sensor 86 (35A) and the information with respect to the focal point detection area described above should also be stored in the image data storage medium so that the white balance adjustment processing and the white balance fine adjustment processing can be performed based upon these data.
  • The invention further includes, as another aspect, the control program (described above) that can be executed by the controller (e.g., a computer) to control the image processing apparatus as described above. The control program can be implemented in an application specific integrated circuit (ASIC). Alternatively, the control program can be transmitted by a carrier wave over a communications network such as, for example, the World Wide Web, and/or transmitted in a wireless fashion, for example, by radio waves or by infrared waves. The control program can also be transmitted by a carrier wave from a remote storage facility to a local control unit. In such an arrangement, the local control unit interacts with the remote storage facility to transfer all or part of the program, as needed, for execution by the local unit. Accordingly, or alternatively, the control program can be fixed in a computer-readable recording medium such as, for example, a CD-ROM, a computer hard drive, RAM, or other types of memories that are readily removable or intended to remain fixed within the computer.
  • It is to be noted that while the explanation has been given above in reference to a single lens reflex electronic still camera, the present invention may be adopted in an electronic still camera which does not allow lens exchange or in a digital video camera that is capable of taking in dynamic images as well. In addition, while the explanation has been given above on an example adopting the JPEG compression method, the present invention may be adopted when other compression methods are used. The other compression methods referred to here include compression achieved through the TIFF method, compression achieved through the Fractal method and compression achieved through the MPEG method. It is to be noted that the format processing as mentioned in this specification is not restricted to the format processing performed prior to the various types of compression processing described above, and may include non-compression TIFF format processing as well.
  • The circuit structures in the embodiments explained above merely represent examples and the circuit structure may assume the following modes, for instance.
  • (1) In reference to the G interpolation processing, the BPF processing, the LPF processing and the interpolation/LPF processing performed by the block processing circuit 200, the explanation has been given on the assumption that image processing is performed in units of individual blocks each constituted of 20×20, 16×16, 12×12 or 8×8 pixel areas. However, in the various types of processing, the image processing only needs to be performed in units of 5×5 image data blocks.
  • (2) While the closest focal point detection area among a plurality of focal point detection areas is automatically selected to calculate the gains RF-gain and BF-gain for white balance fine adjustment, the algorithm used for this selection process is not restricted to this example. Furthermore, the photographer may manually select one focal point detection area among the five focal point detection areas. In addition, white balance fine adjustment coefficients may be calculated using image data for a specific area with an area corresponding to a photometric area selected from a plurality of photometric areas located at its center. Moreover, an area may be specified using a touch sensor on, for instance, a monitor screen, so that white balance fine adjustment coefficients are calculated for the image data within a specific area defined based upon the image data within the specified area to perform white balance fine adjustment using the white balance fine adjustment coefficients on the next image data sampled.
  • In the electronic still camera described above, two types of data assuming different formats, i.e., irreversibly compressed data obtained through JPEG or the like that have undergone a series of image processing and raw data output by the image-capturing device, can be recorded. The raw data are constituted of the 8-bit RGB data transmitted from a stage preceding the gamma correction circuit 106 in FIG. 3, e.g., the white balance circuit 104 or the black level circuit 105, to the buffer memory 30. The irreversibly compressed data are obtained by compressing, through the JPEG method, the brightness Y data and the color difference Cr and Cb data output by the block processing circuit 200.
  • FIG. 19 is a structural block diagram of an electronic camera 310 that is capable of recording data in the two different data formats described above in another embodiment. In FIG. 19, a taking lens 91 is mounted at the electronic camera 310. A light-receiving surface of an image-capturing device 311 is placed in the image space of the taking lens 91. A timing generator 312 supplies a control pulse for controlling the storage, the discharge, the read and the like of the electronic charges to the image-capturing device 311.
  • Image data output by the image-capturing device 311 are input to an image signal processor 314 via an A/D conversion unit 313. The timing generator 312 supplies the A/D conversion unit 313 and the image signal processor 314 with an operation clock OA.
  • The functions of the image signal processor 314 are achieved by adopting a configuration constituted of a signal level correction unit 315, a white balance adjustment unit 316, a gamma control unit 317, a color interpolation unit 318, a color difference conversion unit 319, a JPEG compression unit 320 and a mode control unit 321.
  • Image data output by the image signal processor 314 are input to a CPU 322. The CPU 322 transmits setting information for the operation mode to the mode control unit 321 in the image signal processor 314 and also supplies the image signal processor 314 with two operation clocks OB and OC.
  • An image memory 323 for temporarily storing image data is provided in the electronic camera 310. The image signal processor 314 and the CPU 322 access the image memory 323 via their own separate data buses. A monitor 325 for displaying monitor images, which is connected to the CPU 322 via a monitor display circuit 324, is provided at the electronic camera 310.
  • The electronic camera 310 is provided with a card interface 326 connected to the CPU 322, which is detachably mounted with a memory card 327, a data terminal 329 through which data are exchanged with an external apparatus, an interface 328 that connects the data terminal 329 to the CPU 322 and an operating member 330 that includes a mode setting button through which various switch outputs are input to the CPU 322. Information indicating whether or not raw data are required is input through the mode setting button 330. If raw data are not required, a fast mode, which is to be detailed later, is set, whereas if raw data are required, an original image mode is set. In the fast mode, JPEG compressed data are recorded, and in the original image mode, raw data are output as well as recording the JPEG compressed data.
  • (Operation in Fast Mode)
  • The following is an explanation of the operation performed by the electronic camera 310 in the fast mode.
  • The user selects whether or not raw data are required by operating the mode setting button 330. The information thus set is transmitted to the mode control unit 321 via the CPU 322.
  • If the user operation indicates that no raw data are required, the mode control unit 321 selects the signal path corresponding to the fast mode for the image signal processor 314. FIG. 20 schematically illustrates the fast mode signal path. As illustrated in FIG. 20, the mode control unit 321 connects the A/D conversion unit 313, the signal level correction unit 315, the white balance adjustment unit 316 and the gamma control unit 317 so as to form a pipeline. Then, the mode control unit 321 supplies the operation clock OA from the timing generator 312 to the signal level correction unit 315, the white balance adjustment unit 316 and the gamma control unit 317 to set these processing units to engage in synchronous operations.
  • When image data are output by the image-capturing device 311, the image data undergo linear quantization at the A/D conversion unit 313 to be converted to 12-16-bit digitized image data. The digitized image data undergo clamp correction and gain control at the signal level correction unit 315, then undergo white balance adjustment at the white balance adjustment unit 316 and are sequentially output in units of single lines to the gamma control unit 317. The gamma control unit 317 performs gamma control on the image data and also outputs data achieved by reducing the number of quantization bits of the image data to approximately 8 bits.
  • The sequence of signal processing operations performed up to this point is identical to that performed by the line processing circuit 100 in FIG. 3, and is executed in real time in units of individual pixels for each line in synchronization with the operation clock OA supplied by the timing generator 312. The output from the gamma control unit 317 (non-linear processed data of approximately 8 bits) is temporarily stored in a storage area 323A in the image memory 323. At this time, the mode control unit 321 allocates a "raw data storage area 323C," which is used in the original image mode to be detailed later, as part of the storage area 323A to increase the storage capacity of the storage area 323A. As a result, it becomes possible to hold in the storage area 323A, in retreat, image data for a plurality of frames whose processing is still in progress. Through this image data retreat operation, the electronic camera 310 is able to start signal processing for the next frame without having to wait for the processing of the preceding frames to be completed.
  • The color interpolation unit 318 reads out the image data from the storage area 323A in units of predetermined blocks to execute color interpolation processing on the image data through local pixel calculation and to calculate the three color components, i.e., R, G and B for each of the pixels. The color difference conversion unit 319 sequentially converts the R, G and B components to color difference data constituted of a brightness Y and color differences Cr and Cb. The processing performed by the color interpolation unit 318 and the color difference conversion unit 319 is identical to that performed by the block processing circuit 200 in FIG. 4 and is executed in conformance to the operation clock OB supplied by the CPU 322.
  • The color difference data (Y, Cb, Cr) resulting from the conversion described above are temporarily stored in a storage area 323B in the image memory 323. In order to preview the captured image at this point, the monitor display circuit 324 reads out the color difference data (Y, Cb, Cr) in the storage area 323B via the CPU 322 and displays the captured image on the monitor 325.
  • The JPEG compression unit 320 reads out the color difference data (Y, Cb, Cr) from the storage area 323B and executes irreversible image compression (DCT conversion, quantization, coding) in synchronization with the operation clock OB. The image data resulting from the irreversible compression are recorded in the memory card 327 via the CPU 322 and the card interface 326.
  • Depending upon the compression rate setting, the CPU 322 may directly read out the color difference data (Y, Cb, Cr) from the storage area 323B to record them in the memory card 327 via the card interface 326.
  • The processing in the fast mode is completed when the operation described above ends.
  • (Operation in Original Image Mode)
  • If, on the other hand, the user operation indicates that raw data are required via the mode setting button 330, the mode control unit 321 selects the signal path corresponding to the original image mode for the image signal processor 314. FIG. 21 schematically illustrates the original image mode signal path.
  • As shown in FIG. 21, the mode control unit 321 sets the signal path so that the output from the white balance adjustment unit 316 is provided to the gamma control unit 317 via the image memory 323. Then, the mode control unit 321 supplies the signal level correction unit 315 and the white balance adjustment unit 316 with the operation clock OA from the timing generator 312. In addition, the mode control unit 321 switches the operation clock for the gamma control unit 317 to the operation clock OC, which is faster than the operation clock OA.
  • When image data are output by the image-capturing device 311, the image data undergo linear quantization at the A/D conversion unit 313 to be converted to 12˜16-bit digitized image data. The digitized image data sequentially undergo processing at the signal level correction unit 315 and the white balance adjustment unit 316, and then are temporarily stored in the storage area 323C in the image memory 323 as approximately 12˜16-bit raw data. The gamma control unit 317 performs gamma control while reading out the raw data from the storage area 323C in synchronization with the fast operation clock OC and outputs the processed data as non-linear processed data of approximately 8 bits.
  • The approximately 8-bit non-linear processed data are temporarily stored in the storage area 323A in the image memory 323. The color interpolation unit 318 reads out the image data from the storage area 323A in units of predetermined blocks to execute interpolation processing through local pixel calculation, and to calculate the three color components, i.e., R, G and B, for each pixel. The color difference conversion unit 319 sequentially converts the R, G and B components to color difference data, which is constituted of a brightness Y and color differences Cr and Cb.
  • The color difference data (Y, Cb, Cr) resulting from the conversion described above are temporarily stored in the storage area 323B in the image memory 323. The JPEG compression unit 320 executes image compression (DCT conversion, quantization, coding) in synchronization with the operation clock OB while reading out the color difference data (Y, Cb, Cr) as necessary from the storage area 323B. The image data achieved through the irreversible compression are then recorded in the memory card 327 via the CPU 322 and the card interface 326.
  • It is to be noted that depending upon the compression rate setting, the CPU 322 may directly read out the color difference data (Y, Cb, Cr) from the storage area 323B to record the color difference data in the memory card 327 via the card interface 326.
  • The raw data remain intact in the storage area 323C. The CPU 322 reads out the raw data and outputs the raw data thus read out through the data terminal 329 via the interface 328. As a result, the raw data are transferred and stored into an external storage medium or the like connected to the data terminal 329.
  • When the processing described above ends, the operation in the original image mode is completed.
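  • The difference between the two signal paths can be summarized in the following rough sketch, in which trivial stand-in functions replace the actual processing units; the helper implementations and the dictionary keys naming the storage areas are assumptions made purely for illustration.

```python
import numpy as np

def level_correct(x):  return x - x.min()                         # stand-in clamp/gain
def white_balance(x):  return x * 1.0                             # stand-in WB multiply
def gamma(x):          return np.sqrt(x / max(x.max(), 1)) * 255  # stand-in 8-bit gamma
def to_ycbcr(x):       return x                                   # stand-in block processing
def jpeg_compress(x):  return x.tobytes()                         # stand-in compression

def process_frame(sensor_data, original_image_mode, memory, card):
    data = white_balance(level_correct(sensor_data.astype(float)))
    if original_image_mode:
        memory['raw_323C'] = data.copy()   # raw data parked for later read-out
    data = gamma(data)                     # approximately 8-bit non-linear data
    memory['area_323A'] = data
    memory['area_323B'] = to_ycbcr(memory['area_323A'])
    card.append(jpeg_compress(memory['area_323B']))

memory, card = {}, []
process_frame(np.random.randint(0, 4096, (16, 16)), True, memory, card)
```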
  • As explained above, in this embodiment, the mode control unit 321 dynamically selects the signal path within the image signal processor 314. As a result, the raw data are stored in the storage area 323C in the original image mode so that they can be utilized later.
  • In the fast mode, on the other hand, the higher speed in the signal processing is achieved synergistically through the two high-speed functions described below.
  • (1) Raw data read/write at the image memory 323 is omitted.
    (2) The sequence of signal processing operations (image-capturing device 311->A/D conversion unit 313->signal level correction unit 315->white balance adjustment unit 316->gamma control unit 317->storage area 323A) is implemented in real time in synchronization with the operation clock OA.
  • Thus, it is possible to nearly complete the signal processing up to the gamma control in the fast mode within the period of time required in the original image mode to store the raw data in the image memory 323.
  • In the embodiment, the storage area 323C in an idling state is effectively utilized as part of the storage area 323A in the fast mode. As a result, it becomes possible to utilize the storage area 323A with a larger capacity as a retreat area during signal processing so that the photographing enabling intervals in the fast mode (in particular, during continuous photographing) can be greatly reduced.
  • In addition, when the mode control unit 321 selects the signal path for the original image mode, it also switches the operation clock of the gamma control unit 317 from the operation clock OA to the faster operation clock OC. Thus, the length of time required for gamma control processing is also minimized in the original image mode.
  • While an explanation is given above in reference to the embodiment on an example in which raw data are output to the outside in the original image mode, the present invention is not limited to this example. For instance, the CPU 322 may store raw data, either in the original state or in a reversibly compressed state, in a recording medium such as a PC card mounted at the electronic camera.
  • In addition, in the embodiment explained above, the storage area 323C in an idling state is effectively utilized as part of the storage area 323A in the fast mode so that the storage area 323A can be used as a buffer area for non-linear processed data of approximately 8 bits. However, the present invention is not limited to this example. In general, the storage area 323C, which is left in an idling state in the fast mode, may be utilized as a retreat area for image data undergoing processing, e.g., compressed image data yet to be recorded in the memory card and image data in the process of being compressed. When such a structure is adopted, too, a photographing operation for the next frame can be started without having to wait for the completion of the image data processing.
  • Furthermore, while the output from the white balance adjustment unit 316 is recorded in the area 323A of the image memory 323 as raw data in the original image mode in the embodiment described above, the present invention is not limited to this example. As a general rule, as long as image data have not undergone any irreversible signal processing such as a reduction of the number of quantization bits, gradation conversion and pixel thinning, they can be used as raw data faithful to the original image. Consequently, an output from the A/D conversion unit 313, an output from the signal level correction unit 315, a signal manifesting immediately after black level correction or the like may be recorded in the image memory 323 as raw data, instead.
  • Moreover, while the raw data are extracted from the stage preceding the gamma control unit 317, which performs irreversible gradation conversion in the embodiment, the present invention is not limited to this example. For instance, if the electronic camera is provided with a signal processing unit that engages in irreversible pixel thinning, raw data may be extracted from a stage preceding the signal processing unit.
  • The following advantages are achieved by adopting the embodiment.
  • (1) In the fast mode, output data from the gamma control unit 317, which performs irreversible processing, are first recorded in the image memory 323, whereas in the original image mode, output data from the WB adjustment unit 316, at the stage preceding the gamma control unit 317, are temporarily recorded in the image memory 323. In other words, through the control implemented by the mode control unit 321, the data path is dynamically switched. As a result, the length of signal processing time is reduced in the fast mode, while reliable utilization of raw data is achieved in the original image mode.
    (2) The raw data storage area used in the original image mode is effectively utilized as a data retreat area in the fast mode. As a result, the data retreat area is substantially enlarged to achieve a further reduction in the photographing enabling intervals in the fast mode.
    (3) When the user specifies raw data utilization through an operation of the mode setting button 330, the original image mode is selected, whereas the fast mode is selected if the user does not specify raw data utilization. Thus, the correct selection can be made between the fast mode and the original image mode in correspondence to whether or not the use of raw data is required.
    (4) When the signal path for the original image mode is selected, the operation clock of the gamma control unit 317 is switched to a faster operation clock. Consequently, a maximum speed in the signal processing is achieved in the original image mode, as well.

Claims (1)

1. A digital camera comprising:
an image-capturing device that captures a subject image having passed through a taking lens and outputs image data;
a white balance adjustment circuit that performs white balance adjustment on the image data output by said image-capturing device;
a white balance fine adjustment coefficient calculation circuit that calculates white balance fine adjustment coefficients based upon image data having undergone white balance adjustment output by said white balance adjustment circuit; and
a white balance fine adjustment circuit that performs white balance fine adjustment on the image data having undergone the white balance adjustment output by said white balance adjustment circuit using said white balance fine adjustment coefficients.
US13/067,811 1998-06-30 2011-06-28 Digital camera and storage medium for image signal processing for white balance control Abandoned US20110261224A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/067,811 US20110261224A1 (en) 1998-06-30 2011-06-28 Digital camera and storage medium for image signal processing for white balance control
US13/848,424 US8878956B2 (en) 1998-06-30 2013-03-21 Digital camera and storage medium for image signal processing for white balance control

Applications Claiming Priority (16)

Application Number Priority Date Filing Date Title
JP10-183921 1998-06-30
JP10183921A JP2000023085A (en) 1998-06-30 1998-06-30 Digital camera and image signal processing storage medium
JP10-183920 1998-06-30
JP10183920A JP2000023184A (en) 1998-06-30 1998-06-30 Digital camera and picture signal processing storage medium
JP10183918A JP2000023083A (en) 1998-06-30 1998-06-30 Digital camera and storage medium for image signal processing
JP10-183918 1998-06-30
JP10-183919 1998-06-30
JP10183919A JP2000023084A (en) 1998-06-30 1998-06-30 Digital camera and storage medium for image signal processing
JP10-237321 1998-08-24
JP23732198A JP4182566B2 (en) 1998-08-24 1998-08-24 Digital camera and computer-readable recording medium
US34251299A 1999-06-29 1999-06-29
JP11-213299 1999-07-28
JP21329999A JP4281161B2 (en) 1999-07-28 1999-07-28 Electronic camera
US09/497,482 US7253836B1 (en) 1998-06-30 2000-02-04 Digital camera, storage medium for image signal processing, carrier wave and electronic camera
US11/819,666 US20070268379A1 (en) 1998-06-30 2007-06-28 Digital camera, storage medium for image signal processing, carrier wave and electronic camera
US13/067,811 US20110261224A1 (en) 1998-06-30 2011-06-28 Digital camera and storage medium for image signal processing for white balance control

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/819,666 Continuation US20070268379A1 (en) 1998-06-30 2007-06-28 Digital camera, storage medium for image signal processing, carrier wave and electronic camera

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/848,424 Continuation US8878956B2 (en) 1998-06-30 2013-03-21 Digital camera and storage medium for image signal processing for white balance control

Publications (1)

Publication Number Publication Date
US20110261224A1 true US20110261224A1 (en) 2011-10-27

Family

ID=38324370

Family Applications (5)

Application Number Title Priority Date Filing Date
US09/497,482 Expired - Lifetime US7253836B1 (en) 1998-06-30 2000-02-04 Digital camera, storage medium for image signal processing, carrier wave and electronic camera
US11/819,666 Abandoned US20070268379A1 (en) 1998-06-30 2007-06-28 Digital camera, storage medium for image signal processing, carrier wave and electronic camera
US11/819,994 Expired - Fee Related US7808533B2 (en) 1998-06-30 2007-06-29 Electronic camera having signal processing units that perform signal processing on image data
US13/067,811 Abandoned US20110261224A1 (en) 1998-06-30 2011-06-28 Digital camera and storage medium for image signal processing for white balance control
US13/848,424 Expired - Fee Related US8878956B2 (en) 1998-06-30 2013-03-21 Digital camera and storage medium for image signal processing for white balance control

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US09/497,482 Expired - Lifetime US7253836B1 (en) 1998-06-30 2000-02-04 Digital camera, storage medium for image signal processing, carrier wave and electronic camera
US11/819,666 Abandoned US20070268379A1 (en) 1998-06-30 2007-06-28 Digital camera, storage medium for image signal processing, carrier wave and electronic camera
US11/819,994 Expired - Fee Related US7808533B2 (en) 1998-06-30 2007-06-29 Electronic camera having signal processing units that perform signal processing on image data

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/848,424 Expired - Fee Related US8878956B2 (en) 1998-06-30 2013-03-21 Digital camera and storage medium for image signal processing for white balance control

Country Status (1)

Country Link
US (5) US7253836B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090154892A1 (en) * 2007-12-14 2009-06-18 Samsung Techwin Co., Ltd. Recording and reproduction apparatus and methods, and a storing medium having recorded thereon computer program to perform the methods

Families Citing this family (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1365580A4 (en) * 2001-01-31 2006-10-11 Sanyo Electric Co Image processing circuit
US7480083B2 (en) * 2002-07-30 2009-01-20 Canon Kabushiki Kaisha Image processing system, apparatus, and method, and color reproduction method
JP2004070715A (en) * 2002-08-07 2004-03-04 Seiko Epson Corp Image processor
JP3768182B2 (en) * 2002-10-23 2006-04-19 三洋電機株式会社 Electronic camera
US20040196389A1 (en) * 2003-02-04 2004-10-07 Yoshiaki Honda Image pickup apparatus and method thereof
US7499081B2 (en) * 2003-04-30 2009-03-03 Hewlett-Packard Development Company, L.P. Digital video imaging devices and methods of processing image data of different moments in time
WO2005034504A1 (en) * 2003-09-30 2005-04-14 Mitsubishi Denki Kabushiki Kaisha Image pickup device and image pickup method
JP2005249959A (en) * 2004-03-02 2005-09-15 Casio Comput Co Ltd Imaging device, luminescence control method used for the same, and program
US20050275737A1 (en) * 2004-06-10 2005-12-15 Cheng Brett A System and method for effectively utilizing a live preview mode in an electronic imaging device
JP2006005477A (en) * 2004-06-15 2006-01-05 Canon Inc Imaging device, imaging method, and program
JP4464241B2 (en) * 2004-10-12 2010-05-19 Hoya株式会社 White balance adjustment device
US20060119724A1 (en) * 2004-12-02 2006-06-08 Fuji Photo Film Co., Ltd. Imaging device, signal processing method on solid-state imaging element, digital camera and controlling method therefor and color image data generating method
KR100636971B1 (en) * 2004-12-30 2006-10-19 매그나칩 반도체 유한회사 Apparatus for generation of focus data in image sensor and method for generation the same
US9769354B2 (en) 2005-03-24 2017-09-19 Kofax, Inc. Systems and methods of processing scanned data
US9137417B2 (en) 2005-03-24 2015-09-15 Kofax, Inc. Systems and methods for processing video data
US7545529B2 (en) * 2005-03-24 2009-06-09 Kofax, Inc. Systems and methods of accessing random access cache for rescanning
JP4500229B2 (en) * 2005-08-01 2010-07-14 富士フイルム株式会社 Imaging device
JP4935049B2 (en) * 2005-10-27 2012-05-23 セイコーエプソン株式会社 Image processing apparatus, image processing method, and image processing program
CN101689357B (en) 2007-04-11 2015-03-04 Red.Com公司 Video camera
US8237830B2 (en) 2007-04-11 2012-08-07 Red.Com, Inc. Video camera
US8497928B2 (en) * 2007-07-31 2013-07-30 Palm, Inc. Techniques to automatically focus a digital camera
WO2009069254A1 (en) 2007-11-27 2009-06-04 Panasonic Corporation Dynamic image reproduction device, digital camera, semiconductor integrated circuit, and dynamic image reproduction method
US9767354B2 (en) 2009-02-10 2017-09-19 Kofax, Inc. Global geographic information retrieval, validation, and normalization
US8774516B2 (en) 2009-02-10 2014-07-08 Kofax, Inc. Systems, methods and computer program products for determining document validity
US8958605B2 (en) 2009-02-10 2015-02-17 Kofax, Inc. Systems, methods and computer program products for determining document validity
US9576272B2 (en) 2009-02-10 2017-02-21 Kofax, Inc. Systems, methods and computer program products for determining document validity
US9349046B2 (en) 2009-02-10 2016-05-24 Kofax, Inc. Smart optical input/output (I/O) extension for context-dependent workflows
KR101044207B1 (en) * 2009-06-15 2011-06-29 엘지전자 주식회사 Cooker and method for controlling the same
KR101044147B1 (en) 2009-06-15 2011-06-24 엘지전자 주식회사 Cooker and method for controlling the same
JP5045731B2 (en) * 2009-11-04 2012-10-10 カシオ計算機株式会社 Imaging apparatus, white balance setting method, and program
US20110167092A1 (en) * 2010-01-06 2011-07-07 Baskaran Subramaniam Image caching in a handheld device
US20110285745A1 (en) * 2011-05-03 2011-11-24 Texas Instruments Incorporated Method and apparatus for touch screen assisted white balance
JP2012109900A (en) * 2010-11-19 2012-06-07 Aof Imaging Technology Ltd Photographing device, photographing method and program
JP5804856B2 (en) * 2011-09-07 2015-11-04 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP5804857B2 (en) 2011-09-07 2015-11-04 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP5911241B2 (en) 2011-09-07 2016-04-27 キヤノン株式会社 Image processing apparatus, image processing method, and program
US10146795B2 (en) 2012-01-12 2018-12-04 Kofax, Inc. Systems and methods for mobile image capture and processing
US9058580B1 (en) 2012-01-12 2015-06-16 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US8855375B2 (en) 2012-01-12 2014-10-07 Kofax, Inc. Systems and methods for mobile image capture and processing
US9058515B1 (en) 2012-01-12 2015-06-16 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US9483794B2 (en) 2012-01-12 2016-11-01 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US9135684B2 (en) * 2012-11-12 2015-09-15 Marvell World Trade Ltd. Systems and methods for image enhancement by local tone curve mapping
US9167160B2 (en) * 2012-11-14 2015-10-20 Karl Storz Imaging, Inc. Image capture stabilization
CN104969545B (en) * 2013-02-05 2018-03-20 富士胶片株式会社 Image processing apparatus, camera device, image processing method and program
JP2016508700A (en) 2013-02-14 2016-03-22 レッド.コム,インコーポレイテッド Video camera
US9355312B2 (en) 2013-03-13 2016-05-31 Kofax, Inc. Systems and methods for classifying objects in digital images captured using mobile devices
WO2014160426A1 (en) 2013-03-13 2014-10-02 Kofax, Inc. Classifying objects in digital images captured using mobile devices
US9208536B2 (en) 2013-09-27 2015-12-08 Kofax, Inc. Systems and methods for three dimensional geometric reconstruction of captured image data
US20140316841A1 (en) 2013-04-23 2014-10-23 Kofax, Inc. Location-based workflows and services
JP2016518790A (en) 2013-05-03 2016-06-23 コファックス, インコーポレイテッド System and method for detecting and classifying objects in video captured using a mobile device
KR102023501B1 (en) 2013-10-02 2019-09-20 삼성전자주식회사 System on chip including configurable image processing pipeline, and system including the same
CN104581104A (en) * 2013-10-29 2015-04-29 吴福吉 White balance color temperature measurement device for image pick-up devices
US20150116535A1 (en) * 2013-10-30 2015-04-30 Fu-Chi Wu White-balance color temperature measuring device for image pick-up device
WO2015073920A1 (en) 2013-11-15 2015-05-21 Kofax, Inc. Systems and methods for generating composite images of long documents using mobile video data
US9760788B2 (en) 2014-10-30 2017-09-12 Kofax, Inc. Mobile document detection and orientation based on reference object characteristics
US10242285B2 (en) 2015-07-20 2019-03-26 Kofax, Inc. Iterative recognition-guided thresholding and data extraction
US9779296B1 (en) 2016-04-01 2017-10-03 Kofax, Inc. Content-based detection and three dimensional geometric reconstruction of objects in image and video data
CN106973188A (en) 2017-04-11 2017-07-21 北京图森未来科技有限公司 A kind of image transmission and method
KR102620350B1 (en) 2017-07-05 2024-01-02 레드.컴, 엘엘씨 Video image data processing in electronic devices
US10803350B2 (en) 2017-11-30 2020-10-13 Kofax, Inc. Object detection and image cropping using a multi-detector approach
DE102021117548A1 (en) * 2020-07-16 2022-01-20 Samsung Electronics Co., Ltd. IMAGE SENSOR MODULE, IMAGE PROCESSING SYSTEM, AND IMAGE COMPRESSION METHOD
JP2022150350A (en) * 2021-03-26 2022-10-07 セイコーエプソン株式会社 Image processing circuit, circuit arrangement, and electronic apparatus
JP2023083897A (en) * 2021-12-06 2023-06-16 キヤノン株式会社 Electronic apparatus and control method of the same
CN115049541B (en) * 2022-07-14 2024-05-07 广州大学 Reversible gray scale method, system and device based on neural network and image steganography

Family Cites Families (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US613010A (en) * 1898-10-25 William h
JPS62185489A (en) 1986-02-10 1987-08-13 Hitachi Ltd Color television camera device
JP2529210B2 (en) 1986-07-15 1996-08-28 松下電器産業株式会社 White balance adjustment device
US4774565A (en) * 1987-08-24 1988-09-27 Polaroid Corporation Method and apparatus for reconstructing missing color samples
JP2703931B2 (en) 1988-07-13 1998-01-26 キヤノン株式会社 Imaging device
US5319451A (en) * 1988-05-31 1994-06-07 Canon Kabushiki Kaisha Color signal processing apparatus using a common low pass filter for the luminance signal and the color signals
US4945406A (en) 1988-11-07 1990-07-31 Eastman Kodak Company Apparatus and accompanying methods for achieving automatic color balancing in a film to video transfer system
JP3089422B2 (en) 1989-02-14 2000-09-18 キヤノン株式会社 Imaging device
JP2782527B2 (en) 1989-04-20 1998-08-06 キヤノン株式会社 Still image pickup device
US5016107A (en) 1989-05-09 1991-05-14 Eastman Kodak Company Electronic still camera utilizing image compression and digital storage
JP3226271B2 (en) * 1989-07-27 2001-11-05 オリンパス光学工業株式会社 Digital electronic still camera
JPH03148990A (en) 1989-11-02 1991-06-25 Minolta Camera Co Ltd White balance adjusting device
JPH03154487A (en) 1989-11-10 1991-07-02 Konica Corp Digital still video camera
US5206730A (en) * 1989-11-10 1993-04-27 Konica Corporation Still video camera having one-shot and serial shot modes
JP3028830B2 (en) 1990-05-18 2000-04-04 株式会社日立製作所 Video camera equipment
US5305096A (en) * 1990-07-31 1994-04-19 Canon Kabushiki Kaisha Image signal processing apparatus using color filters and an image pick-up device providing, interlaced field signals
JPH0488782A (en) 1990-07-31 1992-03-23 Canon Inc Image pickup device
JP2961719B2 (en) 1990-08-28 1999-10-12 富士写真フイルム株式会社 Image signal processing circuit
JPH05176333A (en) 1991-12-19 1993-07-13 Canon Inc Image signal processing circuit
US5343243A (en) * 1992-01-07 1994-08-30 Ricoh Company, Ltd. Digital video camera
JPH05252522A (en) 1992-01-07 1993-09-28 Ricoh Co Ltd Digital video camera
US5402171A (en) * 1992-09-11 1995-03-28 Kabushiki Kaisha Toshiba Electronic still camera with improved picture resolution by image shifting in a parallelogram arrangement
JPH06303533A (en) * 1993-04-09 1994-10-28 Sony Corp Image sensor and electronic still camera
US5373322A (en) * 1993-06-30 1994-12-13 Eastman Kodak Company Apparatus and method for adaptively interpolating a full color image utilizing chrominance gradients
JP3764493B2 (en) * 1993-09-20 2006-04-05 ソニー株式会社 Electronic still camera and image data processing method
JPH07107505A (en) 1993-09-29 1995-04-21 Canon Inc Image pickup device
US5640619A (en) * 1993-12-14 1997-06-17 Nikon Corporation Multiple point focus detection camera
JP3091358B2 (en) 1994-01-31 2000-09-25 日立電子株式会社 Television camera equipment
JP3796269B2 (en) 1994-04-15 2006-07-12 富士写真フイルム株式会社 Electronic still camera and operation control method thereof
JPH07322120A (en) * 1994-05-20 1995-12-08 Canon Inc Image pickup device
US5828406A (en) * 1994-12-30 1998-10-27 Eastman Kodak Company Electronic camera having a processor for mapping image pixel signals into color display pixels
JPH08214145A (en) 1995-02-03 1996-08-20 Canon Inc Image processing unit and its method
JP3542653B2 (en) * 1995-02-14 2004-07-14 富士写真フイルム株式会社 Image data transmission system for electronic still camera
JPH0918773A (en) 1995-06-27 1997-01-17 Canon Inc Image pickup device
JP3096618B2 (en) * 1995-08-10 2000-10-10 三洋電機株式会社 Imaging device
US5778106A (en) * 1996-03-14 1998-07-07 Polaroid Corporation Electronic camera with reduced color artifacts
JP3223103B2 (en) 1996-03-25 2001-10-29 シャープ株式会社 Imaging device
JPH09322191A (en) 1996-03-29 1997-12-12 Ricoh Co Ltd Image input device
JPH09270991A (en) 1996-03-29 1997-10-14 Toshiba Corp Video recording device
US6421083B1 (en) * 1996-03-29 2002-07-16 Sony Corporation Color imaging device and method
US5867214A (en) * 1996-04-11 1999-02-02 Apple Computer, Inc. Apparatus and method for increasing a digital camera image capture rate by delaying image processing
JP3829363B2 (en) * 1996-06-14 2006-10-04 コニカミノルタホールディングス株式会社 Electronic camera
JPH104858A (en) 1996-06-21 1998-01-13 Takenaka Komuten Co Ltd Device for preventing birds from flying and coming
US6005613A (en) * 1996-09-12 1999-12-21 Eastman Kodak Company Multi-mode digital camera with computer interface using data packets combining image and mode data
JP3253536B2 (en) 1996-09-30 2002-02-04 三洋電機株式会社 Electronic still camera
JPH10136244A (en) * 1996-11-01 1998-05-22 Olympus Optical Co Ltd Electronic image pickup device
US5917556A (en) * 1997-03-19 1999-06-29 Eastman Kodak Company Split white balance processing of a color image
US6529238B1 (en) * 1997-09-05 2003-03-04 Texas Instruments Incorporated Method and apparatus for compensation of point noise in CMOS imagers
US6532039B2 (en) * 1997-09-17 2003-03-11 Flashpoint Technology, Inc. Method and system for digital image stamping
US6288743B1 (en) * 1997-10-20 2001-09-11 Eastman Kodak Company Electronic camera for processing image segments
US20020176009A1 (en) * 1998-05-08 2002-11-28 Johnson Sandra Marie Image processor circuits, systems, and methods
JP2000023184A (en) 1998-06-30 2000-01-21 Nikon Corp Digital camera and picture signal processing storage medium
US6359643B1 (en) * 1998-08-31 2002-03-19 Intel Corporation Method and apparatus for signaling a still image capture during video capture
JP2000293145A (en) 1999-04-06 2000-10-20 Canon Inc Picture display device and its method
US6954228B1 (en) * 1999-07-23 2005-10-11 Intel Corporation Image processing methods and apparatus
JP4501645B2 (en) 2004-11-17 2010-07-14 パナソニック株式会社 Toilet equipment

Also Published As

Publication number Publication date
US8878956B2 (en) 2014-11-04
US7253836B1 (en) 2007-08-07
US20070268379A1 (en) 2007-11-22
US20130242132A1 (en) 2013-09-19
US7808533B2 (en) 2010-10-05
US20070252903A1 (en) 2007-11-01

Similar Documents

Publication Publication Date Title
US8878956B2 (en) Digital camera and storage medium for image signal processing for white balance control
US8896722B2 (en) Image data processing apparatus and electronic camera
US7176962B2 (en) Digital camera and digital processing system for correcting motion blur using spatial frequency
US7944487B2 (en) Image pickup apparatus and image pickup method
US20020008760A1 (en) Digital camera, image signal processing method and recording medium for the same
EP1246453A2 (en) Signal processing apparatus and method, and image sensing apparatus
JPH10126796A (en) Digital camera for dynamic and still images using dual mode software processing
WO2008150017A1 (en) Signal processing method and signal processing device
JPH10248068A (en) Image pickup device and image processor
JP2004328117A (en) Digital camera and photographing control method
US20020001410A1 (en) Image processing apparatus
JP4182566B2 (en) Digital camera and computer-readable recording medium
JP2008141658A (en) Electronic camera and image processor
US7256827B1 (en) Image reading device with thinned pixel data
JP4225795B2 (en) Imaging system, image processing program
JP3115912B2 (en) Image recording device
JPH07123421A (en) Image pickup device
EP0998130B1 (en) Digital camera and image processing method
JP3563508B2 (en) Automatic focusing device
JP4243412B2 (en) Solid-state imaging device and signal processing method
JP3540567B2 (en) Electronic imaging device
JP4687750B2 (en) Digital camera and image signal processing storage medium
JP3954204B2 (en) Signal processing apparatus and signal processing method thereof
JP2002209224A (en) Image processing unit, image processing method and recording medium
JP2002330387A (en) Electronic camera

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION