WO2006137216A1 - Imaging device and gradation converting method for imaging device - Google Patents

Imaging device and gradation converting method for imaging device Download PDF

Info

Publication number
WO2006137216A1
Authority
WO
WIPO (PCT)
Prior art keywords
output
signal
correction
imaging
luminance distribution
Prior art date
Application number
PCT/JP2006/308713
Other languages
French (fr)
Japanese (ja)
Inventor
Kozo Ishida
Junko Makita
Shoutarou Moriya
Takashi Itow
Tetsuya Kuno
Hiroaki Sugiura
Original Assignee
Mitsubishi Denki Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Denki Kabushiki Kaisha
Priority to JP2006519004A priority Critical patent/JP4279313B2/en
Publication of WO2006137216A1 publication Critical patent/WO2006137216A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/407Control or modification of tonal gradation or of extreme levels, e.g. background level
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene

Definitions

  • The present invention relates to an image pickup apparatus that performs a predetermined gradation conversion process on image data having a signal value for each pixel obtained by an image pickup device, and more particularly to an image pickup apparatus and an image pickup method having gradation correction means suitable for detecting small changes in luminance in the details of a subject, such as characters on a document, human facial features, blood vessel patterns, and fingerprint patterns.
  • In the gradation (correction) control of some conventional imaging devices, a plurality of nonlinear gradation conversion characteristics are provided and a characteristic is selected according to the luminance of the image (maximum luminance, average value, etc.), thereby realizing an expansion of the dynamic range (see, for example, Patent Document 1).
  • Patent Document 1 Japanese Patent Application Laid-Open No. 2004-363726 (page 9, FIG. 5)
  • An imaging apparatus according to the present invention includes: a solid-state imaging device; luminance distribution detecting means that detects the luminance distribution, and the maximum level and minimum level thereof, of an imaging signal proportional to the imaging output obtained from the solid-state imaging device; correction means for performing at least one of offset correction and gain correction on the imaging signal; and correction amount determining means for controlling the correction amount, based on the maximum level and minimum level of the luminance distribution detected by the luminance distribution detecting means, so as to expand the range of change of the corrected imaging signal output from the correction means.
  • The imaging apparatus may further include subject shape recognition means for recognizing the shape of the subject on the screen from feature quantities of the imaging signal proportional to the imaging output obtained from the solid-state imaging device and outputting a subject area signal indicating the subject area.
  • In that case, the luminance distribution detecting means detects the luminance distribution in the subject area based on the subject area signal output from the subject shape recognizing means, and the correction amount determining means controls the correction amount, based on the maximum level and minimum level of the luminance distribution in the subject area detected by the luminance distribution detecting means, so as to expand the change range of the corrected imaging signal in the subject area output from the correction means.
  • According to the present invention, the offset correction, gain correction, and gradation conversion characteristics applied to the imaging signal are adaptively controlled based on the luminance distribution of the imaging signal and its maximum and minimum levels, so small luminance differences in the details of the image can be reproduced clearly.
  • When the shape of the subject in the captured image is detected and the offset correction, gain correction, and gradation conversion characteristics are adapted based on the luminance distribution of the imaging signal within the subject area and its maximum and minimum levels, the small brightness differences in the details of the subject image can be reproduced even more clearly.
  • FIG. 1 is a block diagram showing an imaging apparatus according to Embodiment 1 of the present invention.
  • FIG. 2 (A) to (C) are diagrams showing an example of conversion characteristics of the gradation conversion means of the first embodiment.
  • FIG. 3 is a block diagram showing an example of luminance distribution detection means used in the first embodiment.
  • FIG. 4 is a diagram showing an example of a histogram generated by the histogram generation unit of FIG.
  • FIG. 5 is a diagram for explaining a method for determining an offset correction amount and a gain correction amount by the correction amount determining means in FIG. 1.
  • FIG. 6 is an example of a table used when the gain correction amount is obtained from the minimum level value detected by the luminance distribution detecting means in FIG. 1.
  • FIG. 7 (A) and (B) are diagrams showing an example of control of the gradation converting means by the correction amount determining means in the first embodiment.
  • FIGS. 8 (A) and (B) are diagrams showing an example of control of the gradation converting means by the correction amount determining means in the first embodiment.
  • FIG. 9 (A) to (G) are timing charts showing the operation of the imaging apparatus of the first embodiment.
  • FIG. 10 is a flowchart showing a processing procedure in the first embodiment.
  • FIGS. 11 (A) to (C) are diagrams showing the influence of the correction according to Embodiment 1 on the signal after gradation conversion.
  • FIGS. 12A to 12C are diagrams showing an example of the gradation correction operation according to the first embodiment.
  • FIG. 13 is a diagram showing an example of screen division in the second embodiment of the present invention.
  • FIG. 14 is a flowchart showing a processing procedure in the second embodiment.
  • FIG. 15 (A) to (G) are timing charts showing the operation of the third embodiment of the present invention.
  • FIG. 16 is a block diagram showing a configuration of an imaging apparatus according to Embodiment 4 of the present invention.
  • FIG. 17 is a flowchart showing a processing procedure in the fourth embodiment.
  • FIGS. 18 (A) to (G) are timing charts showing the operation of the imaging apparatus of the fourth embodiment.
  • FIG. 19 is a block diagram showing Embodiment 5 of the present invention.
  • FIGS. 20A to 20G are timing charts showing the operation of the fifth embodiment.
  • FIG. 21 is a flowchart showing a processing procedure in the fifth embodiment.
  • FIG. 22 is a block diagram showing an example of the luminance distribution detecting means used in the sixth embodiment.
  • FIGS. 23 (A) and (B) are waveform diagrams showing the operation of the luminance distribution detecting means 18 of FIG. 22.
  • FIG. 24 is a block diagram showing an example of the amplitude detection means in FIG. 22.
  • FIG. 25 is a diagram showing an example of a screen including areas with different brightness.
  • FIGS. 26 (A) to (C) are diagrams showing an example of the gradation correction operation according to the sixth embodiment.
  • Explanation of symbols
  • 1 solid-state imaging device, 2 amplification means, 3 A/D conversion means, 4 exposure control means, 5 imaging signal memory means, 6 correction means, 61 offset correction means, 62 gain correction means, 7 correction control means, 8 gradation conversion means, 9 image processing means, 10 subject shape recognition means, 11 luminance distribution detection means, 12 correction amount determination means.
  • FIG. 1 is a block diagram showing a configuration of an imaging apparatus according to Embodiment 1 of the present invention.
  • the imaging apparatus according to Embodiment 1 includes a solid-state imaging device 1, an amplification unit 2, an AZD conversion unit 3, an exposure control unit 4, an imaging signal memory unit 5, a correction unit 6, and a correction control unit 7.
  • the correction unit 6 includes an offset correction unit 61 and a gain correction unit 62, and the correction control unit 7 includes a luminance distribution detection unit 11 and a correction amount determination unit 12.
  • The solid-state imaging device 1 photoelectrically converts, for each pixel, the light incident from the subject, and outputs an output signal of the solid-state imaging device (sometimes referred to as the "imaging output").
  • The solid-state imaging device 1 has photoelectric conversion elements forming a plurality of pixels arranged two-dimensionally, that is, in the horizontal direction and the vertical direction, and each photoelectric conversion element outputs an analog signal of a magnitude corresponding to the amount of incident light.
  • The signals from the solid-state imaging device 1 are output sequentially from the pixels aligned in the horizontal direction on each horizontal line, and such a signal for each pixel is sometimes called a pixel signal.
  • The solid-state imaging device 1 is, for example, a CCD imaging device; by controlling the photocharge accumulation time and the timing at which the accumulated charge is read out, control corresponding to the shutter speed and the opening and closing of a shutter can be performed. This control of the charge accumulation time of the solid-state imaging device 1 is therefore sometimes called an electronic shutter function. In this embodiment, the exposure control means 4 controls the charge accumulation time.
  • the solid-state imaging device 1 may be a monochrome sensor or a color sensor.
  • the color sensor may be a primary color filter array or a complementary color filter array. Further, the solid-state imaging device 1 may use a CMOS imaging device.
  • The luminance signal Y of an image sensor using primary color filters, or the Y signal of a monochrome sensor, is used as it is.
  • the amplification means 2 amplifies the imaging output of the solid-state imaging device 1 and outputs an imaging signal proportional to the imaging output.
  • the output of the amplifying means 2 may be referred to as an amplified output.
  • the amplification gain of the amplification means 2 is controlled by the exposure control means 4.
  • The A/D conversion means 3 performs A/D (analog-to-digital) conversion on the output of the amplification means 2 and outputs a digital signal Sc (sometimes called the "A/D output" or "imaging data").
  • the digital signal represents the luminance for each pixel, and the digital signal for each pixel is sometimes called pixel data.
  • The A/D output Sc is also an imaging signal proportional to the imaging output of the solid-state imaging device 1.
  • The exposure control means 4 controls the exposure amount of the solid-state imaging device 1 and the amplification gain of the amplification means 2 according to the luminance (Y) signal, or the green (G) signal value, of the A/D output proportional to the imaging output.
  • The exposure amount control of the present embodiment is realized by controlling the charge accumulation time of the solid-state imaging device 1, but it may instead be realized by controlling the amount of light of the light source illuminating the subject, by iris control that adjusts the aperture opening, or by a combination of these.
  • The imaging signal memory means 5 has a memory capacity for holding the A/D output (imaging data) in units of frames; while one frame of the imaging signal is being written to the memory, the imaging signal written one frame earlier can be read out.
  • the signal Sd read from the imaging signal memory means 5 may be referred to as “imaging signal memory output”.
  • The offset correction means 61 is composed of, for example, a subtracter; it subtracts the offset correction amount (Kb) supplied from the correction amount determination means 12 from the output of the imaging signal memory means 5 and outputs the result to the gain correction means 62.
  • the output of the offset correction means 61 may be referred to as “offset correction output”.
  • The gain correction means 62 is constituted by, for example, a multiplier; it multiplies the output of the offset correction means 61 by the gain correction amount (Ka) from the correction amount determination means 12 and outputs the multiplication result to the gradation conversion means 8.
  • the output Sf of the gain correction means 62 may be referred to as a “corrected imaging signal”.
  • the gradation converting means 8 performs gradation conversion on the corrected imaging signal Sf and outputs it.
  • the output Sg of the gradation conversion means 8 is sometimes called “gradation conversion output”.
  • FIG. 2A shows an example of the gradation conversion characteristics of the gradation converting means 8.
  • the horizontal axis is input (referred to as “gradation conversion input”), and the vertical axis is output (referred to as “gradation conversion output”).
  • the input is a 10-bit number, the minimum is 0, the maximum is 1023, and the output is an 8-bit number, the minimum is 0, and the maximum is 255.
  • FIG. 2 (B), below the horizontal axis of the gradation conversion characteristic in FIG. 2 (A), shows an example of the input signal Sf; the output signal Sg obtained for such an input signal through the gradation conversion characteristic curve is shown in FIG. 2 (C), to the right of the characteristic.
  • When the input is small, the increase in output relative to the increase in input is relatively large (the slope of the characteristic curve is relatively steep); as the input becomes larger, the slope of the characteristic curve gradually becomes gentler.
  • In the illustrated example, the change of the output with respect to the input is linear while the input is less than approximately 1/2 of its maximum value.
  • The gradation conversion characteristics may be γ (gamma) characteristics, polygonal-line (broken-line) characteristics, or the like, and are determined in consideration of various imaging conditions such as monitor characteristics, subject conditions, and illumination light conditions.
  • The present invention emphasizes changes in the signal by making the slope of the characteristic curve in one part of the input range steeper than in other parts; that is, it can be applied whenever the characteristic has a first part (range) in which the slope of the characteristic curve is relatively gentle and a second part (range) in which the slope is relatively steep.
  • the gradation conversion means 8 can be realized by a look-up table (LUT), for example. Further, when the conversion characteristic curve is represented by a broken line, it can be realized by a relatively simple calculation, and can also be realized by hardware logic.
  • The output Kc of the correction amount determination means 12 is a switching signal used to switch the gradation conversion characteristics of the gradation conversion means 8.
  • Alternatively, the output Kc of the correction amount determining means 12 may supply, in units of frames, the coordinate values (input value and output value) of the break points for realizing a polygonal-line characteristic to the gradation converting means 8. In this case, the gradation converting means 8 does not need to hold storage areas for a plurality of gradation conversion characteristics, so the gate scale and the memory capacity can be reduced.
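  • As an illustration of the look-up-table and broken-line realizations described above, the following Python sketch builds an LUT from break-point coordinates and applies it to 10-bit pixel data (the bit depths follow the example of FIG. 2; the function names and break-point values are assumptions for illustration only, not taken from the patent).
```python
import numpy as np

def build_broken_line_lut(knees, in_bits=10, out_bits=8):
    """Build a gradation-conversion LUT from broken-line (polygonal) knee points.

    knees: list of (input, output) coordinates, e.g. supplied per frame as the
    signal Kc by the correction amount determining means.  The values used in
    the example below are illustrative assumptions, not taken from the patent.
    """
    in_max = (1 << in_bits) - 1    # 1023 for a 10-bit input
    out_max = (1 << out_bits) - 1  # 255 for an 8-bit output
    xs, ys = zip(*knees)
    lut = np.interp(np.arange(in_max + 1), xs, ys)
    return np.clip(np.round(lut), 0, out_max).astype(np.uint8)

def gradation_convert(pixels, lut):
    """Apply the gradation conversion to an array of 10-bit pixel data Sf."""
    return lut[np.asarray(pixels, dtype=np.int64)]

# Steep slope for small inputs, gentler slope for large inputs (cf. FIG. 2 (A)).
lut = build_broken_line_lut([(0, 0), (256, 128), (512, 192), (1023, 255)])
```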
  • The image processing means 9 has functions for realizing general camera signal processing, such as white balance control, interpolation processing for generating the R, G, and B signals of each pixel from the output of the solid-state image pickup device 1 having a Bayer array or a honeycomb array, noise removal, conversion from RGB signals to YCbCr signals, and image quality improvement processing for display on a monitor.
  • The offset correction means 61, gain correction means 62, gradation conversion means 8, and image processing means 9 can be configured with hardware such as an ASIC (application-specific integrated circuit) or FPGA (field-programmable gate array), or with software, that is, with a programmed computer.
  • By configuring the image processing means 9 with a personal computer, a high-performance embedded microcomputer, a DSP (digital signal processor), or the like, complicated signal processing such as statistical computation, pattern recognition, and removal of noise components can easily be realized.
  • the luminance distribution detection means 11 includes a histogram generation unit 111, a maximum level value detection unit 112, and a minimum level value detection unit 113.
  • the maximum level value detection unit 112 and the minimum level value detection unit 113 constitute the maximum / minimum level value detection means 114.
  • The luminance distribution detection means 11 detects the luminance distribution of only the pixel region recognized as the subject, based on the detection result from the subject shape recognition means 10 (whose detailed functions will be described later) indicating which pixel areas are recognized as the subject and which are not.
  • the luminance distribution detection means 11 can also store image information of one frame or more. By detecting the luminance distribution for each of a plurality of frames, the detection error of the luminance distribution between the frames can be reduced, and the accuracy of the output signal of the luminance distribution detecting means 11 can be improved.
  • From one frame of the A/D output Sc, the luminance distribution detection means 11 uses the pixel data of the region recognized as the subject by the subject shape recognition means 10, operating on the signal level of the green signal or the luminance signal (Y).
  • The histogram generation unit 111 generates a distribution of luminance signal levels (gradations) according to the signal level (a luminance histogram), and the maximum level value detection unit 112 and the minimum level value detection unit 113 obtain, from the luminance histogram, the maximum and minimum level values of the effective luminance signal level of the subject.
  • The effective luminance signal level refers to a signal level other than an isolated value (a value that is exceptionally large or exceptionally small compared with the other values) caused by pixel defects, noise, and the like.
  • the subject shape recognition unit 10 recognizes the shape pattern of the subject and detects an effective subject area.
  • This subject region means a feature region of the subject (a region corresponding to a finger in the case of fingerprint authentication, a palm or finger in the case of vein authentication, and a face in the case of face authentication).
  • The validity of the detection result of the subject area is determined based on geometric characteristics unique to the subject, such as a fingerprint shape, a vein shape, or the relative positional relationship between the eyes, nose, and mouth.
  • the generation of the luminance histogram and the detection of the maximum and minimum level values of the effective luminance signal level may be performed only within the subject region, or may be performed within the subject region and outside the subject region. However, for the sake of simplicity, the following description will be limited to an example in the subject area only.
  • When the subject shape recognition means 10 recognizes the subject, it outputs to the luminance distribution detection means 11 a signal having a value indicating recognition, for example "1" (a "recognition signal" or "in-region signal"); when it cannot recognize the subject, it outputs a signal indicating non-recognition, for example "0" (a "recognition-impossible signal" or "out-of-region signal").
  • the luminance distribution detecting means 11 can determine whether or not the luminance distribution needs to be detected as described above.
  • Alternatively, the output of the subject shape recognition means 10 may control the correction amount determination means 12; in this case, the correction amount can be set in the correction amount determination means 12 based on both the output of the luminance distribution detection means 11 and the output of the subject shape recognition means 10.
  • FIG. 4 is a diagram illustrating an example of a histogram generated by the histogram generation unit 111.
  • The horizontal axis shows the gradation; the gradations 0 to 1023 of the pixel data Sc are divided into 32 divisions, and the average value of the gradation values in each division is shown as its representative value. That is, the histogram generation unit 111 generates a histogram in which the frequency is counted for every 32 gradations, with the 1024 gradations divided into 32 divisions.
  • the number of gradation values per section is 32 as described above.
  • the numbers shown on the horizontal axis in Fig. 4 show the gradation values near the center in the category as representative values.
  • For example, the numerical value "16" on the horizontal axis is the representative value of the division covering gradation values 0 to 31, and the frequency shown at the position "16" represents the number of pixels in one frame of the pixel data Sc having gradation values of 0 to 31.
  • Alternatively, each division may consist of a single gradation value; that is, since the pixel data Sc is 10 bits, the histogram may be generated for each of the 1024 gradation values from 0 to 1023.
  • the histogram generation unit 111 counts the frequency for each gradation division of the pixel data Sc for one frame, and generates a histogram for one frame as shown in FIG.
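  • A minimal sketch of this histogram generation (32 divisions of 32 gradations each for 10-bit pixel data, each division represented by a value near its center); the names and the use of NumPy are assumptions for illustration.
```python
import numpy as np

def generate_histogram(pixel_data_sc, bits=10, n_bins=32):
    """Count the frequencies of one frame of pixel data in equal gradation divisions."""
    n_levels = 1 << bits                      # 1024 gradations for 10-bit data
    bin_width = n_levels // n_bins            # 32 gradations per division
    counts, _ = np.histogram(pixel_data_sc, bins=n_bins, range=(0, n_levels))
    representatives = np.arange(n_bins) * bin_width + bin_width // 2  # 16, 48, ..., 1008
    return representatives, counts
```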
  • The maximum level value detection unit 112 accumulates the frequencies starting from the brightest (largest) gradation division of the generated histogram and proceeding toward the lower divisions, detects the gradation division at which the accumulated frequency Has exceeds a preset threshold Cta, and outputs the representative value of that division as the maximum level value Sca.
  • The minimum level value detection unit 113 accumulates the frequencies starting from the darkest (smallest) gradation division of the generated histogram and proceeding toward the higher divisions, detects the gradation division at which the accumulated frequency Hbs exceeds a preset threshold Ctb, and outputs the representative value of that division as the minimum level value Scb.
  • the maximum level value Sca and the minimum level value Scb are supplied to the correction amount determination means 12.
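  • The level detection just described can be sketched as follows: accumulate the frequencies from the brightest division downward (or from the darkest division upward) until a preset threshold is exceeded, and output the representative value of the division reached. The threshold values and names below are illustrative assumptions.
```python
def detect_max_min_levels(representatives, counts, cta=16, ctb=16):
    """Return (Sca, Scb): the representative values of the divisions at which the
    accumulated frequency from the bright end exceeds Cta and from the dark end exceeds Ctb."""
    acc = 0
    sca = representatives[-1]
    for i in range(len(counts) - 1, -1, -1):   # brightest division downward
        acc += counts[i]
        if acc > cta:
            sca = representatives[i]
            break
    acc = 0
    scb = representatives[0]
    for i in range(len(counts)):               # darkest division upward
        acc += counts[i]
        if acc > ctb:
            scb = representatives[i]
            break
    return sca, scb
```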
  • The correction amount determination means 12 obtains the offset correction amount Kb and the gain correction amount Ka based on the maximum level value Sca and the minimum level value Scb supplied from the luminance distribution detection means 11.
  • the correction method by the offset correction means 61 and the gain correction means 62 will be described, and then the method for obtaining Kb and Ka will be described.
  • The A/D output Sc is stored in the imaging signal memory means 5, then read out from the imaging signal memory means 5 and supplied to the correction means 6.
  • The signal supplied to the correction means 6 is represented by the symbol Sd, but its value itself is the same as that of the A/D output Sc.
  • The offset correction amount Kb and the gain correction amount Ka determined based on the A/D output Sc are used for the correction when the A/D output Sc, after being stored in the imaging signal memory means 5, is read out and corrected by the correction means 6. For simplicity, however, the following description assumes for the moment that the A/D output Sc is input to the correction means 6 as it is.
  • The value of the A/D output Sc and of the signal Sd read from the imaging signal memory means 5 is represented by the symbol X; the maximum level value Sca and the minimum level value Scb in one frame are represented by Xa and Xb, respectively; the output of the correction means 6 is represented by Y; and the maximum value of its variable range (the range of values it can take) is represented by Ym.
  • The constants α and β are both set to, for example, 0.1.
  • Kb = {α × Xa - (1 - β) × Xb} / {α - (1 - β)}   ... (3)
  • Ka = [α × Ym × {α - (1 - β)}] / {α × (Xb - Xa)}   ... (4)
  • When β = α, these become Kb = {α × Xa - (1 - α) × Xb} / (2α - 1)   ... (5) and Ka = {α × Ym × (2α - 1)} / {α × (Xb - Xa)}.
  • Substituting α = β = 0.1 gives Kb = (0.1 × Xa - 0.9 × Xb) / (2 × 0.1 - 1) and Ka = {0.1 × Ym × (2 × 0.1 - 1)} / {0.1 × (Xb - Xa)}.
  • The correction amount determining means 12 obtains the gain correction amount Ka and the offset correction amount Kb from equations (3) and (4), and supplies the gain correction amount Ka and the offset correction amount Kb obtained in this way to the gain correction means 62 and the offset correction means 61, respectively.
  • As a result, the maximum level value Xa of the A/D output, that is, of the input of the correction means 6, is mapped to a value (1 - β) × Ym close to the maximum value Ym of the variable range of the output of the correction means 6 (that is, of the input of the gradation conversion means 8), and the minimum level value Xb of the input of the correction means 6 is mapped to a value α × Ym close to the minimum of that range.
  • The values between the minimum level value Xb and the maximum level value Xa are evenly allocated in the range from α × Ym to (1 - β) × Ym.
  • The gain correction amount Ka may also be set to a value smaller than the value obtained by equation (4).
  • Alternatively, the values of the offset correction amount Kb and the gain correction amount Ka need not be obtained by equations (3) and (4) above; they may instead be obtained from a table prepared in advance according to the ranges of the minimum level value Xb and the maximum level value Xa.
  • As a simpler method of obtaining the offset correction amount Kb and the gain correction amount Ka from the minimum level value Xb and the maximum level value Xa, the offset correction amount Kb may be obtained from the minimum level value Xb alone, and the gain correction amount Ka may be determined using the average (intermediate) value of the maximum level value Xa and the minimum level value Xb together with the maximum value Ym of the output variable range of the correction means.
  • Fig. 6 is an example of a table used to determine the gain correction amount Ka from the minimum level value Xb.
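  • The following sketch combines equations (3) and (4) with the correction performed by the offset correction means 61 and the gain correction means 62 (subtraction of Kb followed by multiplication by Ka, i.e. Y = Ka × (X - Kb)); the α, β, and Ym values follow the example above, while the clipping step and the numeric example are added assumptions.
```python
def correction_amounts(xa, xb, ym=1023, alpha=0.1, beta=0.1):
    """Offset amount Kb (eq. 3) and gain amount Ka (eq. 4) from the detected
    maximum level Xa and minimum level Xb of one frame (or one area)."""
    kb = (alpha * xa - (1.0 - beta) * xb) / (alpha - (1.0 - beta))
    ka = (alpha * ym * (alpha - (1.0 - beta))) / (alpha * (xb - xa))
    return kb, ka

def apply_correction(x, kb, ka, ym=1023):
    """Corrected imaging signal Y = Ka * (X - Kb), clipped to the output range
    (the clipping is an assumption, not stated in the patent)."""
    y = ka * (x - kb)
    return min(max(y, 0), ym)

# Example: with Xa = 900, Xb = 600 and alpha = beta = 0.1, the minimum level maps
# near 0.1 * Ym and the maximum level maps near 0.9 * Ym.
kb, ka = correction_amounts(xa=900, xb=600)
```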
  • The correction amount determination means 12 also controls the gradation conversion characteristics of the gradation conversion means 8 by means of the gradation conversion control signal Kc. An example of the operation of controlling the gradation conversion characteristics by the correction amount determining means 12 will be described with reference to FIGS. 7 (A) and (B) and FIGS. 8 (A) and (B).
  • FIG. 7 (A) shows a one-frame histogram of the output Sc of the A/D conversion means 3, which exhibits a characteristic in which the appearance frequency of low gradations (in the example shown, gradations whose division representative values are 447 or less) is high.
  • In such a case, the gradation characteristic indicated by the broken line TCC1 in FIG. 7 (B) can be used to produce an output with emphasized low-gradation contrast.
  • FIG. 8 (A) shows a characteristic in which the appearance frequency of intermediate gradations (in the example shown, gradations whose division representative values are 447 to 831) is high.
  • The gradation characteristics realized by the look-up table may be switched among several prepared characteristics, or, for gradation characteristics composed of broken lines, the coordinate values of the break points may be changed based on the gradation conversion control signal Kc.
  • FIGS. 9A to 9G are timing charts showing the operation of the imaging apparatus of the first embodiment.
  • VD in FIG. 9 (A) indicates a synchronization signal (vertical synchronization signal) in one frame period (Tf).
  • FIG. 9 (B) shows the output timing of the A/D conversion output Sc.
  • FIG. 9 (C) shows the readout timing of the imaging signal memory output Sd.
  • FIG. 9 (D) shows the timing at which the maximum level value Xa, the minimum level value Xb, and the correction amounts Ka and Kb are determined.
  • FIG. 9 (E) shows the output timing of the correction amounts Ka and Kb.
  • FIG. 9 (F) shows the output timing of the signal Sf from the correction means 6.
  • Fig. 9 (G) shows the tone conversion characteristics Ft used.
  • The frame period can be adjusted according to imaging conditions such as high-speed readout and long exposure: for high-speed readout the frame period is set short (high frame rate), and for long exposure the frame period is set long (low frame rate).
  • Ft (a) shown in FIG. 9 (G) is used as the gradation conversion characteristic in the gradation conversion means 8.
  • In the first frame period F1, the A/D output Sc1 shown in FIG. 9 (B) is output and written to the imaging signal memory means 5; in the subsequent second frame period F2, as shown in FIG. 9 (C), it is read out as the imaging signal memory output Sd1.
  • The A/D output Sc1 is written to the imaging signal memory means 5 and also supplied to the luminance distribution detecting means 11.
  • The luminance distribution detection means 11 receives the A/D output Sc1 and generates a histogram.
  • In the last blanking period BL of the first frame period F1, as shown in FIG. 9 (D), the maximum level value Xa1 and the minimum level value Xb1 are detected, and the results are supplied to the correction amount determination means 12.
  • Based on the supplied maximum level value Xa1 and minimum level value Xb1, the correction amount determination means 12 determines the offset correction amount Kb1 and the gain correction amount Ka1 for the data Sc1 output from the A/D conversion means 3 in the first frame period F1.
  • The determined offset correction amount Kb1 and gain correction amount Ka1 are supplied to the offset correction means 61 and the gain correction means 62 in the next, second frame period F2 (at that time, as shown in FIG. 9 (E), the same data Sd1 as the data Sc1 output from the A/D conversion means 3 in the first frame period F1 is output from the imaging signal memory means 5).
  • FIG. 10 is a flowchart showing a processing procedure of the first embodiment.
  • In step S1, shooting is performed and the A/D output Sc corresponding to the image signal obtained from the subject is obtained.
  • In step S2, the maximum level value Xa and the minimum level value Xb are detected.
  • In step S4, the gain correction amount Ka and the offset correction amount Kb are calculated.
  • the calculation of the gain correction amount Ka and the offset correction amount Kb is performed by the method described above for the correction amount determination means 12.
  • In step S5, correction processing is performed using the gain correction amount Ka and the offset correction amount Kb.
  • In step S6, gradation conversion is performed.
  • If it is determined in step S3 that the minimum level value Xb is equal to or smaller than the threshold value Xbt and correction is not necessary, the correction is not performed; that is, the gradation conversion (step S6) is performed without subtraction by the offset correction means 61 (in other words, with the offset correction amount Kb set to "0") and without gain increase by the gain correction means 62 (with the gain correction amount Ka set to "1").
  • The process of step S1 is performed by the solid-state imaging device 1, the processes of steps S2 and S3 by the luminance distribution detecting means 11, the process of step S4 by the correction amount determining means 12, the process of step S5 by the correction means 6, and the process of step S6 by the gradation converting means 8.
  • The A/D output Sc corresponding to the image signal obtained from the subject is as shown by reference numeral Fa in FIG. 11 (B), and has a high-frequency component that oscillates with a peak of approximately 1023 and a bottom of approximately 623.
  • the image signal Fa shown in FIG. 11B is obtained when the subject is generally bright (when the DC component is large) and there are few low-luminance signals.
  • In this case, the amplitude of the high-frequency signal component of the output of the gradation converting means 8 is smaller than that of the input, as indicated by reference numeral Ga in FIG. 11 (here, amplitudes are compared after normalizing the maximum of the range of values that the input and output of the gradation conversion means 8 can take to 1).
  • When the offset correction is performed, the signal Fc is obtained; when this signal Fc is input to the gradation conversion means 8 and gradation conversion is performed, the output of the gradation conversion means 8 is as shown by Gc in FIG. 11 (C), and the amplitude of the high-frequency signal component becomes much greater.
  • FIG. 12 (A) shows an outline of the original image of the subject.
  • FIG. 12 (B) shows the image signal along the broken line SL in FIG. 12 (A) (the signal obtained by sequentially reading the signals from the pixels on the broken line SL).
  • FIG. 12 (C) shows the signal obtained by applying the correction according to the present embodiment to the signal of FIG. 12 (B).
  • In the case of a fingerprint, the amount of reflected light differs between the ridges and the other parts (depending on the configuration of the optical system, the amount of scattered light or the amount of transmitted light may differ instead), so the signal difference (contrast) corresponding to the fingerprint irregularities appears in the output signal amount.
  • In addition, the center of the finger is brighter than the edge of the finger.
  • FIGS. 12 (A) to (C) are examples, and the same effect can be achieved for other signals whose contrast is to be enhanced.
  • The overall configuration of the imaging apparatus of the second embodiment is as shown in FIG. 1, but it differs in the following respect. In Embodiment 1, the maximum level value and the minimum level value of the pixel signal of one frame are obtained, the offset correction amount Kb and the gain correction amount Ka for the pixel data of that frame are obtained from them, and the pixel data of the same frame are corrected. In Embodiment 2, as shown in FIG. 13, the screen is divided into a plurality of areas or blocks DV1 to DV12, the maximum level value and the minimum level value are detected separately for each area, and the offset correction amount Kb and the gain correction amount Ka are determined for and used to correct the pixel data in each area.
  • That is, the luminance distribution detection means 11 generates a histogram for each area and obtains the maximum level value Sca and the minimum level value Scb for each area, the correction amount determination means 12 determines the offset correction amount Kb and the gain correction amount Ka for each area, and the correction means 6 then corrects the pixel data of the pixels in each area using those offset and gain correction amounts.
  • the other points are the same as in the first embodiment described with reference to FIG.
  • FIG. 14 shows a processing procedure of the second embodiment.
  • FIG. 14 is generally the same as FIG. 10, except that processing for each area (steps S8, S9, S10, and S11) is added.
  • In step S8, the area counter is initialized (set to 0).
  • In step S9, it is determined whether the final area in the screen (the 12th area in the example shown in FIG. 13) has been processed. If NO in step S9, the process returns to step S2; if YES in step S9, the process proceeds to step S10.
  • In step S10, the area counter is initialized (set to 0) again.
  • In step S11, it is determined whether the final area in the screen (the 12th area in the example shown in FIG. 13) has been processed. If NO in step S11, the process returns to step S3; if YES in step S11, the process proceeds to step S6.
  • The processes of steps S8 and S9 are performed by the luminance distribution detection means 11, and the processes of steps S10 and S11 by the correction amount determination means 12.
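  • A block-wise sketch of the Embodiment 2 processing, reusing the histogram, level-detection, and correction-amount helpers sketched earlier: the frame is split into areas (a 4 × 3 grid of 12 areas is assumed here, after FIG. 13, though the actual grid shape is not stated) and each area is corrected with its own Kb and Ka. All names are illustrative assumptions.
```python
import numpy as np

def correct_per_area(frame, rows=3, cols=4, ym=1023):
    """Split a 2-D frame into rows x cols areas and correct each independently,
    using the generate_histogram / detect_max_min_levels / correction_amounts
    helpers sketched earlier (illustrative names)."""
    out = np.empty_like(frame, dtype=np.float64)
    h, w = frame.shape
    for r in range(rows):
        for c in range(cols):
            rs, re = r * h // rows, (r + 1) * h // rows
            cs, ce = c * w // cols, (c + 1) * w // cols
            area = frame[rs:re, cs:ce]
            reps, counts = generate_histogram(area)
            xa, xb = detect_max_min_levels(reps, counts)
            if xa == xb:                       # flat area: leave unchanged
                out[rs:re, cs:ce] = area
                continue
            kb, ka = correction_amounts(xa, xb, ym)
            out[rs:re, cs:ce] = np.clip(ka * (area - kb), 0, ym)
    return out
```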
  • The overall configuration of the imaging apparatus of the third embodiment is as shown in FIG. 1, but unlike the first and second embodiments, since the correction is performed for each area, the imaging signal memory means 5 can be a memory with a capacity smaller than one frame, for example, a shift register with several stages (about 2 to 8 stages), a line memory (memory for one line of pixel data), or a FIFO.
  • The luminance distribution detection means 11 reads out from the line memory, for example, the pixel data of a region of several pixels in the horizontal direction and several pixels in the vertical direction around a pixel at an arbitrary position in the imaging screen of the solid-state imaging device, creates a histogram based on them, and detects the maximum level value Sca and the minimum level value Scb.
  • In other respects, Embodiment 3 is the same as Embodiments 1 and 2, and the same effects as in the first and second embodiments can be obtained. Further, since there is no need to use a frame memory, the processing speed can be increased and the cost can be reduced.
  • FIGS. 15A to 15G are timing charts of the third embodiment.
  • "PS" shown in FIG. 15 (A) indicates a synchronization signal defining the basic unit of processing time; within the processing periods P1, P2, P3, ... delimited by this synchronization signal, the luminance distribution detection, the determination of the gain correction amount Ka and the offset correction amount Kb, and so on are executed.
  • The processing period is shorter than one frame period, with the horizontal transfer clock of the solid-state imaging device 1 as the minimum unit.
  • FIG. 15 (B) shows the output timing of the A/D conversion output Sc.
  • FIG. 15 (C) shows the readout timing of the imaging signal memory output Sd.
  • FIG. 15 (D) shows the timing at which the maximum level value Xa and the minimum level value Xb are determined.
  • FIG. 15 (E) shows the output timing of the correction amounts Ka and Kb.
  • FIG. 15 (F) shows the output timing of the signal Sf from the correction means 6.
  • FIG. 15 (G) shows the tone conversion characteristic Ft used.
  • The A/D output Sc1 is written into the imaging signal memory means 5 and also supplied to the luminance distribution detection means 11.
  • The luminance distribution detection means 11 receives the A/D output Sc1 and generates a histogram; in the last blanking period BL of the first processing period P1, as shown in FIG. 15 (D), the maximum level value Xa1 and the minimum level value Xb1 are detected, and the results are supplied to the correction amount determination means 12.
  • Based on the supplied maximum level value Xa1 and minimum level value Xb1, the correction amount determination means 12 determines, in the first processing period P1, the offset correction amount Kb1 and the gain correction amount Ka1 for the data Sc1 output from the A/D conversion means 3.
  • The determined offset correction amount Kb1 and gain correction amount Ka1 are supplied, in the next, second processing period P2, to the offset correction means 61 and the gain correction means 62 (at that time, as shown in FIG. 15 (E), the same data Sd1 (FIG. 15 (C)) as the data Sc1 output from the A/D conversion means 3 in the first processing period P1 is output from the imaging signal memory means 5).
  • As a result of the correction using these correction amounts, the signal Sf shown in FIG. 15 (F) is obtained.
  • This is represented by equation (12), which is similar to equation (11) except that i represents the number of the processing period rather than the frame number.
  • FIG. 16 is a block diagram showing the configuration of the fourth embodiment of the present invention.
  • The imaging apparatus of the fourth embodiment is generally the same as that of the first embodiment, but it does not use the imaging signal memory means 5 of the first embodiment, and the image processing means 9, the correction control means 7, and the subject shape recognition means 10 are not built into the imaging apparatus main body.
  • In the fourth embodiment, the image processing means 9, the correction control means 7, and the subject shape recognition means 10 are constituted by a signal processing device capable of signal processing, such as a personal computer or a microcomputer.
  • The imaging apparatus shown in FIG. 16 further differs from that of FIG. 1 in that an inverse gradation conversion means 13 is provided and a luminance distribution detection means 14 is provided instead of the luminance distribution detection means 11 of FIG. 1.
  • the correction amount determining means 12, the inverse gradation converting means 13, and the luminance distribution detecting means 14 constitute a correction control means 7.
  • The inverse gradation conversion means 13 has a gradation conversion characteristic that is the inverse of the gradation conversion characteristic of the gradation conversion means 8; that is, the combination of the conversion characteristic of the gradation conversion means 8 and the conversion characteristic of the inverse gradation conversion means 13 is linear. Therefore, the output Sh of the inverse gradation conversion means 13 is a signal proportional to the imaging output of the solid-state imaging device 1.
  • The luminance distribution detection means 14 receives the output Sh of the inverse gradation conversion means 13, rather than the output of the A/D conversion means 3, generates a histogram, and detects the maximum level value Sha and the minimum level value Shb.
  • Sha and Shb are used as the maximum level value Xa and the minimum level value Xb described in the first embodiment.
  • the correction amount determination means 12 calculates the gain correction amount Ka and the offset correction amount Kb based on the maximum level value Sha and the minimum level value Shb detected by the luminance distribution detection means 14.
  • Since the signal Sg obtained by gradation conversion in the gradation conversion means 8 is converted back to the original gradation characteristic by the inverse gradation conversion means 13, the luminance distribution detecting means 14 and the correction amount determining means 12 can perform their data processing without having to take the gradation conversion characteristic of the gradation converting means 8 into account.
  • In the first embodiment, the correction is controlled based on the A/D output (that is, the luminance distribution detection, namely histogram generation and detection of the maximum and minimum level values, and the determination of Ka and Kb are based on it); the fourth embodiment differs in that the correction is controlled based on the output Sh of the inverse gradation converting means 13, which receives the output Sg of the gradation converting means 8.
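  • One way to realize an inverse gradation conversion such as that of the inverse gradation conversion means 13 is to invert the forward LUT numerically, so that the cascade of the two conversions is approximately linear. The sketch below assumes the strictly increasing 10-bit/8-bit LUT of the earlier example and is not taken from the patent.
```python
import numpy as np

def build_inverse_lut(forward_lut, out_bits=8):
    """Given a strictly increasing forward LUT (10-bit input -> 8-bit output),
    build an inverse LUT (8-bit input -> 10-bit output) so that
    inverse[forward[x]] is approximately x."""
    out_levels = 1 << out_bits
    inputs = np.arange(len(forward_lut))
    # For each output code, interpolate the input value that produces it.
    inverse = np.interp(np.arange(out_levels), forward_lut, inputs)
    return np.round(inverse).astype(np.uint16)

# Usage (names illustrative): sh = inverse_lut[sg] recovers a signal approximately
# proportional to the imaging output, as used by the luminance distribution detection means 14.
```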
  • FIG. 17 is a flowchart showing a processing procedure of the fourth embodiment.
  • FIG. 17 is generally the same as FIG. 10, except that step S12, in which the inverse gradation conversion is performed, is added.
  • The process of step S12 is performed by the inverse gradation converting means 13.
  • FIGS. 18A to 18G are timing charts showing the operation of the imaging apparatus of the fourth embodiment.
  • FIGS. 18 (A) to (G) are generally the same as FIGS. 9 (A) to (G).
  • FIG. 18 (C) shows the output of the inverse gradation converting means 13.
  • The correction amounts Ka(i-1) and Kb(i-1) determined based on the pixel data Sh(i-1) of one frame (the (i-1)-th frame) F(i-1) are used to correct the pixel data Sc(i) of the next frame (the i-th frame) F(i).
  • That is, correction of moving images can be realized by performing the correction using correction amounts determined based on the pixel data of the previous frame.
  • Alternatively, the correction amounts Ka(i+1) and Kb(i+1) for the next, (i+1)-th frame may be determined based on both the pixel data Sh(i-1) of the (i-1)-th frame F(i-1) and the pixel data Sh(i) of the next, i-th frame F(i).
  • the fourth embodiment can also be configured to be divided into a plurality of areas and processed as described in the second embodiment.
  • FIG. 19 is a block diagram showing the configuration of the imaging apparatus according to the fifth embodiment of the present invention.
  • The imaging apparatus of the fifth embodiment is generally the same as that of the fourth embodiment, but it does not use the inverse gradation conversion means 13 of the fourth embodiment; a gradation conversion means 15 is used instead of the gradation conversion means 8 of the fourth embodiment, a correction amount determination means 16 is used instead of the correction amount determination means 12 of Embodiment 4, and the luminance distribution detection means 14 differs in that it generates a histogram in response to the output of the gradation conversion means 15 and detects the maximum level value Sga and the minimum level value Sgb from it.
  • Further, a conversion characteristic control means 17 for controlling the conversion characteristic of the gradation conversion means 15 is provided.
  • The conversion characteristic control means 17 is controlled (by control means not shown) so as to operate in synchronization with the output from the A/D conversion means 3.
  • In the fifth embodiment, the conversion characteristic control means 17, the luminance distribution detection means 14, and the correction amount determination means 16 constitute the correction control means 7.
  • The conversion characteristic control means 17 receives the gradation conversion control signal Kc from the correction amount determination means 16 and a signal indicating the frame-by-frame timing (the frame rate), and can thereby switch the gradation conversion characteristic of the gradation conversion means 15 between linear conversion and non-linear conversion in units of frames.
  • In other respects, the fifth embodiment is the same as the fourth embodiment, and substantially the same effects as in the fourth embodiment are obtained.
  • FIGS. 20A to 20G are timing charts showing the operation of the imaging apparatus of the fifth embodiment.
  • FIGS. 20 (A) to (G) are generally the same as FIGS. 18 (A) to (G), but they differ in the following points.
  • the correction amount determination unit 16 instructs the conversion characteristic control unit 17 so that the gradation conversion unit 15 performs linear conversion.
  • the gradation conversion characteristics when performing linear conversion are expressed as Ft (b).
  • the output of the gradation conversion means 15 is a signal proportional to the imaging output of the solid-state imaging device 1.
  • The luminance distribution detecting means 14 generates a histogram based on the output Sg1 of the gradation converting means 15 and detects the maximum level value Sga1 and the minimum level value Sgb1, and the correction amount determining means 16 determines the gain correction amount Ka1 and the offset correction amount Kb1 based on the output from the luminance distribution detecting means 14.
  • Sga1 and Sgb1 are used as the maximum level value Xa and the minimum level value Xb in the description of the first embodiment.
  • Thereafter, the tone conversion means 15 performs tone conversion with the non-linear tone conversion characteristic (denoted Ft (a)) selected by the conversion characteristic control means 17 based on the output of the correction amount determination means 16.
  • The gain correction amount Ka1 and the offset correction amount Kb1 determined based on the data of a certain frame are used to correct the image data of only the next frame (F2), or of several frames from the next frame onward (e.g., F2 and F3).
  • FIG. 21 is a flowchart showing a processing procedure of the fifth embodiment.
  • FIG. 21 is generally the same as FIG. 10 except that step S14 and step S15 are added.
  • In step S14, the linear gradation conversion characteristic Ft (b) is set as the gradation conversion characteristic.
  • In step S15, the non-linear gradation conversion characteristic Ft (a) is set as the gradation conversion characteristic.
  • The processes of step S14 and step S15 are performed by the conversion characteristic control means 17.
  • In the fifth embodiment, when taking a still image, once the offset correction amount Kb1 and the gain correction amount Ka1 have been determined based on the data of a certain frame, the correction can be performed using that offset correction amount Kb1 and gain correction amount Ka1 over several frames, without providing the inverse gradation conversion means 13 of the fourth embodiment; this eliminates the need to calculate the gain correction amount Ka and the offset correction amount Kb for every frame and shortens the time required for imaging.
  • In the embodiments described above, the luminance distribution detecting means (11, 14) generates a histogram and obtains the maximum level value Xa and the minimum level value Xb from it.
  • The frequency component of the contrast changes in the details of the subject, such as fingerprint lines, facial features, ears, and nose, is higher than the frequency component of the contrast change due to the illumination light, so by detecting the low-frequency component of the A/D output Sc and the amplitude of its high-frequency component, the local maximum level value and minimum level value within one frame can be obtained.
  • FIG. 22 shows an example of such a luminance distribution detection means 18.
  • FIGS. 23 (A) and (B) are waveform diagrams showing the operation of the luminance distribution detection means 18 of FIG. 22. Note that although the data to be processed are digital signals, in FIGS. 23 (A) and (B) they are represented by continuous curves connecting the digital signal values.
  • The luminance distribution detecting means 18 of FIG. 22 can be used in place of the luminance distribution detecting means 14 of FIG. 16 or FIG. 19.
  • the luminance distribution detecting means 18 shown in FIG. 22 includes a low frequency component extracting means 19, a high frequency component extracting means 20, an amplitude detecting means 21, a subtractor 22, and an adder 23.
  • The low-frequency component extraction means 19 receives the A/D output Sc (indicated by the solid line in FIG. 23 (A)) and extracts its low-frequency component (indicated by the dotted line LS in FIG. 23 (A)). This low-frequency component can also be viewed as representing the local average of the A/D output Sc.
  • As the low-frequency component extraction means 19, a median filter or an epsilon filter can be used in addition to an ordinary digital LPF.
  • The local average here means an average over a region of several pixels (for example, 3 horizontal × 3 vertical pixels) to several hundred pixels (for example, 30 horizontal × 30 vertical pixels).
  • Depending on the cut-off frequency and frequency response of the low-frequency component extraction means 19, it is possible to detect either luminance changes in local regions or global, screen-wide luminance changes. Changes in luminance due to the illumination light can be detected as global or overall luminance changes.
  • By adjusting the cut-off frequency of the low-frequency component extraction means 19, the size (area) of the local region can be adjusted, and the luminance change corresponding to that local region size can be detected. In other words, a change in luminance can be detected appropriately both when the illumination light enters the screen as a spot, like a night illumination lamp, and when it illuminates the whole screen uniformly, like a fluorescent lamp in a room.
  • For example, the cut-off frequency and frequency response of the low-frequency component extraction means 19 are set so that changes over regions of several thousand pixels (for example, 50 horizontal × 50 vertical pixels) to tens of thousands of pixels (for example, 200 horizontal × 200 vertical pixels) can be detected.
  • The high-frequency component extracting means 20 receives the A/D output Sc (or the output of the inverse gradation converting means 13) and extracts its high-frequency component HS (FIG. 23 (B)).
  • a normal digital HPF can be used as the high-frequency component extraction means 20.
  • The amplitude detection means 21 detects the maximum amplitude AM (1/2 of the difference between the peak level value (LP) and the bottom level value (LB)) of the output HS of the high-frequency component extraction means 20 in the local region.
  • The highest and lowest values in the local region may be used as the peak level value and the bottom level value.
  • Alternatively, the n-th highest value (n being a natural number) may be used as the peak level value, and the n-th lowest value as the bottom level value.
  • As shown in FIG. 24, the amplitude detection means 21 includes a histogram generation unit 211, a peak level value detection unit 212, a bottom level value detection unit 213, and a calculation unit 215.
  • A histogram is generated for the data in the above-mentioned local region among the data representing the high-frequency component HS extracted by the high-frequency component extraction means 20, and the peak level value and the bottom level value are detected in the same manner as described with reference to FIG. 4.
  • the peak level value detecting unit 212 and the bottom level value detecting unit 213 constitute a peak / bottom level value detecting unit 214.
  • the subtracter 22 subtracts the output AM of the amplitude detector 21 from the output LS of the low frequency component extraction means 19.
  • the subtraction result is used as the minimum level value Xb.
  • the adder 23 adds the output LS of the low frequency component extraction means 19 and the output AM of the amplitude detector 21. The addition result is used as the maximum level value Xa.
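  • A one-dimensional sketch of this detection path (low-frequency component by a moving average standing in for the LPF, median filter, or epsilon filter; high-frequency component as the residual; amplitude from the n-th highest and n-th lowest local values; Xa and Xb formed by the adder and subtracter). The window size, the value of n, and the 1-D simplification are assumptions for illustration.
```python
import numpy as np

def detect_levels_lowpass_highpass(sc_line, window=31, n=3):
    """Return per-pixel (Xa, Xb) estimates for one line of A/D output Sc.

    Low-frequency component LS: moving average (a stand-in for the digital LPF,
    median filter, or epsilon filter).  High-frequency component HS: Sc - LS.
    Amplitude AM: half the difference between the n-th highest (peak) and
    n-th lowest (bottom) values of HS in the local window.
    """
    sc = np.asarray(sc_line, dtype=np.float64)
    kernel = np.ones(window) / window
    ls = np.convolve(sc, kernel, mode="same")         # low-frequency component LS
    hs = sc - ls                                      # high-frequency component HS

    xa = np.empty_like(sc)
    xb = np.empty_like(sc)
    half = window // 2
    for i in range(len(sc)):
        local = np.sort(hs[max(0, i - half):i + half + 1])
        peak, bottom = local[-n], local[n - 1]        # n-th highest / n-th lowest
        am = (peak - bottom) / 2.0                    # maximum amplitude AM
        xa[i] = ls[i] + am                            # adder 23: maximum level value Xa
        xb[i] = ls[i] - am                            # subtracter 22: minimum level value Xb
    return xa, xb
```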
  • Since the correction amount determination means 12 includes a horizontal counter and a vertical counter, the timing at which the imaging output is supplied to the correction means 6 via the imaging signal memory means 5 and the timing at which the offset correction amount Kb and the gain correction amount Ka are generated in the correction control means 7 can be correlated with each other.
  • By using the luminance distribution detecting means 18 of the sixth embodiment, local changes in average luminance can be corrected without the need to specify areas in advance, and the continuity of the contrast detection results between regions can be improved (that is, large differences in the contrast detection results between adjacent regions can be prevented).
  • The signal of the bright area A1 (for example, the signal indicated by reference numeral Fa in FIG. 26(B)) is offset-corrected and then gradation-converted; by doing so, its high-frequency components are reproduced with sufficient contrast.
  • The signal of the dark area A2 is gradation-converted without offset correction (or with only gain correction, if necessary), and is likewise reproduced with sufficient contrast.
  • FIG. 26(C) shows the signal obtained by applying the correction according to the present embodiment to the signal of FIG. 26(B); as shown in FIG. 26(C), a clear signal can be output.
  • FIGS. 26(A) to 26(C) are examples; the same effect can be obtained for other signals whose contrast is to be enhanced.
  • The correction means, correction control means, gradation conversion means, and image processing means of the above embodiments can be realized at least partially by software, that is, by a programmed computer. Although these means according to the embodiments of the present invention have been described above, the corresponding correction, correction control, gradation conversion, and image processing methods are also part of this invention.
  • The present invention can be applied to an imaging apparatus having gradation correction means suitable for detecting small changes in luminance in the details of a subject, such as characters on a document, human facial features, blood vessel patterns, and fingerprint patterns.
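A minimal per-scanline sketch of the embodiment-6 detection steps listed above, assuming a moving-average filter as the low-frequency component extraction means 19, (input minus low-pass) as the high-frequency component extraction means 20, and an nth-highest/nth-lowest amplitude measure for means 21. The function name, window size, and the simplification of computing AM over the whole scanline (rather than per local region) are assumptions for illustration only.

```python
import numpy as np

def detect_levels_scanline(sc, box=51, n=3):
    """Sketch: LS from a moving average (means 19), HS = Sc - LS (means 20),
    amplitude AM from the nth highest/lowest HS values (means 21), and
    Xb = LS - AM, Xa = LS + AM (subtracter 22 / adder 23)."""
    sc = np.asarray(sc, dtype=np.float64)
    pad = box // 2
    kernel = np.ones(box) / box
    ls = np.convolve(np.pad(sc, pad, mode="edge"), kernel, mode="valid")  # LS
    hs = sc - ls                                                          # HS
    srt = np.sort(hs)
    am = (srt[-n] - srt[n - 1]) / 2.0    # AM: half of (peak - bottom)
    return ls + am, ls - am              # Xa, Xb per pixel
```

Enlarging `box` corresponds to lowering the cut-off frequency of means 19, i.e., detecting luminance changes over a larger local region.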

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

Gradation converting means (8) has a nonlinear conversion characteristic such that when the input is in a first range, the variation of the output with the variation of the input is small, and when the input is in a second range, the variation of the output with the variation of the input is large. Correction control means (7) performs an offset correction of a signal (Sd) proportional to the imaging output of a solid-state imaging element (1) when the number of components of the imaging signal (Sf) before the gradation conversion lying in the first range is large, and determines the correction value in such a way that the number of components in the second range increases. As a result, differences in brightness between small areas of the image can be reproduced clearly.

Description

Specification
Imaging Device and Gradation Conversion Method in an Imaging Device
Technical Field
[0001] The present invention relates to an imaging apparatus that performs a predetermined gradation conversion process on image data having a signal value for each pixel obtained by an imaging element, and more particularly to an imaging apparatus and imaging method having gradation correction means suited to detecting small changes in luminance in the details of a subject, such as characters on a document, human facial features, blood vessel patterns, and fingerprint patterns.
Background Art
[0002] As gradation (correction) control in a conventional imaging device, there is a device that is provided with a plurality of nonlinear gradation conversion characteristics and realizes an expanded dynamic range for the subject by selecting a gradation conversion characteristic according to the luminance of the image (the maximum value of the average brightness) (see, for example, Patent Document 1).
[0003] Patent Document 1: Japanese Patent Application Laid-Open No. 2004-363726 (page 9, FIG. 5)
Disclosure of the Invention
Problems to Be Solved by the Invention
[0004] However, when a small change in the luminance of the details of a captured image (detected as a high-frequency image signal) is superimposed on a large change in the luminance of the entire screen (detected as a low-frequency image signal), the dynamic range of the imaging device is optimized for the low-frequency signal component. There has therefore been a problem that small luminance differences in the details of the subject image, such as human facial features, blood vessel patterns, and fingerprint patterns, cannot be reproduced clearly.
Means for Solving the Problem
[0005] An imaging apparatus according to the present invention comprises: a solid-state imaging element; luminance distribution detection means for detecting, from an imaging signal proportional to the imaging output obtained from the solid-state imaging element, a luminance distribution and its maximum and minimum levels; correction means for performing at least one of offset correction and gain correction on the imaging signal; and correction amount determination means for controlling the correction amount, based on the maximum level and the minimum level of the luminance distribution detected by the luminance distribution detection means, so as to expand the range of change of the corrected imaging signal output from the correction means.
[0006] The imaging apparatus may further comprise subject shape recognition means for recognizing the shape of a subject on the screen from a feature quantity of the imaging signal proportional to the imaging output obtained from the solid-state imaging element and outputting a subject area signal indicating the subject area. In that case, the luminance distribution detection means detects the luminance distribution within the subject area based on the subject area signal output from the subject shape recognition means, and the correction amount determination means controls the correction amount, based on the maximum level and the minimum level of the luminance distribution within the subject area detected by the luminance distribution detection means, so as to expand the range of change of the corrected imaging signal within the subject area output from the correction means.
Effects of the Invention
[0007] According to the present invention, the offset correction, gain correction, and gradation conversion characteristic applied to the imaging signal are adaptively controlled based on the luminance distribution of the imaging signal and its maximum and minimum levels, so that small luminance differences in the details of the image can be reproduced clearly.
In addition, the shape of the subject in the captured image is detected, and the offset correction, gain correction, and gradation conversion characteristic of the imaging signal are adaptively controlled based on the luminance distribution of the imaging signal within the subject area and its maximum and minimum levels, so that small luminance differences in the details of the subject image can be reproduced even more clearly.
Brief Description of the Drawings
[0008] FIG. 1 is a block diagram showing an imaging apparatus according to Embodiment 1 of the present invention.
FIGS. 2(A) to 2(C) are diagrams showing an example of the conversion characteristics of the gradation conversion means of Embodiment 1.
FIG. 3 is a block diagram showing an example of the luminance distribution detection means used in Embodiment 1.
FIG. 4 is a diagram showing an example of a histogram generated by the histogram generation unit of FIG. 3.
FIG. 5 is a diagram for explaining the method of determining the offset correction amount and the gain correction amount by the correction amount determination means of FIG. 1.
FIG. 6 is an example of a table used when the gain correction amount is obtained from the minimum level value detected by the luminance distribution detection means of FIG. 1.
FIGS. 7(A) and 7(B) are diagrams showing an example of control of the gradation conversion means by the correction amount determination means in Embodiment 1.
FIGS. 8(A) and 8(B) are diagrams showing an example of control of the gradation conversion means by the correction amount determination means in Embodiment 1.
FIGS. 9(A) to 9(G) are timing charts showing the operation of the imaging apparatus of Embodiment 1.
FIG. 10 is a flowchart showing the processing procedure of Embodiment 1.
FIGS. 11(A) to 11(C) are diagrams showing the influence of the correction according to Embodiment 1 on the signal after gradation conversion.
FIGS. 12(A) to 12(C) are diagrams showing an example of the gradation correction operation according to Embodiment 1.
FIG. 13 is a diagram showing an example of screen division in Embodiment 2 of the present invention.
FIG. 14 is a flowchart showing the processing procedure of Embodiment 2.
FIGS. 15(A) to 15(G) are timing charts showing the operation of Embodiment 3 of the present invention.
FIG. 16 is a block diagram showing the configuration of an imaging apparatus according to Embodiment 4 of the present invention.
FIG. 17 is a flowchart showing the processing procedure of Embodiment 4.
FIGS. 18(A) to 18(G) are timing charts showing the operation of the imaging apparatus of Embodiment 4.
FIG. 19 is a block diagram showing Embodiment 5 of the present invention.
FIGS. 20(A) to 20(G) are timing charts showing the operation of Embodiment 5.
FIG. 21 is a flowchart showing the processing procedure of Embodiment 5.
FIG. 22 is a block diagram showing an example of the luminance distribution means used in Embodiment 6.
FIGS. 23(A) and 23(B) are waveform diagrams showing the operation of the luminance distribution means 31 of FIG. 22.
FIG. 24 is a block diagram showing an example of the amplitude detection means of FIG. 22.
FIGS. 25 is a diagram showing an example of a screen including areas of different brightness.
FIGS. 26(A) to 26(C) are diagrams showing an example of the gradation correction operation according to Embodiment 6.
Explanation of Symbols
1 solid-state imaging element, 2 amplification means, 3 A/D conversion means, 4 exposure control means, 5 imaging signal memory means, 6 correction means, 61 offset correction means, 62 gain correction means, 7 correction control means, 8 gradation conversion means, 9 image processing means, 10 subject shape recognition means, 11 luminance distribution detection means, 12 correction amount determination means.
Best Mode for Carrying Out the Invention
[0010] Embodiment 1.
FIG. 1 is a block diagram showing the configuration of an imaging apparatus according to Embodiment 1 of the present invention. The imaging apparatus according to Embodiment 1 comprises a solid-state imaging element 1, amplification means 2, A/D conversion means 3, exposure control means 4, imaging signal memory means 5, correction means 6, correction control means 7, gradation conversion means 8, image processing means 9, and subject shape recognition means 10.
The correction means 6 is composed of offset correction means 61 and gain correction means 62, and the correction control means 7 is composed of luminance distribution detection means 11 and correction amount determination means 12.
[0011] The solid-state imaging element 1 photoelectrically converts the light incident from the subject for each pixel and outputs an output signal of the solid-state imaging element (sometimes referred to as the "imaging output"). The solid-state imaging element 1 has photoelectric conversion elements constituting a plurality of pixels arranged two-dimensionally, that is, in the horizontal and vertical directions, and each photoelectric conversion element outputs an analog signal whose magnitude corresponds to the amount of incident light. The signal from the solid-state imaging element 1 is output sequentially, pixel by pixel, along each horizontal line; such a per-pixel signal is sometimes called a pixel signal.
[0012] The solid-state imaging element 1 is, for example, a CCD image sensor, and control corresponding to the shutter speed and the opening and closing of a shutter can be performed by controlling the photocharge accumulation time and the timing at which the accumulated charge is read out. For this reason, the control of the charge accumulation time of the solid-state imaging element 1 is sometimes called an electronic shutter function. In this embodiment, the charge accumulation time is controlled by the exposure control means 4.
[0013] The solid-state imaging element 1 may be a monochrome sensor or a color sensor. The color sensor may have a primary color filter array or a complementary color filter array. The solid-state imaging element 1 may also be one using a CMOS image sensor.
[0014] The luminance signal Y of an image sensor using primary color filters is obtained as
Y = 0.299 × R + 0.587 × G + 0.114 × B.
The luminance signal of an image sensor using complementary color filters is obtained as
Y = (Cy + Mg) + (Ye + G)
or
Y = (Cy + G) + (Ye + Mg).
In a monochrome image sensor, the Y signal is used as it is.
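As a minimal numeric illustration of the primary-color-filter formula above (a sketch only; the function name and the 8-bit input range are assumptions, not part of the original text):

```python
def luminance_y(r: float, g: float, b: float) -> float:
    """Luminance from primary-color (RGB) samples, per Y = 0.299R + 0.587G + 0.114B."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Example: a mid-gray pixel (128, 128, 128) gives Y = 128.0.
print(luminance_y(128, 128, 128))
```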
[0015] The amplification means 2 amplifies the imaging output of the solid-state imaging element 1 and outputs an imaging signal proportional to the imaging output. The output of the amplification means 2 is sometimes called the amplified output. The amplification gain of the amplification means 2 is controlled by the exposure control means 4.
[0016] The A/D conversion means 3 performs A/D (analog-to-digital) conversion on the output of the amplification means 2 and outputs a digital signal Sc (sometimes called the "A/D output" or "imaging data"). The digital signal represents the luminance of each pixel, and the digital signal of each pixel is sometimes called pixel data. The A/D output Sc is also an imaging signal proportional to the imaging output of the solid-state imaging element 1.
[0017] The exposure control means 4 controls the exposure amount of the solid-state imaging element 1 and the amplification gain of the amplification means 2 according to the value of the luminance (Y) signal or the green (G) signal of the A/D output, which is proportional to the imaging output.
[0018] In this embodiment the exposure amount is controlled by controlling the charge accumulation time of the solid-state imaging element 1, but it may instead be realized by controlling the amount of light of the light source illuminating the subject, by iris control that adjusts the aperture opening, or by a combination of these.
[0019] The imaging signal memory means 5 has a memory capacity for holding the A/D output (imaging data) in units of frames, and while one frame of the imaging signal is being written to the memory, the imaging signal written one frame earlier can be read out. The signal Sd read from the imaging signal memory means 5 is sometimes called the "imaging signal memory output".
[0020] The offset correction means 61 is composed of, for example, a subtracter; it subtracts the offset correction amount (Kb) supplied from the correction amount determination means 12 from the output of the imaging signal memory 5 and outputs the result of the subtraction to the gain correction means 62. The output of the offset correction means 61 is sometimes called the "offset correction output".
[0021] The gain correction means 62 is composed of, for example, a multiplier; it multiplies the output of the offset correction means 61 by the gain correction amount (Ka) supplied from the correction amount determination means 12 and outputs the result of the multiplication to the gradation conversion means 8. The output Sf of the gain correction means 62 is sometimes called the "corrected imaging signal".
図 2 (A)は、階調変換手段 8の階調変換特性の一例を示す。横軸は入力(「階調変 換入力」と呼ぶ)であり、縦軸は出力(「階調変換出力」と呼ぶ)である。図示の例では FIG. 2A shows an example of the gradation conversion characteristics of the gradation converting means 8. The horizontal axis is input (referred to as “gradation conversion input”), and the vertical axis is output (referred to as “gradation conversion output”). In the example shown
、入力は 10ビットの数値であり、最小が 0であり、最大が 1023であり、出力は 8ビット の数値であり、最小が 0であり、最大が 255である。 The input is a 10-bit number, the minimum is 0, the maximum is 1023, and the output is an 8-bit number, the minimum is 0, and the maximum is 255.
[0023] 図 2 (A)の階調変換特性横軸の下方の図 2 (B)に入力信号 Sfの一例が示され、そ のような入力信号に対する出力信号 Sgが、階調変換特性曲線の右方の図 2 (C)に 示されている。 [0023] Fig. 2 (B) below the horizontal axis of the gradation conversion characteristics in Fig. 2 (A) shows an example of the input signal Sf, and the output signal Sg for such an input signal is represented by a gradation conversion characteristic curve. It is shown in Fig. 2 (C) on the right side of.
[0024] 図示の階調変換特性は、入力が小さい部分 (乃至範囲)では、入力の増加に対す る出力の増加が比較的大きく(特性曲線の傾きが比較的急であり)、入力が大きくな るに連れて、次第に特性曲線の傾きが緩やかになるものである。図示の例ではまた、 入力が最大値の略 1Z2以下では、入力に対する出力の変化が直線的である。この ような特性は、比較的輝度の低い部分を比較的明るぐかつ高いコントラストで再現 するとともに、比較的輝度の高い部分も階調再現でき、ダイナミックレンジの広い画像 を得ることができると言う利点がある。  In the gradation conversion characteristics shown in the figure, in the portion where the input is small (or range), the increase in output relative to the increase in input is relatively large (the slope of the characteristic curve is relatively steep), and the input is large. As a result, the slope of the characteristic curve gradually becomes gentler. In the example shown in the figure, the change of the output with respect to the input is linear when the input is less than the maximum value of approximately 1Z2. This characteristic is advantageous in that a relatively low-brightness part can be reproduced with relatively bright and high contrast, and a relatively high-brightness part can be reproduced with gradation, resulting in an image with a wide dynamic range. There is.
[0025] なお、階調変換特性は、 γ特性、折れ線特性などを用いることができ、モニターの 特性、被写体の状態、照明光の状態など種々の撮像条件を考慮して決められるもの であるが、本発明は、入力の範囲のうちの一部について他の部分よりも特性曲線の 傾きを急にして信号の変化を強調した 、場合、即ち特性極性の傾きが比較的緩やか な第 1の部分 (範囲)と、特性曲線の傾きが比較的急な第 2の部分 (範囲)とを有する 場合に適用できる。  [0025] It should be noted that the tone conversion characteristics can be γ characteristics, polygonal line characteristics, and the like, and are determined in consideration of various imaging conditions such as monitor characteristics, subject conditions, illumination light conditions, and the like. The present invention emphasizes the change in the signal by making the slope of the characteristic curve steeper than that of the other parts of the input range, that is, the first part where the slope of the characteristic polarity is relatively gentle. This can be applied to the case where it has (range) and the second part (range) where the slope of the characteristic curve is relatively steep.
[0026] 階調変換手段 8は、例えばルックアップテーブル (LUT)により実現することができ る。また、変換特性曲線が折れ線で表されるものである場合、比較的簡単な演算によ り実現することが可能であり、ハードウェアロジックで実現することも可能である。  The gradation conversion means 8 can be realized by a look-up table (LUT), for example. Further, when the conversion characteristic curve is represented by a broken line, it can be realized by a relatively simple calculation, and can also be realized by hardware logic.
[0027] 補正量決定手段 8の出力 Kcは、階調変換手段 8の階調変換特性を切替えるため の切替信号である。例えば、階調変換手段 8内に、複数の階調変換特性を保持でき るハードウェア構成の場合、高速にフレーム単位で階調変換特性を切替えることが 可能である。また、階調変換手段 8の階調変換特性を折れ線で構成する場合は、補 正量決定手段 8の出力 Kcは、折れ線特性を実現するための、折れ点位置の座標値 (入力値及び出力値)などをフレーム単位で、階調変換手段 8に供給するものであつ ても良い。この場合、階調変換手段 8に複数の階調変換特性を実現するための記憶 領域を持つ必要が無ぐゲート規模、メモリ容量を削減し得る効果がある。 [0027] The output Kc of the correction amount determination means 8 is used to switch the gradation conversion characteristics of the gradation conversion means 8. Switching signal. For example, in the case of a hardware configuration in which the gradation conversion means 8 can hold a plurality of gradation conversion characteristics, it is possible to switch the gradation conversion characteristics in units of frames at high speed. When the gradation conversion characteristic of the gradation converting means 8 is configured by a polygonal line, the output Kc of the correction amount determining means 8 is the coordinate value (input value and output) of the broken point position for realizing the polygonal line characteristic. Value) etc. may be supplied to the gradation converting means 8 in units of frames. In this case, there is an effect that the gate scale and the memory capacity can be reduced because the gradation converting means 8 does not need to have a storage area for realizing a plurality of gradation conversion characteristics.
[0028] 画像処理手段 9は、ホワイトバランス制御、べィヤー配列やノヽ-カム配列の固体撮 像素子 1の出力力 各画素の RGB信号を生成するための補間信号処理、ノイズ除 去、 RGB信号から YCbCr信号に変換する処理、モニターへ表示するための画質改 善の処理など一般的なカメラの信号処理を実現するための機能を有する。  [0028] The image processing means 9 includes white balance control, output power of the solid-state image pickup device 1 having a Bayer array or a No-cam array, interpolation signal processing for generating an RGB signal of each pixel, noise removal, RGB signal It has functions to realize general camera signal processing, such as conversion from YCbCr signals to YCbCr signals and image quality improvement processing for display on a monitor.
また、文字認識、顔 '静脈'指紋の認識などの画像処理を実現する。  It also realizes image processing such as character recognition and facial 'vein' fingerprint recognition.
[0029] なお、オフセット補正手段 61、利得補正手段 62、階調変換手段 8、及び画像処理 手段 9ίま、 ASIC (application— specific integrated circuit)、 FPGA (field pr ogrammable gate array)などハードウェアを用いて構成することもでき、ソフトゥ エアで、即ちプログラムされたコンピュータで構成することもできる。  [0029] Note that offset correction means 61, gain correction means 62, gradation conversion means 8, and image processing means 9ί, hardware such as ASIC (application-specific integrated circuit), FPGA (field programmable gate array), etc. are used. It can also be configured with software, that is, with a programmed computer.
[0030] 画像処理手段 9を、パーソナルコンピュータ、高性能の組み込みマイコン、 DSP (デ ジタル'シグナル 'プロセッサ)等で構成することで、統計系的演算、パターン認識を 行い、これにより、ノイズ成分除去や複雑な信号処理を容易に実現することができる。  [0030] By configuring the image processing means 9 with a personal computer, high-performance embedded microcomputer, DSP (digital 'signal' processor), etc., statistical computation and pattern recognition are performed, thereby eliminating noise components. And complicated signal processing can be easily realized.
[0031] 輝度分布検出手段 11は、図 3に示すように、ヒストグラム生成部 111、最大レベル 値検出部 112、及び最小レベル値検出部 113を備える。最大レベル値検出部 112と 最小レベル値検出部 113とで最大 ·最小レベル値検出手段 114が構成されて ヽる。  As shown in FIG. 3, the luminance distribution detection means 11 includes a histogram generation unit 111, a maximum level value detection unit 112, and a minimum level value detection unit 113. The maximum level value detection unit 112 and the minimum level value detection unit 113 constitute the maximum / minimum level value detection means 114.
[0032] 輝度分布検出手段 11は、被写体形状認識部手段 10 (詳細な機能は後述する)か ら、被写体と認識される画素領域か、被写体と認識されない画素領域の検出結果を もとに、被写体と認識される画素領域のみの輝度分布を検出する。  [0032] The luminance distribution detection means 11 is based on a detection result of a pixel area that is recognized as a subject or a pixel area that is not recognized as a subject, from a subject shape recognition section means 10 (detailed functions will be described later). A luminance distribution of only a pixel region recognized as a subject is detected.
被写体ではない画素領域の信号レベルが、被写体の信号レベルと異なる場合、例 えば被写体の信号レベルに比べて低 ヽ或 、は高 、信号である場合に、被写体の信 号レベルを有効に検出することが可能となり、検出精度が向上する。 [0033] なお、輝度分布検出手段 11は、 1フレーム以上の画像情報を記憶することも可能 である。複数のフレームごとの輝度分布を検出することで、フレーム間の輝度分布の 検出誤差を小さくすることができ、輝度分布検出手段 11の出力信号の精度を向上す ることがでさる。 When the signal level of the pixel area that is not the subject is different from the signal level of the subject, for example, when the signal level is lower or higher than the signal level of the subject, the signal level of the subject is detected effectively. And detection accuracy is improved. Note that the luminance distribution detection means 11 can also store image information of one frame or more. By detecting the luminance distribution for each of a plurality of frames, the detection error of the luminance distribution between the frames can be reduced, and the accuracy of the output signal of the luminance distribution detecting means 11 can be improved.
[0034] 輝度分布検出手段 11は、 1フレーム分の AZD出力 Scから、被写体形状認識部 1 0により被写体と認識された領域の画素データから緑信号の信号レベルや、輝度信 号 (Y)の信号レベルにっ 、て、ヒストグラム生成部 111にお 、て輝度信号レベル(階 調)の分布 (輝度ヒストグラム)を生成し、最大レベル値検出部 112、及び最小レベル 値検出部 113によりその輝度ヒストグラムカゝら被写体の有効輝度信号レベルの最大レ ベル値及び最小レベル値を求める。  [0034] The luminance distribution detection means 11 uses the AZD output Sc for one frame from the pixel data of the region recognized as the subject by the subject shape recognition unit 10 and the signal level of the green signal and the luminance signal (Y). The histogram generation unit 111 generates a luminance signal level (gradation) distribution (brightness histogram) according to the signal level, and the maximum level value detection unit 112 and the minimum level value detection unit 113 generate the luminance histogram. Then, obtain the maximum and minimum level values of the effective luminance signal level of the subject.
ここで、「有効輝度信号レベルとは、画素欠陥、ノイズなどによる孤立した値 (他の値 に比べて例外的に大き 、値、例外的に小さ!、値)以外の信号レベルを言う。  Here, “effective luminance signal level” refers to a signal level other than an isolated value (exceptionally large, value, exceptionally small !, value compared to other values) due to pixel defects, noise, and the like.
[0035] 被写体形状認識部 10は、被写体の形状パターンを認識し、有効な被写体領域を 検出する。この被写体領域は、被写体の特徴領域 (指紋認証の場合は指、静脈認証 の場合は掌や指、顔認証の場合は顔に対応する領域)を意味する。  The subject shape recognition unit 10 recognizes the shape pattern of the subject and detects an effective subject area. This subject region means a feature region of the subject (a region corresponding to a finger in the case of fingerprint authentication, a palm or finger in the case of vein authentication, and a face in the case of face authentication).
[0036] 前記被写体領域の検出結果の有効性の判定については、その被写体固有の形状 、例えば、指紋形状、静脈形状、目、鼻、口の相対位置関係などの幾何学的特徴に より判定する方法や、色や輝度変化パターンの特徴により判定する方法があるが詳 細は省略する。  [0036] The validity of the detection result of the subject area is determined by a geometric characteristic such as a shape unique to the subject, for example, a fingerprint shape, a vein shape, a relative positional relationship between eyes, nose, and mouth. There are a method and a method of judging based on the characteristics of the color and luminance change pattern, but details are omitted.
[0037] なお、輝度ヒストグラムの生成及び有効輝度信号レベルの最大レベル値及び最小 レベル値の検出は被写体領域内についてのみ行っても良いし、被写体領域内と被 写体領域外のそれぞれについて行っても良いが、以下簡単のために、被写体領域 内のみの例に限定して説明する。  [0037] It should be noted that the generation of the luminance histogram and the detection of the maximum and minimum level values of the effective luminance signal level may be performed only within the subject region, or may be performed within the subject region and outside the subject region. However, for the sake of simplicity, the following description will be limited to an example in the subject area only.
[0038] 被写体形状認識手段 10は、輝度分布検出手段 11へ、被写体と認識した場合は、 認識したことを示す値、例えば「1」を有する信号(「認識信号」或いは「領域内信号」 ) を出力し、認識できない場合は、認識しないことを示す値、例えば「0」を有する信号( 「認識不可信号」或いは「領域外信号」)を出力する。輝度分布検出手段 11は、前述 のように、輝度分布の検出要否を判断することが可能となる。 なお、被写体形状認識手段 10の出力が、補正量決定手段 12を制御する構成でも 良い。この場合は、補正量決定手段 12内で、輝度分布検出手段 11の出力と、被写 体形状認識手段 10の出力をもとに、補正量を設定することが可能となる。 [0038] When the subject shape recognition unit 10 recognizes the subject as a subject to the luminance distribution detection unit 11, a signal having a value indicating the recognition, for example, "1"("recognitionsignal" or "in-region signal") If the signal cannot be recognized, a signal indicating that the signal is not recognized, for example, a signal having “0” (“recognition impossible signal” or “out-of-region signal”) is output. The luminance distribution detecting means 11 can determine whether or not the luminance distribution needs to be detected as described above. Note that the output of the subject shape recognition means 10 may control the correction amount determination means 12. In this case, the correction amount can be set in the correction amount determination unit 12 based on the output of the luminance distribution detection unit 11 and the output of the object shape recognition unit 10.
[0039] 図 4は、ヒストグラム生成部 111が生成したヒストグラムの一例を示す図である。横軸 は階調を示すが、画素データ Scの 0から 1023の階調を 32個の区分に分割し、各区 分の階調値の平均値を代表値として表している。即ち、ヒストグラム生成部 111は、 3 2階調ごと、すなわち 1024階調分を 32個の区分に分割して度数を計数した場合のヒ ストグラムを生成している。 FIG. 4 is a diagram illustrating an example of a histogram generated by the histogram generation unit 111. The horizontal axis shows the gradation, but the gradation from 0 to 1023 of the pixel data Sc is divided into 32 sections, and the average value of the gradation values of each section is represented as a representative value. That is, the histogram generation unit 111 generates a histogram when the frequency is counted by dividing every 32 gradations, that is, 1024 gradations into 32 sections.
[0040] このように 1024階調を 32個の区分に分割した場合、 1区分あたりの階調値の数は 上記のように、 32になる。図 4の横軸に示された数字は、その区分内の中心付近の 階調値を代表値として示したものである。例えば、横軸の数値「16」は、階調値 0〜3 1力 成る区分の代表値であり、「16」の位置に示された度数は、 1フレームの画素デ ータ Scに含まれる、階調値 0〜31の画素の数を表す。 [0040] As described above, when 1024 gradations are divided into 32 sections, the number of gradation values per section is 32 as described above. The numbers shown on the horizontal axis in Fig. 4 show the gradation values near the center in the category as representative values. For example, the numerical value “16” on the horizontal axis is a representative value of the division of the gradation value 0 to 3 1 power, and the frequency shown at the position “16” is included in the pixel data Sc of one frame. Represents the number of pixels having gradation values of 0 to 31.
なお、各区分が 1個の階調値から成り、画素データ Scの階調値の数、例えば画素 データが 10ビットの場合は、 0から 1023のように 1024個の階調値の各々について 生成するようにしても良い。  In addition, when each division consists of one gradation value and the number of gradation values of the pixel data Sc, for example, the pixel data is 10 bits, it is generated for each of 1024 gradation values such as 0 to 1023. You may make it do.
[0041] ヒストグラム生成部 111は、 1フレーム分の画素データ Scの階調区分ごとの度数を 計数し、図 4に示すような 1フレーム分のヒストグラムを生成する。 [0041] The histogram generation unit 111 counts the frequency for each gradation division of the pixel data Sc for one frame, and generates a histogram for one frame as shown in FIG.
最大レベル値検出部 112は、生成されたヒストグラムの最も明るい(大きい)階調区 分力も順次より低い区分に累積した度数 Hasが、あらかじめ設定された閾値 Ctaを超 える階調区分を検出し、その区分の代表値を最大レベル値 Seaとして出力する。 同様にして、最小レベル値検出部 113は、生成されたヒストグラムの最も暗い(小さ い)階調区分力も順次より高い区分に累積した度数 Hbsが、あらかじめ設定された閾 値 Ctbを超える階調区分を検出し、その区分の代表値を最小レベル値 Scbとして出 力する。  The maximum level value detection unit 112 detects a gradation division in which the frequency Has accumulated in the lower (lower) division of the brightest (larger) gradation division power of the generated histogram exceeds a preset threshold Cta, and The representative value of the category is output as the maximum level value Sea. In the same manner, the minimum level value detection unit 113 performs the gradation classification in which the frequency Hbs in which the darkest (smallest) gradation classification power of the generated histogram is sequentially accumulated in the higher classification exceeds the preset threshold value Ctb. Is detected and the representative value of that category is output as the minimum level value Scb.
[0042] 以上のように生成されたヒストグラムに対して、階調値が低い区分力 順に度数を累 積することで、暗い側の累積度数 Hbsが得られ、同様に階調値が高い区分力 順に 度数を累積することで明る 、側の累積度数 Hasを得ることができる。累積度数 Hbsが 、あら力じめ設定された閾値 Ctbを超えたところを最小レベル値 Scb (図 4の場合 48) とし、累積度数 Hasが設定値 Ctaを超えたところを最大レベル値 Sea (図 4の場合 84 8)として出力する。 [0042] By accumulating the frequencies in the descending order of the gradation values in the histogram generated as described above, the cumulative frequency Hbs on the dark side is obtained, and similarly, the classification power having the high gradation values is obtained. By accumulating the frequencies in order, it is possible to obtain the brightness and the accumulated frequency Has on the side. Cumulative frequency Hbs If the threshold value Ctb exceeds the preset threshold value Ctb, the minimum level value Scb (48 in Fig. 4) is set, and if the cumulative frequency Has exceeds the set value Cta, the maximum level value Sea (in case of Fig. 4 84) Output as 8).
[0043] なお、ヒストグラムを用いずに、被写体と形状認識された画像領域の輝度信号レべ ルから輝度の最大値と輝度の最小値を検出する構成することにより、ハードウェアの ゲート規模削減やソフト処理時間の短縮が図ることが可能となる。  [0043] It should be noted that the configuration of detecting the maximum luminance value and the minimum luminance value from the luminance signal level of the image area whose shape has been recognized as a subject without using a histogram reduces the hardware gate scale. It is possible to shorten the software processing time.
[0044] これらの最大レベル値 Sca、及び最小レベル値 Scbは、補正量決定手段 12に供給 される。  The maximum level value Sca and the minimum level value Scb are supplied to the correction amount determination means 12.
[0045] 補正量決定手段 12は、輝度分布検出手段 11から供給される最大レベル値 Sea及 び最小レベル値 Scbに基づ!/、て、オフセット補正量 Kb及び利得補正量 Kaを求める 以下、オフセット補正手段 61及び利得補正手段 62による補正の仕方を説明し、そ の後で Kb, Kaの求め方を説明する。  [0045] The correction amount determination means 12 obtains the offset correction amount Kb and the gain correction amount Ka based on the maximum level value Sea and the minimum level value Scb supplied from the luminance distribution detection means 11! The correction method by the offset correction means 61 and the gain correction means 62 will be described, and then the method for obtaining Kb and Ka will be described.
[0046] 仮に輝度分布検出手段 11で求めた最大レベル値 Sea及び最小レベル値 Scbを図 5に示すごとくであるとする。図 1の回路では、 AZD出力 Scがー且撮像信号メモリ 5 に記憶された後、撮像信号メモリ 5から読み出されて補正手段 6に供給される。補正 手段 6に供給される信号は符号 Sdで表されている力 その値自体は AZD出力 Scと 変わらない。また、後述のように、 AZD出力 Scに基づいて定めたオフセット補正量 K b,利得補正量 Kaは、当該 AZD出力 Scが撮像信号メモリ 5に記憶された後読み出 されて補正手段 6で補正されるときに、その補正に用いられる。そこで、以下暫くの間 、AZD出力 Scがそのまま補正手段 6に入力されるものとして説明する。  It is assumed that the maximum level value Sea and the minimum level value Scb obtained by the luminance distribution detection means 11 are as shown in FIG. In the circuit of FIG. 1, the AZD output Sc is stored in the imaging signal memory 5 and then read out from the imaging signal memory 5 and supplied to the correcting means 6. The signal supplied to the correction means 6 is the force represented by the symbol Sd, and its value itself is the same as the AZD output Sc. Further, as will be described later, the offset correction amount Kb and the gain correction amount Ka determined based on the AZD output Sc are read after the AZD output Sc is stored in the imaging signal memory 5 and corrected by the correction means 6. Is used for the correction. Therefore, the following description will be made assuming that the AZD output Sc is input to the correction means 6 as it is for a while.
[0047] また、図 5においては、 AZD出力 Sc及び撮像信号メモリ 5から読み出された信号 S dの値を符号 Xで表し、その 1フレーム内の最大レベル値 Sca、最小レベル値 Scbを それぞれ Xa、 Xbで表し、さらに便宜上補正手段 6の出力を Y、その可変範囲(とり得 る値の範囲)の最大レベル値を Ymで表して!/、る。  Further, in FIG. 5, the value of the AZD output Sc and the signal S d read from the imaging signal memory 5 is represented by the symbol X, and the maximum level value Sca and the minimum level value Scb in the one frame are respectively shown. Expressed by Xa and Xb, and for convenience, the output of correction means 6 is expressed by Y, and the maximum level value of the variable range (the range of possible values) is expressed by Ym.
[0048] 補正手段 6の働きは、その入力 Xが最小レベル値 Xbのときに、補正撮像信号 (補正 手段 6の出力) Y( = Sf)の値が α XYmであり、補正手段 6の入力 Xが最大レベル値 Xaのときに、補正撮像信号 Yの値が(1— j8 ) XYmであるようにするものである。 ここで、 α及び j8はともに例えば 0. 1に定められる。 [0048] The function of the correction means 6 is that when the input X is the minimum level value Xb, the value of the corrected imaging signal (output of the correction means 6) Y (= Sf) is α XYm and the input of the correction means 6 When X is the maximum level value Xa, the value of the corrected imaging signal Y is (1−j8) XYm. Here, α and j8 are both set to 0.1, for example.
このようにすれば、補正手段 6の入力 X(即ち AZD出力 Sc)の平均値がどのような 値であれ、かつ最大レベル値 Xaと最小レベル値 Xbの差が小さい場合にも、最大レ ベル値 Sea及び最小レベル値 Scbが、(1 j8) XYm、 a XYmに変換され、輝度の 変化を a XYm力 (1— ) XYmの範囲に拡大した後で、階調変換手段 8による階 調変換を行うことができる。  In this way, whatever the average value of the input X (i.e., AZD output Sc) of the correction means 6 is, and the difference between the maximum level value Xa and the minimum level value Xb is small, the maximum level is reached. The value Sea and the minimum level value Scb are converted to (1 j8) XYm, a XYm, and the change in luminance is expanded to the range of a XYm force (1—) XYm. It can be performed.
[0049] 但し、本実施の形態では、最小レベル値 Xb ( = Scb)が所定の閾値 Xbt (例 400)よ りも大きいときに限り、補正を行うこととし、最小レベル値 Xb( = Scb)が所定の閾値 X bt (例 400)以下のときは補正を行わない (Kb = 0、 Ka=lとする)こととしている。 なお、最小レベル値 Xbが所定の閾値 Xbt (例 400)以下のときも、最小レベル値 Xb が閾値 Xb りも大き 、ときと同じように補正を行うようにしても良!、。 [0049] However, in the present embodiment, correction is performed only when the minimum level value Xb (= Scb) is larger than a predetermined threshold value Xbt (eg, 400), and the minimum level value Xb (= Scb) Is not corrected (set Kb = 0, Ka = l) when is below a predetermined threshold value X bt (eg 400). Even when the minimum level value Xb is equal to or less than a predetermined threshold value Xbt (eg 400), the minimum level value Xb is larger than the threshold value Xb, and correction may be performed in the same manner as that.
[0050] Xa、 Xb、 Ymの間には以下の関係がある。 [0050] The following relationship exists between Xa, Xb, and Ym.
(Xa-Kb) XKa=(l- j8) XYm …ひ)  (Xa-Kb) XKa = (l- j8) XYm… hi)
(Xb-Kb) XKa= a XYm ·'·(2)  (Xb-Kb) XKa = a XYm
式(1)及び (2)を連立で解くと、  Solving equations (1) and (2) as simultaneous,
Kb={o; XXa—(l— j8) XXb}Z{a— (l— j8)} ·'·(3)  Kb = {o; XXa— (l—j8) XXb} Z {a— (l—j8)} · '· (3)
Ka= [a XYmX{a-(l-j8 )}]/{« X (Xb—Xa) }  Ka = [a XYmX {a- (l-j8)}] / {«X (Xb—Xa)}
…(  … (
a = βの場合には、  If a = β,
Kb={ a XXa-(l- a) XXb}/{2X α -1)} ·'·(5)  Kb = {a XXa- (l- a) XXb} / {2X α -1)} · '· (5)
Ka={ α XYmX (2Χ α~1)}/{ a X (Xb—Xa)}  Ka = {α XYmX (2Χ α ~ 1)} / {a X (Xb—Xa)}
ー(6)  ー (6)
[0051] なお、  [0051] In addition,
a = j8 =0. 1の場合には、  If a = j8 = 0.1, then
Kb={0. lXXa-O. 9XXb}/{2XO. 1— 1)}  Kb = {0. LXXa-O. 9XXb} / {2XO. 1— 1)}
= {0. lXXa-O. 9XXb}/(-0.8) ---(7)  = {0. lXXa-O. 9XXb} / (-0.8) --- (7)
Ka={0. lXYmX (2X0. 1— 1)}  Ka = {0. LXYmX (2X0. 1— 1)}
/{(-O. l) X (Xb-Xa)} = {—0. 08 XYm}/{0. I X (Xb-Xa) } / {(-O. L) X (Xb-Xa)} = {—0. 08 XYm} / {0. IX (Xb-Xa)}
 ...
仮に Xm=Ymとし、  Suppose Xm = Ym,
Xa = 0. 8 XXm、Xb = 0. 4 XXmすると、  Xa = 0.8 XXm, Xb = 0.4 XXm
Kb = 0. 35 XXm "- (9)  Kb = 0.35 XXm "-(9)
Ka= 2 - -- (10)  Ka = 2--(10)
[0052] The correction amount determination means 12 obtains the gain correction amount Ka and the offset correction amount Kb from equations (3) and (4), and supplies the gain correction amount Ka and the offset correction amount Kb obtained in this way to the gain correction means 62 and the offset correction means 61.
The offset correction means 61 subtracts the offset correction amount Kb from the output of the imaging signal memory 5, and the gain correction means 62 multiplies the output Se of the offset correction means 61 by the gain correction amount Ka and outputs the gain-corrected output Y (= Sf).
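The following sketch strings these steps together: the cumulative-frequency detection of Xa and Xb described for FIG. 4, the correction amounts from equations (3) and (4), and the correction Y = (X - Kb) × Ka. It is an illustrative reading of the embodiment, not the embodiment itself; the thresholds, bin count, and function names are assumptions.

```python
import numpy as np

def detect_levels(pixels, bins=32, full_scale=1024, cta=100, ctb=100):
    """Xa/Xb from a 32-section histogram via cumulative-frequency thresholds."""
    hist, edges = np.histogram(pixels, bins=bins, range=(0, full_scale))
    centers = (edges[:-1] + edges[1:]) / 2.0            # representative values
    has = np.cumsum(hist[::-1])[::-1]                   # bright-side cumulative Has
    hbs = np.cumsum(hist)                               # dark-side cumulative Hbs
    idx_a = np.where(has > cta)[0]
    idx_b = np.where(hbs > ctb)[0]
    xa = centers[idx_a[-1]] if idx_a.size else centers[-1]
    xb = centers[idx_b[0]] if idx_b.size else centers[0]
    return xa, xb

def correction_amounts(xa, xb, ym=1023.0, alpha=0.1, beta=0.1):
    """Kb and Ka per equations (3) and (4)."""
    kb = (alpha * xa - (1 - beta) * xb) / (alpha - (1 - beta))
    ka = (1 - alpha - beta) * ym / (xa - xb)             # algebraically equal to eq. (4)
    return kb, ka

def correct(x, kb, ka):
    """Offset then gain correction: Y = (X - Kb) * Ka."""
    return (np.asarray(x, dtype=float) - kb) * ka

# Numeric check against the example in paragraph [0051]:
# with Xm = Ym, Xa = 0.8*Xm, Xb = 0.4*Xm -> Kb = 0.35*Xm, Ka = 2.
ym = 1024.0
kb, ka = correction_amounts(0.8 * ym, 0.4 * ym, ym)
print(kb / ym, ka)   # approximately 0.35 and 2.0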
[0053] As described above, the correction consisting of the subtraction of the offset correction amount Kb in the offset correction means 61 and the multiplication by the gain correction amount Ka in the gain correction means 62 brings the maximum level value Xa of the A/D output, that is, of the input of the correction means 6, to the value (1 - β) × Ym close to the maximum value Ym of the variable range of the output of the correction means 6 (the input of the gradation conversion means 8), brings the minimum level value Xb of the input of the correction means 6 to the value α × Ym close to the minimum value (0) of that variable range, and distributes the values of the input of the correction means 6 other than the maximum level value Xa and the minimum level value Xb uniformly over the range from α × Ym to (1 - β) × Ym.
[0054] The gain correction amount Ka may also be set to a value smaller than the value obtained from equation (4).
[0055] The values of the offset correction amount Kb and the gain correction amount Ka need not be obtained from equations (3) and (4); they may instead be obtained from a table prepared in advance on the basis of ranges of the minimum level value Xb and the maximum level value Xa.
Furthermore, in the above example the offset correction amount Kb and the gain correction amount Ka are obtained from both the minimum level value Xb and the maximum level value Xa; as a simplified method, the offset correction amount Kb may be obtained from the minimum level value Xb alone, and the gain correction amount Ka may be obtained from the relation between the average (intermediate) value of the maximum level value Xa and the minimum level value Xb and the maximum value Ym of the variable range of the output of the correction means.
FIG. 6 is an example of a table used when the gain correction amount Ka is obtained from the minimum level value Xb.
[0056] The correction amount determination means 12 also controls the gradation conversion characteristic of the gradation conversion means 8 by means of the gradation conversion control signal Kc. An example of the operation for controlling the gradation conversion characteristic by the correction amount determination means 12 is described with reference to FIGS. 7(A) and 7(B) and FIGS. 8(A) and 8(B).
[0057] FIG. 7(A) shows a one-frame histogram of the output Sc of the A/D conversion means 3, exhibiting a characteristic in which the low gradations (in the illustrated example, the gradations whose section representative values are 447 or less) appear with high frequency. For a subject in such a state, using the gradation characteristic shown by the broken line TCC1 in FIG. 7(B) makes it possible to produce an output in which the contrast of the low gradations is emphasized.
[0058] FIG. 8(A) shows a characteristic in which the intermediate gradations (in the illustrated example, the gradations whose section representative values are from 447 to 831) appear with high frequency. For a subject in such a state, using the gradation characteristic shown by the broken line TCC2 in FIG. 8(B) makes it possible to produce an output in which the contrast of the intermediate gradations is emphasized.
[0059] In this way, the gradation conversion control signal Kc can be determined on the basis of the histogram of the gradation values of the output Sc of the A/D conversion means 3, and the gradation characteristic can be controlled by the gradation conversion control signal Kc.
[0060] As the control method, several gradation characteristics realized by look-up tables (shown by the broken lines TCC1 and TCC2 in FIGS. 7(B) and 8(B)) may be held and switched on the basis of Kc, or a logic circuit may be used and the coordinate values of the breakpoints of gradation characteristics composed of polygonal lines (shown by the solid lines TCD1 and TCD2 in FIGS. 7(B) and 8(B)) may be changed on the basis of the gradation conversion control signal Kc.
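One possible reading of this control, as a sketch only: the concentration measure, the 0.5 threshold, and the curve labels are assumptions; the embodiment specifies only that Kc is derived from the histogram of the A/D output.

```python
import numpy as np

def choose_characteristic(hist_counts, bin_centers, low_limit=447):
    """Derive a switching decision (cf. Kc) from the gradation histogram:
    pick the low-gradation-emphasis curve (cf. TCC1) when most pixels lie
    at or below 'low_limit', otherwise the mid-gradation curve (cf. TCC2)."""
    hist_counts = np.asarray(hist_counts, dtype=float)
    bin_centers = np.asarray(bin_centers, dtype=float)
    total = max(hist_counts.sum(), 1.0)
    low_share = hist_counts[bin_centers <= low_limit].sum() / total
    return "TCC1" if low_share > 0.5 else "TCC2"
```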
[0061] FIGS. 9(A) to 9(G) are timing charts showing the operation of the imaging apparatus of Embodiment 1. VD in FIG. 9(A) indicates the synchronization signal (vertical synchronization signal) of one frame period (Tf). FIG. 9(B) shows the output timing of the A/D conversion output Sc, FIG. 9(C) shows the readout timing of the imaging memory output Sd, FIG. 9(D) shows the timing at which the maximum level Xa, the minimum level Xb, and the correction amounts Ka and Kb are determined, FIG. 9(E) shows the output timing of the correction amounts, and FIG. 9(F) shows the output timing of the signal Sf from the correction means 6. FIG. 9(G) shows the gradation conversion characteristic Ft used.
[0062] The frame period can be adjusted according to the imaging situation, such as high-speed readout or long exposure; for high-speed readout the frame period is set short (the frame rate is high), and for long exposure the frame period is set long (the frame rate is low).
[0063] It is assumed that the characteristic Ft(a) shown in FIG. 9(G) is used as the gradation conversion characteristic in the gradation conversion means 8.
[0064] In the first frame period F1, the A/D output Sc1 shown in FIG. 9(B) is output and written to the imaging signal memory means 5, and in the following second frame period F2 it is read out as the imaging signal memory output Sd1, as shown in FIG. 9(C). The same applies thereafter; the imaging memory output Sdi (i = 1, 2, ...) has the same content as the A/D output Sci (i = 1, 2, ...).
[0065] In the first frame period F1, the A/D output Sc1 is written to the imaging memory means 5 as shown in FIG. 9(B) and is also supplied to the luminance distribution detection means 11. The luminance distribution detection means 11 receives the A/D output Sc1 and generates a histogram, and in the final blanking period BL of the first frame period F1 the maximum level value Xa1 and the minimum level value Xb1 are detected as shown in FIG. 9(D); the result is supplied to the correction amount determination means 12, which determines, on the basis of the supplied maximum level value Xa1 and minimum level value Xb1, the offset correction amount Kb1 and the gain correction amount Ka1 for the data Sc1 output from the A/D conversion means 3 in the first frame period F1.
As shown in FIG. 9(E), the determined offset correction amount Kb1 and gain correction amount Ka1 are supplied to the offset correction means 61 and the gain correction means 62 in the next, second frame period F2 (during which the data Sd1, identical to the data Sc1 output from the A/D conversion means 3 in the first frame period F1, is output from the imaging signal memory means 5). As a result of the correction using these correction amounts, the signal Sf shown in FIG. 9(F) is output from the correction means 6.
[0066] In the second frame period F2, the memory output Sd1 is read out from the imaging signal memory means 5 (FIG. 9(C)) and the A/D output Sc2 (the data of the next frame) is written to the imaging memory means 5 (FIG. 9(B)).
[0067] In the same way thereafter, when the data Sdi (= Xi), which has the same content as the data Sci output from the A/D conversion means 3 in the i-th frame period Fi (i = 1, 2, ...), is read out from the imaging signal memory means 5 and supplied to the correction means 6, the offset correction amount Kbi and the gain correction amount Kai determined on the basis of the data Sci output from the A/D conversion means 3 in the i-th frame period Fi are supplied to the correction means 6, and the correction is performed.
[0068] This correction is expressed by
Yi = (Xi - Kbi) × Kai  (i = 1, 2, ...)   …(11)
The corrected imaging signal Sf (= Y) undergoes gradation conversion in the gradation conversion means 8 according to the gradation conversion characteristic Ft(a), and the resulting gradation conversion output Sg is output to the image processing means 9.
[0069] FIG. 10 is a flowchart showing the processing procedure of the first embodiment.

[0070] First, imaging is performed in step S1, and the A/D output Sc corresponding to the image signal obtained from the subject 2 is obtained. In step S2, the maximum level value Xa and the minimum level value Xb are detected.

[0071] In step S3, whether correction is to be performed is determined by comparing the minimum level value Xb with a threshold value Xbt (for example, Xbt = 400). If the minimum level value Xb is greater than the threshold value Xbt, it is determined that correction is necessary, and the process proceeds to step S4.

[0072] In step S4, the gain correction amount Ka and the offset correction amount Kb are calculated. The gain correction amount Ka and the offset correction amount Kb are calculated by the method described above for the correction amount determination means 12.

[0073] In step S5, correction processing is performed using the gain correction amount Ka and the offset correction amount Kb. In step S6, gradation conversion is performed.

[0074] If it is determined in step S3 that the minimum level value Xb is equal to or smaller than the threshold value Xbt and that correction is unnecessary, gradation conversion (step S6) is performed without correction, that is, without subtraction by the offset correction means 61 (in other words, with the offset correction amount Kb set to "0") and without the gain being increased by the gain correction means 62 (with the gain correction amount Ka set to "1").

[0075] Of the above, the process of step S1 is performed by the solid-state imaging device 1, the processes of steps S2 and S3 are performed by the luminance distribution detection means 11, the process of step S4 is performed by the correction amount determination means 12, the process of step S5 is performed by the correction means 6, and the process of step S6 is performed by the gradation conversion means 8.
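The flow of steps S2 to S6 can be sketched as follows. The derivation of Ka and Kb from Xa and Xb is an assumption here, following the mapping of the minimum level toward 0 and the maximum level toward full scale described for the correction amount determination means; the threshold Xbt = 400 and the 10-bit range are taken from the example above.

```python
import numpy as np

FULL_SCALE = 1023.0   # assumed 10-bit range
XBT = 400.0           # threshold Xbt from the example in step S3

def detect_levels(frame, low_count=0, high_count=0):
    """Step S2: maximum level value Xa and minimum level value Xb.
    low_count/high_count allow skipping a few extreme samples, as a
    histogram-based detector might, to reduce sensitivity to noise."""
    sorted_vals = np.sort(frame.ravel())
    xb = sorted_vals[low_count]
    xa = sorted_vals[-1 - high_count]
    return xa, xb

def decide_correction(xa, xb):
    """Steps S3 and S4: correct only when Xb exceeds Xbt; otherwise Kb = 0, Ka = 1.
    The Ka/Kb formulas below assume the mapping of claim 6 (Xb toward 0,
    Xa toward full scale); the actual determination method may differ."""
    if xb <= XBT:
        return 1.0, 0.0            # Ka, Kb: pass-through
    kb = xb
    ka = FULL_SCALE / max(xa - xb, 1.0)
    return ka, kb

def process_frame(frame, gamma=0.7):
    """Steps S2 to S6 for one frame."""
    frame = np.asarray(frame, dtype=np.float64)
    xa, xb = detect_levels(frame)
    ka, kb = decide_correction(xa, xb)
    sf = np.clip((frame - kb) * ka, 0.0, FULL_SCALE)      # step S5
    return FULL_SCALE * (sf / FULL_SCALE) ** gamma        # step S6
```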
[0076] When it is detected from the maximum level value Xa, the minimum level value Xb, or other information obtained from the histogram that the signal amount has changed extremely (significantly) from the previous frame, it is also possible not to perform correction.

[0077] Assume that the A/D output Sc corresponding to the image signal obtained from the subject is as indicated by reference sign Fa in FIG. 11(B), containing a high-frequency component that oscillates with a peak of approximately 1023 and a bottom of approximately 623. The image signal Fa shown in FIG. 11(B) is obtained when the subject is bright as a whole (when the DC component is large) and there are few low-luminance signals.

In FIG. 11(A), the description uses, as the conversion characteristic of the gradation conversion means 8, the γ characteristic for γ = 0.7.
[0078] If such a signal Fa were gradation-converted as it is, the output of the gradation conversion means 8 would have a smaller high-frequency signal component amplitude than the input, as indicated by reference sign Ga in FIG. 11(C) (here, the amplitudes are compared after normalization, taking the maximum of the range of values that the input and the output of the gradation conversion means 8 can respectively take as 1).

[0079] Therefore, when offset correction is applied so that the signal is shifted toward the low-luminance side, as indicated by the arrow FG, while the difference between the peak value and the bottom value of the high-frequency signal component is maintained, forming the signal Fb, and this signal is then input to the gradation conversion means 8 for gradation conversion, the output of the gradation conversion means 8 becomes as indicated by reference sign Gb in FIG. 11(C), and the amplitude of the high-frequency signal component becomes larger.

[0080] Furthermore, when gain correction is applied in addition to the offset correction, forming the signal Fc, and this signal is input to the gradation conversion means 8 for gradation conversion, the output of the gradation conversion means 8 becomes as indicated by reference sign Gc in FIG. 11(C), and the amplitude of the high-frequency signal component becomes even larger.

In this way, by converting the signal Fa into the signal Fb or Fc with the correction means 6 and then inputting it to the gradation conversion means 8, the high-frequency signal component, in other words the fine pattern in the details of the image, can be reproduced clearly on the output side of the gradation conversion means 8. Therefore, small luminance differences in the details of the subject image, such as human facial features, blood vessel patterns, and fingerprint patterns, can be reproduced clearly.
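A small numerical check of this effect, using the γ = 0.7 curve and the peak and bottom values 1023 and 623 of the example (both axes normalized to 1):

```python
gamma = 0.7
g = lambda v: v ** gamma          # normalized gradation conversion characteristic

# Signal Fa: oscillates between bottom 623 and peak 1023 on a 10-bit scale.
swing_fa = g(1023 / 1023) - g(623 / 1023)   # ~0.29: the 0.39 input swing is compressed

# Signal Fb: the same 400-count swing shifted to the low-luminance side.
swing_fb = g(400 / 1023) - g(0 / 1023)      # ~0.52: the steeper part of the curve expands it
print(swing_fa, swing_fb)
```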
[0081] By performing the processing of the first embodiment, even when the imaging result of the original image is as shown in FIGS. 12(A) and (B), the corrected image signal shown in FIG. 12(C) can be obtained. A description is added here with reference to FIGS. 12(A) to (C). FIG. 12(A) schematically shows the original image of the subject, FIG. 12(B) shows the image signal along the broken line SL in FIG. 12(A) (the signal obtained by reading out the signals in order from the pixels on the broken line SL), and FIG. 12(C) shows the signal obtained by applying the correction according to this embodiment to the signal of FIG. 12(B). As shown in FIG. 12(A), the amount of reflected light (which, depending on the configuration of the optical system, may instead be the amount of scattered light or transmitted light) differs between the ridge portions of the fingerprint and the other portions, so that, as shown in FIG. 12(B), a signal difference (contrast) corresponding to the ridges and valleys of the fingerprint is output as the signal amount. In addition, the center of the finger is lighter in color than the edges of the finger. When such a subject is imaged, a signal in which the signal level increases from the edge of the finger toward the center, with the contrast of the fingerprint ridge-and-valley signal superimposed on it, is output as shown in FIG. 12(B).

By performing the processing of the first embodiment, a signal in which the contrast of the ridges and valleys of the fingerprint is clear, as shown in FIG. 12(C), can be output.

FIGS. 12(A) to (C) are only an example, and the same effect can be achieved for any signal whose contrast is to be enhanced.
[0082] Embodiment 2.

The overall configuration of the imaging apparatus of the second embodiment is as shown in FIG. 1, but it differs in the following points. In the first embodiment, the maximum level value and the minimum level value of the pixel signals of one frame are obtained, the offset correction amount Kb and the gain correction amount Ka for the pixel data of that frame are obtained based on them, and the data of the same frame are corrected. Instead, as shown in FIG. 13, the screen may be divided into a plurality of areas or blocks DV1 to DV12, the maximum level value and the minimum level value may be detected separately for each area, the offset correction amount Kb and the gain correction amount Ka may be determined for each area, and these may be used to correct the pixel data within each area.

[0083] In order for the offset correction means 61, the gain correction means 62, the luminance distribution detection means 11, and the correction amount determination means 12 to realize the processing for each area, the luminance distribution detection means 11 generates a histogram for each area and obtains the maximum level value Sca and the minimum level value Scb, and the correction amount determination means 12 determines the offset correction amount Kb and the gain correction amount Ka for each area. The correction means 6 then corrects the pixel data of the pixels in each area using the offset correction amount Kb and the gain correction amount Ka of that area. In other respects, this embodiment is the same as the first embodiment described with reference to FIG. 1 and the other figures.

[0084] FIG. 14 shows the processing procedure of the second embodiment. FIG. 14 is generally the same as FIG. 10, except that processing for each area (steps S8, S9, S10, and S11) is added.

[0085] In step S8, the area count is initialized (set to 0). In step S9, it is determined whether the last area in the screen (the twelfth area in the example shown in FIG. 13) has been reached. If NO in step S9, the process returns to step S2. If YES in step S9, the process proceeds to step S10. In step S10, the area count is initialized (set to 0). In step S11, it is determined whether the last area in the screen (the twelfth area in the example shown in FIG. 13) has been reached. If NO in step S11, the process returns to step S3. If YES in step S11, the process proceeds to step S6.

[0086] The processes of steps S8 and S9 are performed by the luminance distribution detection means 11, and the processes of steps S10 and S11 are performed by the correction amount determination means 12.

[0087] By dividing the screen into a plurality of areas, the brightness offset of each area can be corrected, and the contrast can be improved optimally for each area.
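A per-area version of the same correction can be sketched as follows; the 3 × 4 block layout corresponds to the areas DV1 to DV12 of FIG. 13, and the threshold and the Ka/Kb formulas reuse the assumptions of the earlier sketch.

```python
import numpy as np

FULL_SCALE = 1023.0

def correct_per_area(frame, rows=3, cols=4, xbt=400.0):
    """Per-area offset/gain correction (second embodiment).
    The frame is split into rows x cols blocks (DV1..DV12 for 3 x 4);
    each block gets its own Kb and Ka before gradation conversion."""
    out = np.asarray(frame, dtype=np.float64).copy()
    h, w = out.shape
    for r in range(rows):
        for c in range(cols):
            block = out[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            xa, xb = block.max(), block.min()
            if xb > xbt:                       # correct only sufficiently bright blocks
                ka = FULL_SCALE / max(xa - xb, 1.0)
                block[...] = np.clip((block - xb) * ka, 0.0, FULL_SCALE)
    return out
```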
[0088] Embodiment 3.

The overall configuration of the imaging apparatus of the third embodiment is as shown in FIG. 1. Unlike the first and second embodiments, however, by performing correction for each area, the imaging signal memory means 5 can be one whose memory capacity is smaller than one frame, for example a shift register of several stages (about 2 to 8 stages), a line memory (a memory for one line of pixel data), or a FIFO.

[0089] The luminance distribution detection means 11 reads, for example from the line memory, the data of the pixels within several pixels before and after in the horizontal direction and several lines before and after in the vertical direction, centered on a pixel at an arbitrary position in the imaging screen of the solid-state imaging device, creates a histogram based on them, and detects the maximum level value Sca and the minimum level value Scb.

In other respects, the third embodiment is the same as the first and second embodiments, and the same effects as those of the first and second embodiments are obtained. In addition, since there is no need to use a frame memory, the processing can be sped up and the cost can be reduced.
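The local-window detection described above can be sketched as follows; the window size (a few pixels horizontally and a few lines vertically around the pixel of interest) is an illustrative assumption.

```python
import numpy as np

def local_levels(frame, half_h=2, half_v=2):
    """For each pixel, find the maximum and minimum level in a small window
    (a few pixels horizontally, a few lines vertically), as could be done
    with a line memory instead of a full frame memory."""
    frame = np.asarray(frame, dtype=np.float64)
    h, w = frame.shape
    xa = np.empty_like(frame)
    xb = np.empty_like(frame)
    for y in range(h):
        y0, y1 = max(0, y - half_v), min(h, y + half_v + 1)
        for x in range(w):
            x0, x1 = max(0, x - half_h), min(w, x + half_h + 1)
            window = frame[y0:y1, x0:x1]
            xa[y, x], xb[y, x] = window.max(), window.min()
    return xa, xb
```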
[0090] FIGS. 15(A) to (G) are timing charts of the third embodiment. In the figures, "PS" shown in FIG. 15(A) denotes the synchronization signal of the basic processing time unit, and luminance distribution detection, determination of the gain correction amount Ka and the offset correction amount Kb, and so on are executed within the processing periods P1, P2, P3, ... delimited by this synchronization signal. The processing period is shorter than one frame period, with the horizontal transfer clock of the solid-state imaging device 1 as its minimum unit.

As the basic processing time unit, a time equal to or longer than the time required to carry out the signal processing of one block is used.

[0091] FIG. 15(B) shows the output timing of the A/D conversion output Sc, FIG. 15(C) shows the read timing of the imaging memory output Sd, FIG. 15(D) shows the timing at which the maximum level Xa, the minimum level Xb, and the correction amounts Ka and Kb are determined, FIG. 15(E) shows the output timing of the correction amounts Ka and Kb, and FIG. 15(F) shows the output timing of the signal Sf from the correction means 6. FIG. 15(G) shows the gradation conversion characteristic Ft that is used.

[0092] In the gradation conversion means 8, the same gradation conversion characteristic (indicated by reference sign Ft(a)) continues to be used.
[0093] In the first processing period P1, as shown in FIG. 15(B), the A/D output Sc1 is written into the imaging signal memory means 5 and is also supplied to the luminance distribution detection means 11. The luminance distribution detection means 11 receives the A/D output Sc1 and generates a histogram, and in the blanking period BL at the end of the first processing period P1, as shown in FIG. 15(D), the maximum level value Xa1 and the minimum level value Xb1 are detected and the result is supplied to the correction amount determination means 12. Based on the supplied maximum level value Xa1 and minimum level value Xb1, the correction amount determination means 12 determines the offset correction amount Kb1 and the gain correction amount Ka1 for the data Sc1 output from the A/D conversion means 3 in the first processing period P1.

[0094] As shown in FIG. 15(E), the determined offset correction amount Kb1 and gain correction amount Ka1 are supplied to the offset correction means 61 and the gain correction means 62 in the next, second processing period P2 (at which time the data Sd1 (FIG. 15(C)), identical to the data Sc1 output from the A/D conversion means 3 in the first processing period P1, is output from the imaging signal memory means 5). As a result of the correction using these correction amounts, the signal Sf shown in FIG. 15(F) is output from the correction means 6.

[0095] In the next, second processing period P2, the memory output Sd1 is read from the imaging signal memory means 5 (FIG. 15(C)), and the A/D output Sc2 (the data of the next processing period) is written into the imaging signal memory means 5 (FIG. 15(B)).

[0096] In the same manner, when the data Sdi (= Xi), having the same content as the data Sci output from the A/D conversion means 3 in the i-th processing period Pi (i = 1, 2, ...), is read from the imaging signal memory means 5 and supplied to the correction means 6, the offset correction amount Kbi and the gain correction amount Kai determined based on the data Sci output from the A/D conversion means 3 in the i-th processing period Pi are supplied to the correction means 6, and correction is performed.
[0097] The content of this correction is expressed as

Yi = (Xi - Kbi) × Kai   (i = 1, 2, ...)   ... (12)

Equation (12) is the same as equation (11), except that i denotes the number of the processing period rather than the frame number.

[0098] Since correction can be realized for each processing period Pi, detection of small luminance differences in the details of the subject image can be realized at high speed.
[0099] Embodiment 4.

FIG. 16 is a block diagram showing the configuration of the fourth embodiment of the present invention. In FIG. 16, the same reference signs as those in FIG. 1 denote the same members. The imaging apparatus of the fourth embodiment is generally the same as the imaging apparatus of the first embodiment, but shows a configuration that does not use the imaging signal memory means 5 of the first embodiment and does not build the image processing means 9, the correction control means 7, or the subject shape recognition means 10 into the imaging apparatus main body. Here, the image processing means 9, the correction control means 7, and the subject shape recognition means 10 are constituted by a signal processing device capable of signal processing, such as a personal computer or a microcomputer.

The imaging apparatus shown in FIG. 16 further differs in that it includes an inverse gradation conversion means 13 and in that a luminance distribution detection means 14 is provided in place of the luminance distribution detection means 11 of FIG. 1. The correction amount determination means 12, the inverse gradation conversion means 13, and the luminance distribution detection means 14 constitute the correction control means 7.

[0100] The inverse gradation conversion means 13 has a gradation conversion characteristic that is the inverse of the gradation conversion characteristic of the gradation conversion means 8. That is, the product of the conversion characteristic of the gradation conversion means 8 and that of the inverse gradation conversion means 13 is linear. Therefore, the output Sh of the inverse gradation conversion means 13 is a signal proportional to the imaging output of the solid-state imaging device 1.

The luminance distribution detection means 14 receives the output Sh of the inverse gradation conversion means 13, rather than the output of the A/D conversion means 3, and performs histogram generation and detection of the maximum level value Sha and the minimum level value Shb. In this case, Sha and Shb are used as the maximum level value Xa and the minimum level value Xb of the description of the first embodiment.

The correction amount determination means 12 calculates the gain correction amount Ka and the offset correction amount Kb based on the maximum level value Sha and the minimum level value Shb detected by the luminance distribution detection means 14.
[0101] Since the signal Sg that has been gradation-converted by the gradation conversion means 8 is returned by the inverse gradation conversion means 13 to a signal Sh having the original gradation characteristic (the characteristic before gradation conversion), the luminance distribution detection means 14 and the correction amount determination means 12 can process the data without taking the gradation conversion characteristic of the gradation conversion means 8 into account.

[0102] In the first embodiment, the luminance distribution detection means 11 controls the correction (luminance distribution detection, that is, histogram generation, detection of the maximum level value and the minimum level value, and determination of Ka and Kb based on them) on the basis of the A/D output, whereas the fourth embodiment differs in that the correction is controlled on the basis of the output Sh of the inverse gradation conversion means 13, which receives the output Sg of the gradation conversion means 8.

[0103] FIG. 17 is a flowchart showing the processing procedure of the fourth embodiment. FIG. 17 is generally the same as FIG. 10, except that step S12, in which inverse gradation conversion is performed, is added. The process of step S12 is performed by the inverse gradation conversion means 13.
[0104] FIGS. 18(A) to (G) are timing charts showing the operation of the imaging apparatus of the fourth embodiment. FIGS. 18(A) to (G) correspond to FIGS. 9(A) to (G). However, FIG. 18(C) shows the output of the inverse gradation conversion means 13. In addition, the correction amounts Ka(i-1) and Kb(i-1) determined based on the pixel data Sh(i-1) of one frame (the (i-1)-th frame) F(i-1) are used to correct the pixel data Sci of the next frame (the i-th frame) Fi.

Accordingly, the content of the correction in the correction means 6 is expressed as

Yi = {Xi - Kb(i-1)} × Ka(i-1)   (i = 1, 2, ...)   ... (13)

[0105] In general, when a moving image is captured, the correlation between the pixel data of two consecutive frames is high. By performing correction using correction amounts determined based on the pixel data of the previous frame, as shown in FIGS. 18(C), (D), and (E), correction of a moving image can be realized. It is also possible to determine the correction amounts Ka(i+1) and Kb(i+1) of the following frame (the (i+1)-th frame) based on both the pixel data Sh(i-1) of one frame (the (i-1)-th frame) F(i-1) and the pixel data Sh(i) of the next frame (the i-th frame) F(i).
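The following sketch combines the inverse gradation conversion with the previous-frame correction of equation (13); the γ = 0.7 forward characteristic and the Ka/Kb derivation are the same assumptions as in the earlier sketches.

```python
import numpy as np

FULL_SCALE = 1023.0
GAMMA = 0.7   # assumed forward characteristic of the gradation conversion means

def inverse_gradation(sg):
    """Inverse gradation conversion means 13: undoes the gamma curve so that
    the output Sh is again proportional to the imaging output."""
    return FULL_SCALE * (sg / FULL_SCALE) ** (1.0 / GAMMA)

def video_loop(frames, xbt=400.0):
    """Equation (13): each frame is corrected with Ka/Kb derived from the
    previous frame's inverse-converted output Sh(i-1)."""
    ka, kb = 1.0, 0.0              # no correction for the very first frame
    outputs = []
    for frame in frames:
        frame = np.asarray(frame, dtype=np.float64)
        sf = np.clip((frame - kb) * ka, 0.0, FULL_SCALE)
        sg = FULL_SCALE * (sf / FULL_SCALE) ** GAMMA      # gradation conversion output Sg
        outputs.append(sg)
        sh = inverse_gradation(sg)                        # back to the linear domain
        xa, xb = sh.max(), sh.min()
        if xb > xbt:                                      # update Ka/Kb for the next frame
            ka, kb = FULL_SCALE / max(xa - xb, 1.0), xb
        else:
            ka, kb = 1.0, 0.0
    return outputs
```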
[0106] As described above, by providing the signal processing functions outside the imaging apparatus main body, the cost and the mounting area of the imaging apparatus can be reduced without using a high-performance microcomputer in the imaging apparatus. The correction accuracy can also be improved by using a personal computer.

[0107] The fourth embodiment can also be configured to divide the screen into a plurality of areas and process them, as described for the second embodiment.
[0108] Embodiment 5.

FIG. 19 is a block diagram showing the configuration of the imaging apparatus according to the fifth embodiment of the present invention. In FIG. 19, the same reference signs as those in FIG. 16 denote the same members. The imaging apparatus of the fifth embodiment is generally the same as the imaging apparatus of the fourth embodiment, but differs in that the inverse gradation conversion means 13 of the fourth embodiment is not used, a gradation conversion means 15 is used in place of the gradation conversion means 8 of the fourth embodiment, a correction amount determination means 16 is used in place of the correction amount determination means 12 of the fourth embodiment, and the luminance distribution detection means 14 receives the output of the gradation conversion means 15, generates its histogram, and detects the maximum level value Sga and the minimum level value Sgb. Furthermore, a conversion characteristic control means 17 that controls the conversion characteristic of the gradation conversion means 15 is provided. The conversion characteristic control means 17 is also controlled (by control means not shown) so as to operate in synchronization with the output from the A/D conversion means 3. In the fifth embodiment, the conversion characteristic control means 17, the luminance distribution detection means 14, and the correction amount determination means 16 constitute the correction control means 7.

[0109] The conversion characteristic control means 17 receives the gradation conversion control signal Kc from the correction amount determination means 16 and a signal indicating the frame-unit timing, and can switch the gradation conversion characteristic of the gradation conversion means 15 between linear conversion and nonlinear conversion on a frame-by-frame basis.

[0110] In other respects, the fifth embodiment is the same as the fourth embodiment, and substantially the same effects as those of the fourth embodiment are obtained.
[0111] FIGS. 20(A) to (G) are timing charts showing the operation of the imaging apparatus of the fifth embodiment. FIGS. 20(A) to (G) correspond to FIGS. 18(A) to (G), but differ in the following points.

[0112] In the first frame period F1, the correction amount determination means 16 instructs the conversion characteristic control means 17 so that the gradation conversion means 15 performs linear conversion. In FIG. 20(G), the gradation conversion characteristic for linear conversion is denoted by Ft(b). When the gradation conversion means 15 performs linear conversion, the output of the gradation conversion means 15 is a signal proportional to the imaging output of the solid-state imaging device 1. Then, the luminance distribution detection means 14 generates a histogram based on the output Sg1 of the gradation conversion means 15 and detects the maximum level value Sga1 and the minimum level value Sgb1, and the correction amount determination means 16 determines the gain correction amount Ka1 and the offset correction amount Kb1 based on the maximum level value Sga1 and the minimum level value Sgb1 from the luminance distribution detection means 14. In this case, Sga1 and Sgb1 are used as the maximum level value Xa and the minimum level value Xb of the description of the first embodiment.

[0113] In the second frame F2 only, or in several frames from the second frame F2 onward (for example, the second frame F2 and the third frame F3), correction is performed using Ka1 and Kb1 determined based on the data of the first frame as described above, and the gradation conversion means 15 performs gradation conversion using the nonlinear gradation conversion characteristic (indicated by reference sign Ft(a)) selected by the conversion characteristic control means 17 based on the output of the correction amount determination means 16.

[0114] As described above, in the fifth embodiment, the gain correction amount Ka1 and the offset correction amount Kb1 determined based on the data of a certain frame (for example, F1) are used to correct the imaging data of the next frame (F2) only, or of several frames from the next frame onward (for example, F2 and F3).

[0115] FIG. 21 is a flowchart showing the processing procedure of the fifth embodiment. FIG. 21 is generally the same as FIG. 10, except that steps S14 and S15 are added. In step S14, the linear gradation conversion characteristic Ft(b) is set as the gradation conversion characteristic. In step S15, the nonlinear gradation conversion characteristic Ft(a) is set as the gradation conversion characteristic. The processes of steps S14 and S15 are performed by the conversion characteristic control means 17.

[0116] By performing this processing, when a still image is captured, once the offset correction amount Kb1 and the gain correction amount Ka1 have been determined based on the data of a certain frame, correction can thereafter be performed over several frames using that offset correction amount Kb1 and gain correction amount Ka1 without providing the inverse gradation conversion means 13 of the fourth embodiment. This eliminates the need to calculate the gain correction amount Ka and the offset correction amount Kb for every frame, and the time required for imaging can be shortened.
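A sketch of this frame sequencing is shown below; the linear characteristic Ft(b) is modeled as a pass-through, the nonlinear characteristic Ft(a) as the assumed γ = 0.7 curve, and the Ka/Kb derivation follows the same assumption as before.

```python
import numpy as np

FULL_SCALE = 1023.0
GAMMA = 0.7   # assumed nonlinear characteristic Ft(a)

def capture_still(frames, n_hold=3, xbt=400.0):
    """Fifth-embodiment sketch: frame 1 passes through the linear characteristic
    Ft(b), so Ka1/Kb1 can be measured directly on the gradation output; the
    following n_hold frames are corrected with those fixed amounts and
    converted with the nonlinear characteristic Ft(a)."""
    first = np.asarray(frames[0], dtype=np.float64)        # Ft(b): identity
    xa, xb = first.max(), first.min()
    if xb > xbt:
        ka, kb = FULL_SCALE / max(xa - xb, 1.0), xb
    else:
        ka, kb = 1.0, 0.0
    results = []
    for frame in frames[1:1 + n_hold]:
        frame = np.asarray(frame, dtype=np.float64)
        sf = np.clip((frame - kb) * ka, 0.0, FULL_SCALE)
        results.append(FULL_SCALE * (sf / FULL_SCALE) ** GAMMA)   # Ft(a)
    return results
```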
[0117] Embodiment 6.

In the first to fifth embodiments described above, the luminance distribution detection means (11, 14) generates a histogram and obtains the maximum level value Xa and the minimum level value Xb. However, changes in the contrast of the subject itself (for example, the ridges in fingerprint detection, or the eyes, ears, and nose in face detection) occur at higher frequencies than the frequency components of contrast changes of the illumination light, so the local maximum level value and minimum level value within one frame can also be obtained by detecting, for example, the low-frequency component of the A/D output Sc and the amplitude of its high-frequency component.

[0118] FIG. 22 shows an example of such a luminance distribution detection means 18. FIGS. 23(A) and (B) are waveform diagrams showing the operation of the luminance distribution detection means 18 of FIG. 22. Although the data to be processed are digital signals, they are represented in FIGS. 23(A) and (B) by continuous curves connecting the digital signal values.

The case where the luminance distribution detection means 18 of FIG. 22 is used in place of the luminance distribution detection means 11 of FIG. 1 is described below, but the luminance distribution detection means 18 of FIG. 22 can also be used in place of the luminance distribution detection means 14 of FIG. 16 or FIG. 19.
[0119] The luminance distribution detection means 18 shown in FIG. 22 includes a low-frequency component extraction means 19, a high-frequency component extraction means 20, an amplitude detection means 21, a subtractor 22, and an adder 23.

[0120] The low-frequency component extraction means 19 receives the A/D output (shown by the solid line in FIG. 23(A)) and extracts its low-frequency component (shown by the dotted line LS in FIG. 23(A)). This low-frequency component can also be regarded as representing the local average of the A/D output Sc. As the low-frequency component extraction means 19, a median filter or an epsilon filter can be used in addition to an ordinary digital LPF. Here, the local average means an average over several pixels (for example, 3 horizontal pixels × 3 vertical pixels) to several hundred pixels (for example, 30 horizontal pixels × 30 vertical pixels).

[0121] By adjusting the cutoff frequency and the frequency response of the low-frequency component extraction means 19, it is possible to detect luminance changes in local regions or global or overall luminance changes of the screen. Luminance changes caused by illumination light can be detected as global or overall luminance changes. By adjusting the cutoff frequency of the low-frequency component extraction means 19, the size (area) of the local region can be adjusted, and luminance changes corresponding to the local region size can be detected. That is, luminance changes corresponding to the situation can be detected, for example when illumination light enters the screen as spot light, as from a street lamp at night, or when it falls on the entire screen as uniform illumination, as from an indoor fluorescent lamp.

[0122] The cutoff frequency and the frequency response of the low-frequency component extraction means 19 are set to a frequency (pixels × pixel clock) at which luminance changes corresponding to several thousand pixels (for example, 50 horizontal pixels × 50 vertical pixels) to several tens of thousands of pixels (for example, 200 horizontal pixels × 200 vertical pixels) can be detected.
[0123] The high-frequency component extraction means 20 receives the A/D output Sc (or the output of the inverse gradation conversion means) and extracts its high-frequency component (FIG. 23(B)). As the high-frequency component extraction means 20, an ordinary digital HPF can be used.

[0124] The amplitude detection means 21 detects the maximum amplitude AM of the output HS of the high-frequency component extraction means 20 in a local region (1/2 of the difference between the peak level value (LP) and the bottom level value (LB) of the oscillation). In this case, for example, 1/2 of the difference between the peak level value and the bottom level value is taken as the maximum amplitude AM.

[0125] The highest value and the lowest value within the local region may be used as the peak level value and the bottom level value; instead, however, the n-th highest value (where n is a natural number) may be used as the peak level value and the n-th lowest value as the bottom level value. For this purpose, for example, the amplitude detection means 21 may include, as shown in FIG. 24, a histogram generation unit 211, a peak level value detection unit 212, a bottom level value detection unit 213, and a computation unit 215; it may generate a histogram of those data representing the high-frequency component HS extracted by the high-frequency component extraction means 20 that lie within the above-mentioned local region, obtain the peak level value LP and the bottom level value LB in the same manner as the method described with reference to FIG. 4 (in the same manner as the maximum level value and the minimum level value were obtained in the description of FIG. 4), obtain in the computation unit 215 the value 1/2 of the difference between the peak level value LP and the bottom level value LB thus obtained, and take the result of the computation as the maximum amplitude AM. Of the above, the peak level value detection unit 212 and the bottom level value detection unit 213 constitute a peak/bottom level value detection means 214.
[0126] The subtractor 22 subtracts the output AM of the amplitude detection means 21 from the output LS of the low-frequency component extraction means 19. The subtraction result is used as the minimum level value Xb.

The adder 23 adds the output LS of the low-frequency component extraction means 19 and the output AM of the amplitude detection means 21. The addition result is used as the maximum level value Xa.
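The detection chain of FIG. 22 can be sketched on a one-dimensional signal as follows; the box filter standing in for the digital LPF and the window sizes are illustrative assumptions.

```python
import numpy as np

def detect_levels_emb6(sc, lpf_size=31, amp_size=15):
    """Luminance distribution detection means 18, sketched on a 1-D signal:
    LS = low-frequency component (box filter stands in for the digital LPF),
    HS = high-frequency component, AM = local maximum amplitude of HS,
    Xa = LS + AM, Xb = LS - AM."""
    sc = np.asarray(sc, dtype=np.float64)
    kernel = np.ones(lpf_size) / lpf_size
    ls = np.convolve(sc, kernel, mode="same")        # low-frequency component LS
    hs = sc - ls                                     # high-frequency component HS
    half = amp_size // 2
    am = np.empty_like(sc)
    for i in range(sc.size):
        window = hs[max(0, i - half):i + half + 1]
        am[i] = (window.max() - window.min()) / 2.0  # (peak LP - bottom LB) / 2
    xa = ls + am                                     # maximum level value Xa
    xb = ls - am                                     # minimum level value Xb
    return xa, xb
```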
[0127] Since the correction amount determination means 12 includes a horizontal counter and a vertical counter, the timing at which the imaging output is supplied to the correction means 6 via the imaging signal memory 5 can be associated with the timing at which the offset correction amount Kb and the gain correction amount Ka are generated in the correction control means 7.
[0128] Thus, by using the luminance distribution detection means 18 of the sixth embodiment, local changes in the average luminance can be corrected, and the continuity of the contrast detection results between regions can be improved without having to specify the regions in advance (that is, large differences in the contrast detection results between adjacent regions can be avoided).

[0129] For example, as shown in FIG. 25, when a bright region A1 and a dark region A2 exist within the screen, the signal of the bright region A1, which is for example as indicated by reference sign Fa in FIG. 11(B), is shifted toward the low-luminance side by the offset correction amount Kb and converted into a signal like Fb, or in addition its amplitude is expanded by the gain correction amount Ka and it is converted into a signal like Fc; gradation conversion is then performed, so that the high-frequency component is reproduced with sufficient contrast. The signal of the dark region A2 is gradation-converted as it is, without offset correction (or with only gain correction, if necessary), and is thereby reproduced with sufficient contrast.

[0130] As shown in FIG. 25, a dark region A2 may exist within part of the bright region A1 because, for example, the illumination is non-uniform, part of the subject is shadowed and receives no illumination, or part of the subject is dirty. Even in such cases, fine luminance change patterns can be reliably reproduced.
[0131] By performing the processing of the sixth embodiment, even when the imaging result of the original image is as shown in FIGS. 26(A) and (B), the corrected image signal shown in FIG. 26(C) can be obtained. Here, FIGS. 26(A) and (B) are the same as FIGS. 12(A) and (B) of the first embodiment. FIG. 26(C) shows the signal obtained by applying the correction according to this embodiment to the signal of FIG. 26(B).

By performing the processing of the sixth embodiment, the low-frequency luminance component that changes from a dark color to a light color from the edge of the finger toward the center can be removed, and a signal in which the contrast of the ridges and valleys of the finger is clear, as shown in FIG. 26(C), can be output.

FIGS. 26(A) to (C) are only an example, and the same effect can be achieved for any signal whose contrast is to be enhanced.
[0132] The correction means, correction control means, gradation conversion means, and image processing means of the above embodiments can be realized, at least in part, by software, that is, by a programmed computer. In addition, although the correction means, correction control means, gradation conversion means, and image processing means of the embodiments of the present invention have been described above, the methods of correction, correction control, gradation conversion, and image processing made clear by the description of these apparatuses also form part of the present invention.
Industrial Applicability

[0133] The present invention can be applied, for example, to an imaging apparatus having gradation correction means suitable for detecting small changes in luminance in the details of a subject, such as characters on a document, the features of a human face, blood vessel patterns, and fingerprint patterns.

Claims

[1] An imaging apparatus comprising:

a solid-state imaging device;

luminance distribution detection means for detecting, from an imaging signal proportional to the imaging output obtained from the solid-state imaging device, a luminance distribution and its maximum level and minimum level;

correction means for performing at least one of offset correction and gain correction on the imaging signal; and

correction amount determination means for controlling the correction amount, based on the maximum level and the minimum level of the luminance distribution detected by the luminance distribution detection means, so as to expand the range of change of the corrected imaging signal output from the correction means.
[2] An imaging apparatus comprising:

a solid-state imaging device;

subject shape recognition means for recognizing the shape of a subject on the screen from a feature amount of an imaging signal proportional to the imaging output obtained from the solid-state imaging device, and outputting a subject region signal indicating the subject region;

luminance distribution detection means for detecting the luminance distribution within the subject region based on the subject region signal output from the subject shape recognition means;

correction means for performing at least one of offset correction and gain correction on the imaging signal; and

correction amount determination means for controlling the correction amount, based on the maximum level and the minimum level of the luminance distribution within the subject region detected by the luminance distribution detection means, so as to expand the range of change of the corrected imaging signal within the subject region output from the correction means.
[3] The imaging apparatus according to claim 1, further comprising gradation conversion means for performing gradation conversion on the output of the correction means with a nonlinear conversion characteristic.

[4] The imaging apparatus according to claim 2, wherein the luminance distribution detection means outputs, based on the output of the subject shape recognition means, a signal that causes the correction amount determination means to operate for a pixel region recognized as the subject, and outputs a signal that deactivates the correction amount determination means for a pixel region not recognized in advance as the subject.
[5] The imaging apparatus according to claim 3, wherein the nonlinear conversion characteristic of the gradation conversion means is such that, when the input of the gradation conversion means is in a first range, the change in the output with respect to a change in the input is relatively small, and when the input of the gradation conversion means is in a second range, the change in the output with respect to a change in the input is relatively large.

[6] The imaging apparatus according to claim 3, wherein, based on the output of the luminance distribution detection means, the correction amount determination means determines an offset correction amount and a gain correction amount for converting the maximum level value and the minimum level value, within a predetermined range, of the value of the signal proportional to the imaging output of the solid-state imaging device into values close to the maximum value and the minimum value of the range of values that the input of the gradation conversion means can take, and for converting the values other than the maximum level value and the minimum level value within the predetermined range into values allocated evenly between the maximum value and the minimum value of the range of values that the input of the gradation conversion means can take.
[7] The imaging apparatus according to claim 3, further comprising inverse gradation conversion means for converting the output of the gradation conversion means with a conversion characteristic that is the inverse of the conversion characteristic of the gradation conversion means, wherein the luminance distribution detection means detects the luminance distribution using the output of the inverse gradation conversion means as the signal proportional to the imaging output of the solid-state imaging device.

[8] The imaging apparatus according to claim 1, wherein the luminance distribution detection means outputs a signal that causes the correction amount determination means to operate when the number of pixels, out of the number of pixels of one screen, whose signal level proportional to the output of the solid-state imaging device falls within a first range is equal to or greater than a predetermined set value, and outputs a signal that deactivates the correction amount determination means when that number of pixels is less than the predetermined set value.
[9] The imaging apparatus according to claim 1, wherein the luminance distribution detection means includes histogram generation means for detecting, for each luminance level, the frequency of occurrence of the signal proportional to the imaging output, and maximum/minimum level value detection means for obtaining, from the result detected by the histogram generation means, the maximum level value and the minimum level value of the luminance levels within a predetermined range of the screen.

[10] The imaging apparatus according to claim 1, wherein the signal proportional to the imaging output contains components representing the colors of the image, and the luminance distribution detection means detects the luminance distribution based on the luminance component of the signal proportional to the imaging output or on a signal representing its green component.
[11] The imaging apparatus according to claim 1, wherein the gradation conversion means performs gradation conversion with the nonlinear conversion characteristic in a first state and performs gradation conversion with a linear conversion characteristic in a second state, and the luminance distribution detection means detects the luminance distribution using the output of the gradation conversion means in the second state as the signal proportional to the imaging output.
An imaging method comprising:

a correction step of receiving a signal proportional to the imaging output of a solid-state imaging device and performing at least one of offset correction and gain correction;

a correction control step of controlling the correction performed in the correction step; and

a gradation conversion step of gradation-converting, with a nonlinear conversion characteristic, the corrected output obtained as a result of the correction in the correction step,

wherein the nonlinear conversion characteristic of the gradation conversion step is such that, when the signal supplied as an input to the gradation conversion step is in a first range, the change in the output with respect to a change in the input is relatively small, and when the signal supplied as an input to the gradation conversion step is in a second range, the change in the output with respect to a change in the input is relatively large,

the correction control step comprises:

a luminance distribution detection step of detecting a luminance distribution from the signal proportional to the imaging output of the solid-state imaging device; and

a correction amount determination step of determining, based on the result of the detection in the luminance distribution detection step, a correction amount for performing offset correction on the signal proportional to the imaging output of the solid-state imaging device, when that signal contains many components that would fall within the first range if it were supplied to the gradation conversion step as it is, so that more components fall within the second range, and

the correction step corrects the signal proportional to the imaging output of the solid-state imaging device using the correction amount determined in the correction amount determination step.
PCT/JP2006/308713 2005-06-22 2006-04-26 Imaging device and gradation converting method for imaging method WO2006137216A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006519004A JP4279313B2 (en) 2005-06-22 2006-04-26 Imaging apparatus and gradation conversion method in imaging apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005181842 2005-06-22
JP2005-181842 2005-06-22

Publications (1)

Publication Number Publication Date
WO2006137216A1 true WO2006137216A1 (en) 2006-12-28

Family

ID=37570252

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/308713 WO2006137216A1 (en) 2005-06-22 2006-04-26 Imaging device and gradation converting method for imaging method

Country Status (2)

Country Link
JP (1) JP4279313B2 (en)
WO (1) WO2006137216A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10607047B2 (en) * 2017-12-06 2020-03-31 Cognex Corporation Local tone mapping for symbol reading

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04340875A (en) * 1991-05-17 1992-11-27 Mitsubishi Electric Corp Image pickup device
JPH09149317A (en) * 1995-11-21 1997-06-06 Matsushita Electric Ind Co Ltd Image pickup device
JP2002077741A (en) * 2000-08-28 2002-03-15 Matsushita Electric Works Ltd Image sensor and its signal processing method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008032517A1 (en) * 2006-09-14 2008-03-20 Mitsubishi Electric Corporation Image processing device and image processing method, and image pickup device and image pickup method
US8154628B2 (en) 2006-09-14 2012-04-10 Mitsubishi Electric Corporation Image processing apparatus and imaging apparatus and method
JP2009064398A (en) * 2007-09-10 2009-03-26 Olympus Corp Cell analysis method, apparatus and program
JP2009117937A (en) * 2007-11-02 2009-05-28 Nikon Corp Image processing method, image processor, imaging apparatus, display device, and program
US8564862B2 (en) 2008-11-27 2013-10-22 Sony Corporation Apparatus, method and program for reducing deterioration of processing performance when graduation correction processing and noise reduction processing are performed
JP2010130299A (en) * 2008-11-27 2010-06-10 Sony Corp Image signal processing apparatus and method, and program
JP2010141454A (en) * 2008-12-10 2010-06-24 Sanyo Electric Co Ltd Image processing apparatus
JP2011146881A (en) * 2010-01-14 2011-07-28 Hitachi Consumer Electronics Co Ltd Image signal processor
JP2011248413A (en) * 2010-05-21 2011-12-08 Panasonic Electric Works Sunx Co Ltd Image processing apparatus
JP2016224983A (en) * 2013-01-25 2016-12-28 Dolby Laboratories Licensing Corporation Global display management based light modulation
JP2014192828A (en) * 2013-03-28 2014-10-06 Fujitsu Ltd Image correction device, image correction method, and biometric authentication device
US9454693B2 (en) 2013-03-28 2016-09-27 Fujitsu Limited Image correction apparatus, image correction method, and biometric authentication apparatus
WO2018003245A1 (en) * 2016-06-27 2018-01-04 Sony Semiconductor Solutions Corporation Signal processing device, imaging device, and signal processing method
US10873712B2 (en) 2016-06-27 2020-12-22 Sony Semiconductor Solutions Corporation Signal processing device, imaging device, and signal processing method
JP2018061233A (en) * 2016-10-04 2018-04-12 Canon Inc Image processing system, image processing method, and program

Also Published As

Publication number Publication date
JP4279313B2 (en) 2009-06-17
JPWO2006137216A1 (en) 2009-01-08

Similar Documents

Publication Publication Date Title
JP4279313B2 (en) Imaging apparatus and gradation conversion method in imaging apparatus
CN107005639B (en) Image pickup apparatus, image pickup method, and image processing apparatus
US8154628B2 (en) Image processing apparatus and imaging apparatus and method
US8711255B2 (en) Visual processing apparatus and visual processing method
KR100776134B1 (en) Image sensor and method for controlling distribution of image brightness
US8305470B2 (en) Imaging device, setting-value changing method, and computer program product
WO2016199573A1 (en) Image processing device, image processing method, program, and image capture device
JP2009303010A (en) Imaging apparatus and imaging method
JP3134784B2 (en) Image synthesis circuit
JP2007082181A (en) Imaging apparatus and image processing method
JP2008072450A (en) Image processor and image processing method
KR20070026571A (en) Image processing device, method, and program
JP4679174B2 (en) Image processing apparatus and digital camera equipped with the image processing apparatus
JP2000152033A (en) Image processor and image processing method
JPH08107519A (en) Image pickup device
JP4024284B1 (en) Imaging apparatus and imaging method
JP5142833B2 (en) Image processing apparatus and image processing method
JP3092397B2 (en) Imaging device
JP3201049B2 (en) Gradation correction circuit and imaging device
JP3748031B2 (en) Video signal processing apparatus and video signal processing method
JPH09149317A (en) Image pickup device
JP2006279812A (en) Brightness signal processor
JP2009081526A (en) Imaging apparatus
JP4550090B2 (en) Image processing apparatus and image processing method
JP4443444B2 (en) Imaging device

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2006519004

Country of ref document: JP

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06745691

Country of ref document: EP

Kind code of ref document: A1