EP1793620A1 - Image processing device and method, imaging apparatus, and computer program - Google Patents

Image processing device and method, imaging apparatus, and computer program

Info

Publication number
EP1793620A1
Authority
EP
European Patent Office
Prior art keywords
correlation
pixel
interpolation
calculating
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP06767025A
Other languages
English (en)
French (fr)
Other versions
EP1793620A4 (de)
Inventor
Takuya Chiba (Sony Corporation)
Ryota Kosakai (Sony Corporation)
Akira Matsui (Sony Corporation)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of EP1793620A1
Publication of EP1793620A4

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N 23/12 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/843 Demosaicing, e.g. interpolating colour pixel values
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N 25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N 25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N 25/134 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements

Definitions

  • the present invention relates to an image processing device and an image processing method for processing an output signal of a solid-state imaging device that has filters of predetermined color coding (color filters), an imaging apparatus that uses the image processing device or the image processing method, and a computer program.
  • Cameras have a long history as means for recording visual information. Recently, instead of silver salt cameras that perform photographing using a film and a photosensitive plate, digital cameras that digitize images captured by solid-state imaging devices such as a CCD (Charge Coupled Device) and a CMOS (Complementary Metal-Oxide Semiconductor) have come into widespread use.
  • An image sensor that uses a solid-state imaging device is constituted by a mechanism in which respective pixels (photodiodes) arranged two-dimensionally convert light into electric charges making use of a photoelectric effect.
  • Color filters of the three colors R (red), G (green), and B (blue) are provided on a light-receiving surface thereof to separate incident light into the three primary colors visible to human eyes.
  • signal charges accumulated in association with amounts of incident light through the color filters are read. This makes it possible to obtain RGB image signals visible by human eyes.
  • the RGB image signals are color-space-converted into YUV image signals and used for display output and image recording.
  • YUV represents colors with a luminance signal Y and two chromaticities: a color difference U of blue and a color difference V of red.
  • YUV conversion makes it easy to perform data compression making use of a human visual sensitivity characteristic that resolution for luminance is high and resolution for color is low.
  • Color space conversion from RGB to YUV is applied according to the following three expressions in the NTSC (National Television Standards Committee) standard.
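The three expressions are not reproduced in this excerpt. Assuming they follow the standard NTSC definitions (Expression (1) for the luminance signal and Expressions (2) and (3) for the color differences Cr(R-Y) and Cb(B-Y) referenced later in the text), they take approximately the form

$$Y = 0.299R + 0.587G + 0.114B \quad (1)$$
$$Cr = R - Y \quad (2)$$
$$Cb = B - Y \quad (3)$$

where any scaling factors applied to Cr and Cb in the original are omitted.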
  • A 3CCD imaging apparatus obtains high-resolution RGB signals by arranging primary color filters of R (red), G (green), and B (blue) in the same spatial phase with respect to three solid-state imaging devices and thereby realizes high image quality.
  • the 3CCD imaging apparatus needs to use the three solid-state imaging devices and use a prism for separating incident light into each of colors of R, G, and B. This makes it difficult to realize a reduction in size and a reduction in cost of the apparatus.
  • G is a main component in generating a luminance signal and resolution of the luminance signal substantially depends on resolution of G.
  • resolution of an image signal is proportional to a sampling rate 1/fs of pixels (fs is a sampling frequency of the pixels)
  • In the Bayer arrangement, it is possible to obtain a characteristic that the resolution of G is higher than the resolution of R and B by improving the accuracy of the color filter arrangement interpolation of G.
  • Human beings have a visual sensitivity characteristic that resolution for luminance is high and resolution for color is low.
  • the Bayer arrangement is color coding that successfully utilizes the human visual sensitivity characteristic.
  • an image processing device that, in applying interpolation processing to RGB image signals subjected to color coding in the Bayer arrangement, calculates amounts of change in eight pixels near a pixel that should be interpolated, that is, eight pixels in total above and below, on the right and the left, and at the upper right, the lower right, the upper left, and the lower left of a pixel of attention, weights the amounts of change calculated to calculate correlation values, determines interpolation coefficients on the basis of the correlation values calculated, multiplies interpolation data by the interpolation coefficients, respectively, and, then, adds up the interpolation data multiplied by the interpolation coefficients (See, for example, Patent Document 1).
  • According to this image processing device, it is possible to satisfactorily judge a degree of correlation even for an edge in a direction not orthogonal to the calculation direction of the correlation values, by performing the interpolation processing on the basis of the pixel information of the four pixels in accordance with the interpolation coefficients.
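For illustration only, the following Python sketch shows the general shape of such neighborhood-weighted interpolation. It is not the algorithm of Patent Document 1 itself; the change metric, the weighting of the change amounts, and the normalization of the interpolation coefficients are all assumptions.

```python
def interpolate_from_eight_neighbours(up, down, left, right, ul, ur, ll, lr):
    """Sketch of correlation-weighted interpolation from eight neighbours."""
    # Amounts of change across the pixel of attention along four axes.
    changes = {
        "vertical":   abs(up - down),
        "horizontal": abs(left - right),
        "diag_45":    abs(ur - ll),
        "diag_135":   abs(ul - lr),
    }
    # Interpolation data: the mean of the two pixels on each axis.
    data = {
        "vertical":   (up + down) / 2.0,
        "horizontal": (left + right) / 2.0,
        "diag_45":    (ur + ll) / 2.0,
        "diag_135":   (ul + lr) / 2.0,
    }
    # A small change amount suggests a strong correlation along that axis,
    # so weight each axis by the inverse of its change amount (assumed rule).
    eps = 1e-6
    weights = {k: 1.0 / (v + eps) for k, v in changes.items()}
    total = sum(weights.values())
    coeffs = {k: w / total for k, w in weights.items()}
    # Multiply the interpolation data by the coefficients and add them up.
    return sum(coeffs[k] * data[k] for k in data)
```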
  • G is a main component in generating a luminance signal, and luminance resolution substantially depends on the resolution of G.
  • Resolution of a color difference signal substantially depends on the resolution of R and B.
  • a point in generating an image having high resolution is to increase the resolution of G.
  • Human eyes have a characteristic that it is possible to recognize a high frequency concerning luminance but it is difficult to recognize a high frequency concerning a color.
  • Thus, in the Bayer arrangement, a balance of color resolution and luminance resolution does not sufficiently match the human visual sensitivity characteristic.
  • The applicant proposes color coding in which luminance resolution is increased to twice that of the Bayer arrangement, although the color resolution is slightly sacrificed, by arranging G components serving as main components in generating a luminance component so as to surround the respective R and B components and by increasing G while reducing the number of R and B pixels to half that of the Bayer arrangement (see, for example, the specification of Japanese patent application No. 2005-107037 ).
  • Such color coding matches the human visual sensitivity characteristic better than the color coding of the Bayer arrangement.
  • more advanced interpolation processing is necessary in order to realize resolution higher than that in the case of the color coding of the Bayer arrangement.
  • a first aspect of the invention is an image processing device that processes an image imaged by imaging means that has color filters for color coding, characterized by including:
  • the image processing device applies color filter arrangement interpolation to an image signal to which color coding is applied.
  • the color coding uses color filters that have color components serving as main components in calculating a luminance component arranged to surround each of other color components with respect to, for example, a pixel arrangement in which respective pixels are arranged in a square lattice shape at equal intervals in the horizontal direction and the vertical direction.
  • In color filters for color coding, color components serving as main components in calculating a luminance component may also be arranged to surround each of the other color components with respect to an oblique pixel arrangement in which the respective pixels are shifted by 1/2 of a pixel pitch for each row or for each column.
  • the correlation-value calculating means calculates, for pixel signals around the pixel, correlation values indicating intensity of a correlation of G among pixels at least in two or more directions.
  • the filter in this context is preferably a band-pass filter that removes a DC component. It is possible to use a high-pass filter such as a differential filter as the filter.
  • a correlation value S(HV) in the horizontal and vertical directions HV is represented as BPF(H)/{BPF(H)+BPF(V)}.
  • S(HV) takes a value from 0 to 1.
  • When S(HV) is close to 1, there is waviness of an image signal in the vertical direction, which indicates that a correlation among pixels is high in the vertical direction.
  • the correlation-value calculating means can calculate the correlation value S(HV) in the horizontal and vertical directions as a first correlation value.
  • When the correlation value S(HV) takes a value near 0.5, it is unclear whether a correlation among pixels in a direction of +45 degrees with respect to a vertical axis is high or a correlation among pixels in a direction of -45 degrees with respect to the vertical axis is high. Thus, it is impossible to specify a pixel for interpolation.
  • the correlation-value calculating means further calculates a second correlation value in a directional characteristic different from that of the first correlation value and compensates for an intermediate value taken by the first correlation value to improve resolution for specifying a direction in which a pixel is interpolated in the interpolation-pixel specifying means.
  • the correlation-value calculating means calculates the second correlation value on the basis of outputs of these filters.
  • When the correlation value S(HV) in the horizontal and vertical directions HV is calculated as the first correlation value as described above, it is possible to use, as the second correlation value, a correlation value S(NH/NV) in the non-horizontal and non-vertical directions NH/NV, calculated on the basis of a band-pass filter output BPF(NH) in the non-horizontal direction NH and a band-pass filter output BPF(NV) in the non-vertical direction NV orthogonal to NH.
  • S(NH/NV) is represented as BPF(NH)/{BPF(NH)+BPF(NV)}.
  • the first correlation value and the second correlation value can compensate for each other when one of the correlation values is a correlation value near 0.5.
  • resolution for specifying an interpolation direction is maximized.
  • the interpolation-pixel specifying means can judge a direction of correlation over all directions (360°) by comparing plural correlation values calculated by the correlation-value calculating means and specify, in a finer granularity, a direction in which interpolation should be performed.
  • the interpolating means only has to apply interpolation processing to a pixel to be interpolated on the basis of information on pixels around a pixel present in the direction specified.
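As an illustration of how two correlation values with directional characteristics offset by 45 degrees can be combined, consider the following sketch. The function names, the epsilon guard, and the tie-breaking rule for choosing a direction are assumptions; only the two ratio definitions come from the description above.

```python
def correlation_values(bpf_h, bpf_v, bpf_nh, bpf_nv, eps=1e-6):
    """Compute S(HV) and S(NH/NV) from band-pass filter output magnitudes."""
    s_hv = abs(bpf_h) / (abs(bpf_h) + abs(bpf_v) + eps)        # S(HV)
    s_nhnv = abs(bpf_nh) / (abs(bpf_nh) + abs(bpf_nv) + eps)   # S(NH/NV)
    return s_hv, s_nhnv


def pick_direction(s_hv, s_nhnv):
    """Assumed decision rule: trust whichever value lies farther from the
    ambiguous midpoint 0.5; S(HV) near 1 is read as a strong vertical
    correlation (per the text), and S(NH/NV) is read analogously."""
    if abs(s_hv - 0.5) >= abs(s_nhnv - 0.5):
        return "V" if s_hv > 0.5 else "H"
    return "NV" if s_nhnv > 0.5 else "NH"
```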
  • a second aspect of the invention is an image processing device that processes an image imaged by imaging means that have color filters for color coding, the image processing device characterized by including:
  • According to the first aspect of the invention, it is possible to calculate two or more correlation values having different directional characteristics concerning pixel values around a pixel to be interpolated, judge a direction of correlation over all directions (360°) by comparing these plural correlation values, and specify, in a finer granularity, a direction in which interpolation should be performed.
  • Such a method of specifying the direction to be interpolated is performed on condition that the correlation values calculated for the respective directional characteristics are reliable.
  • the image processing device calculates reliability concerning the first correlation value and the second correlation value and interpolates a desired pixel signal using an interpolation processing method corresponding to the reliability. For example, it is possible to calculate reliability on the basis of a sum of respective correlation values calculated by the correlation value calculating means.
  • the image processing device judges a direction of correlation over all directions (360 degrees) by comparing the plural correlation values with a correlation line to thereby determine a direction in which interpolation should be performed and perform pixel interpolation with higher accuracy. As a result, it is possible to obtain a luminance signal having high resolution on the basis of a G signal having high resolution.
  • Otherwise, the image processing device performs interpolation using an average of information on pixels around a pixel to be interpolated. In this case, the accuracy of the interpolated pixel is limited and it is impossible to obtain a luminance signal having high resolution. On the other hand, it is possible to improve the S/N ratio according to the averaging processing.
  • a third aspect of the invention is a computer program described in a computer-readable format to execute, on a computer, processing for an image imaged by imaging means that has color filters for color coding, the computer program characterized by causing the computer to execute:
  • a fourth aspect of the invention is a computer program described in a computer-readable format to execute, on a computer, processing for an image imaged by imaging means that has color filters for color coding, the computer program characterized by causing the computer to execute:
  • the computer programs according to the third and the fourth aspects of the invention define computer programs described in a computer-readable format to realize predetermined processing on a computer.
  • By installing these computer programs on a computer, cooperative actions are exhibited on the computer, and it is possible to obtain operational effects the same as those of the image processing devices according to the first and the second aspects of the invention, respectively.
  • According to the invention, in particular, it is possible to highly accurately apply color filter arrangement interpolation of main components for calculating a luminance component to an image signal subjected to color coding that uses color filters in which color components serving as main components in calculating the luminance component are arranged to surround each of the other color components, in a pixel arrangement in which the pixels are arranged in a square lattice shape at equal intervals in the horizontal direction and the vertical direction.
  • the image processing device can judge a direction of correlation over all directions (360 degrees) with respect to a pixel to be interpolated. Thus, it is possible to perform appropriate interpolation processing on the basis of the direction judged.
  • An example of a structure of an imaging apparatus that uses an image processing device or an image processing method according to the invention is shown in Fig. 1.
  • the imaging apparatus in this context includes camera modules including a solid-state imaging device serving as an imaging device, an optical system that focuses image light of a subject on an imaging surface (a light-receiving surface) of the solid-state imaging device, and a signal processor for the solid-state imaging device, camera apparatuses such as a digital still camera and a video camera mounted with the camera modules, and electronic apparatuses such as a cellular phone.
  • Image light from a subject is focused on an imaging surface of an imaging device 12 by an optical system, for example, an imaging lens 11.
  • a solid-state imaging device in which a large number of pixels including photoelectric conversion elements are two-dimensionally arranged in a matrix shape and color filters including color components serving as main components in creating a luminance component and other color components are arranged on the surface of the pixels, is used as the imaging device 12.
  • the solid-state imaging device that has the color filters may be any one of a charge transfer solid-state imaging device represented by a CCD (Charge Coupled Device), an X-Y addressable solid-state imaging device represented by a MOS (Metal Oxide Semiconductor), and the like.
  • color components serving as main components in creating a luminance (Y) component are explained with green (G) as an example and other color components are explained with red (R) and blue (B) as examples.
  • the gist of the invention is not limited to a combination of these color components. It is also possible to use, for example, white, cyan, or yellow as the color components serving as the main components in creating the Y component and use, for example, magenta, cyan, or yellow as the other color components.
  • the light made incident on the respective pixels is photoelectrically converted by a photoelectric conversion element such as a photodiode.
  • The signals are read out as analog image signals from the respective pixels, converted into digital image signals by an A/D converter 13, and then inputted to a camera signal processor 14, which is the image processing device according to the invention.
  • the camera signal processor 14 includes an optical system correction circuit 21, a White Balance (WB) circuit 22, an interpolation processor 23, a gamma correction circuit 24, a Y (luminance) signal processor 25, a C (Chroma) signal processor 26, a band limiting LPF (Low-Pass Filter) 27, and a thinning-out processor 28.
  • the optical system correction circuit 21 performs correction for the imaging device 12 and the optical system such as digital clamp for adjusting a black level to a digital image signal inputted to the camera signal processor 14, defect correction for correcting a defect of the imaging device 12, and shading correction for correcting a fall in an amount of ambient light of the imaging lens 11.
  • the WB circuit 22 applies processing for adjusting a white balance to an image signal, which has passed the optical system correction circuit 21, to make R, G, and B the same for a white subject.
  • the interpolation processor 23 creates pixels with different spatial phases according to interpolation. In other words, the interpolation processor 23 creates three planes (RGB signals in the same spatial position) from RGB signals with phases shifted from one another spatially. Specific interpolation processing in the interpolation processor 23 is a characteristic of the invention. Details of the interpolation processing will be described later.
  • the gamma correction circuit 24 applies gamma correction to the RGB signals in the same spatial position and, then, supplies the RGB signals to the Y signal processor 25 and the C signal processor 26.
  • The gamma correction is processing for multiplying the color signals of R, G, and B outputted from the WB circuit 22 by predetermined gains, in order to correctly represent the gradations of the colors of a subject, such that the photoelectric conversion characteristic of the entire system, including the imaging device 12 and video reproducing means at a later stage thereof, becomes 1.
  • the Y signal processor 25 generates a luminance (Y) signal according to Expression (1) described above.
  • the C signal processor 26 generates color difference signals Cr(R-Y) and Cb(B-Y) according to Expressions (2) and (3) described above.
  • the band limiting LPF 27 is a filter that has a cut-off frequency of 1/8 of a sampling frequency fs.
  • the band limiting LPF 27 reduces a pass band from (1/2) fs to (1/8) fs for the color difference signals Cr and Cb. This is an output adjusted to a TV signal format.
  • Otherwise, a frequency signal equal to or higher than (1/8) fs would be outputted as a false color signal.
  • the thinning-out circuit 28 performs thinning-out for sampling of the color difference signals Cr and Cb.
  • the color (C) signal (the color difference signals Cr and Cb) needs only a band equal to or smaller than 1/4 of a band for the luminance (Y) signal. This is because human eyes have a characteristic that it is possible to recognize a high frequency for luminance but it is difficult to recognize a high frequency for a color (as described above) and a format of a TV signal is also determined accordingly.
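A minimal sketch of this chroma band limiting and thinning-out, assuming a simple FIR low-pass whose cut-off is roughly (1/8)fs and 4:1 decimation (one quarter of the luminance band); the actual filter coefficients and decimation factor of blocks 27 and 28 are not given in the text.

```python
import numpy as np

def band_limit_and_thin(chroma_line, decimation=4):
    """Low-pass filter one line of a color difference signal (Cr or Cb)
    and thin out the samples. Placeholder coefficients, not the patent's."""
    taps = np.array([1, 2, 3, 4, 5, 4, 3, 2, 1], dtype=float)
    taps /= taps.sum()                              # unity gain at DC
    filtered = np.convolve(chroma_line, taps, mode="same")
    return filtered[::decimation]                   # keep every 4th sample
```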
  • the interpolation processing is, as described above, color filter arrangement interpolation for generating a signal of a color component, which is missing in a pixel because of intermittent arrangement of color filters, according to interpolation that uses signals of pixels around the pixel, that is, pixels having different spatial phases.
  • the interpolation processing is also called demosaic.
  • This interpolation processing is processing extremely important for obtaining an image having high resolution. This is because, when a pixel cannot be successfully interpolated by the interpolation processing, a false signal is generated to cause deterioration in resolution and occurrence of false colors. If it is impossible to interpolate G components serving as main components of a luminance signal with high accuracy, resolution of the luminance signal is deteriorated.
  • the "correlation processing” is processing for interpolating a pixel to be interpolated using information on pixels in a direction in which correlation is high.
  • the interpolation processing for a pixel is considered with an input image according to a resolution chart shown in Fig. 3 as an example.
  • the resolution chart is a chart in which the center indicates a signal having a low frequency and positions farther apart from the center indicate signals having higher frequencies.
  • signals having the same frequency have various directions. It is possible to analyze what kind of processing is suitable for various signals by inputting a signal of the resolution chart to a signal processor.
  • When a subject consists of horizontal lines, as at the A point on the ordinate, waviness is observed in the image signal in the vertical direction.
  • In that case, the pixel is interpolated using pixels in the horizontal direction.
  • the interpolation processing that uses the correlation processing is effective for processing for generating a G signal in a spatial position of a pixel in a spatial position where G is not present, for example, a pixel X in the figure in color coding like the Bayer arrangement in which color components G are arranged in a checkered pattern.
  • the interpolation processing is usually applied not only to the G signal but also to remaining R and B signals.
  • attention is paid to the interpolation processing for the G signal for realizing high resolution for a luminance signal.
  • the interpolation processing is processing for generating a signal of a pixel, which should be interpolated, according to interpolation that uses signals of pixels around the pixel having different spatial phases.
  • a procedure of the interpolation processing depends on a layout of the pixels around the pixel, that is, color coding.
  • interpolation processing for the Bayer arrangement will be explained. As it is evident from Fig. 2, color components G of the Bayer arrangement are arranged in a checkered pattern. The interpolation processing is applied to portions where the color components G are not arranged in the Bayer arrangement (hereinafter referred to as "interpolation pixels").
  • Filtering by a band-pass filter is applied in the horizontal and the vertical directions with the spatial position (the interpolation pixel) X as the center to calculate amplitude of an image signal observed from the horizontal and the vertical directions (i.e., waviness of image signals in the horizontal and the vertical directions).
  • the band-pass filter is a filter that has a peak at 1/4 fs (fs is sampling frequency) and outputs a value with respect to a signal having a frequency up to a frequency near limiting resolution of 1/2 fs, as shown in Fig. 4.
  • When an output of the band-pass filter is small, fluctuation of the signal is small, or a low-frequency signal is present, in the direction in which the output is small.
  • Instead of the band-pass filter, it is also possible to use a high-pass filter such as a differential filter.
  • any filter may be used as long as the filter has a band limiting characteristic for removing a DC component.
  • Bpf_H = -(G3 + G8) + 2(G4 + G9) - (G5 + G10)
  • Bpf_V = -(G1 + G2) + 2(G6 + G7) - (G11 + G12)
  • respective correlation values S_H and S_V in the horizontal and the vertical directions are calculated from the following expressions by using the outputs Bpf_H and Bpf_V of the band-pass filter.
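The expressions themselves are not reproduced in this excerpt. Consistent with the ratio S(HV) = BPF(H)/{BPF(H)+BPF(V)} given earlier, they presumably take the form S_H = |Bpf_H| / (|Bpf_H| + |Bpf_V|) and S_V = |Bpf_V| / (|Bpf_H| + |Bpf_V|); which filter output appears in which numerator depends on the sign convention adopted in the original text.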
  • the correlation values S_H and S_V in the horizontal and the vertical directions represent intensity of a correlation of image signals among adjacent pixels in the horizontal and the vertical directions.
  • “Correlation” means a fluctuation ratio of a signal.
  • the correlation values S_H and S_V are represented by a ratio of the filter outputs Bpf_H and Bpf_V of the band-pass filters located in the horizontal and the vertical directions, respectively.
  • an interpolation value X of the interpolation pixel X is calculated in accordance with the following expression using G signals of pixels above and below and on the left and the right of the interpolation pixel X.
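This expression is likewise not reproduced here. A typical form, assumed for illustration, weights the two pixel pairs by the correlation values, for example X = w_H × (Gleft + Gright)/2 + w_V × (Gup + Gdown)/2, where the weights w_H and w_V are derived from S_H and S_V so that the pair lying in the direction of stronger correlation contributes more (w_H + w_V = 1); the exact mapping from the correlation values to the weights is an assumption.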
  • Respective limiting resolutions of G resolution and RB resolution in the Bayer arrangement are shown in Fig. 5.
  • the resolution of G is 1/2 fs in the horizontal and the vertical directions and is (1/2√2) fs in the oblique 45 degree direction.
  • the resolution of G and the resolution of RB are the same in the oblique 45 degree direction.
  • G is a main component in creating a luminance signal, and luminance resolution substantially depends on G, while resolution of a color difference signal substantially depends on RB. Therefore, an increase in resolution for G is important in generating an image having high resolution. Human eyes have a characteristic that it is possible to recognize a high frequency for luminance but it is difficult to recognize a high frequency for a color. Thus, in the Bayer arrangement, a balance between color resolution and luminance resolution does not match the human visual sensitivity characteristic.
  • the applicant proposes color coding in which G components serving as main components in creating a luminance component are arranged to surround respective RB components (as described above). According to this color coding, color resolution is slightly sacrificed but luminance resolution is improved to be about twice as large as that of the Bayer arrangement by increasing G instead of halving the number of pixels of R and B compared with the Bayer arrangement.
  • the inventors consider that the color coding in which the RB components are surrounded by the G components matches the human visual sensitivity characteristic better than the color coding of the Bayer arrangement.
  • In this respect, the color coding described above is more desirable.
  • interpolation processing more advanced than that in the case of the color coding of the Bayer arrangement is required.
  • color coding in which components serving as main components in creating a luminance component, for example, G, are arranged to surround the respective components of R and B.
  • Interpolation processing for the respective examples of color coding will be explained as a first embodiment and a second embodiment, respectively.
  • A color coding example to be subjected to interpolation processing according to the first embodiment of the invention is shown in Fig. 6.
  • pixels are arranged in a square lattice shape such that the pixels are arranged at equal intervals (pixel pitches) d in the horizontal direction (a row direction along pixel rows) and the vertical direction (a column direction along pixel columns).
  • In a first row, pixels are arranged in repetition of RGBG with four pixels in the horizontal direction as a unit.
  • In a second row, only pixels of G are arranged.
  • In a third row, pixels are arranged in repetition of BGRG with four pixels in the horizontal direction as a unit.
  • In a fourth row, only pixels of G are arranged.
  • Color components (in this example, G) serving as main components in creating a luminance (Y) component and other color components (in this example, R and B) are arranged such that the color components G surround the color components R and B.
  • the color components R and B are arranged at intervals of 4d in the horizontal and the vertical directions.
  • the color components R and B are arranged in every other column (in the first embodiment, odd number columns) and every other row (in a second embodiment described later, odd number rows) such that sampling rates in the horizontal and the vertical directions are rates that are half the sampling rate for G.
  • a sampling rate for G is d and a sampling rate for R and B is 2d and resolution in the horizontal and the vertical directions of the color components G is twice as large as that of the color components R and B.
  • a sampling rate for G is d/2√2 and a sampling rate for R and B is 2d/√2.
  • a spatial frequency characteristic will be considered.
  • the sampling rate for G is d
  • the sampling rate for G is d/2√2
  • the spatial frequency characteristic of the color components R and B will be considered in the same manner. However, since intervals of pixel arrangement of R and B are the same and can be considered in the same way, only the component R will be described below.
  • the sampling rate for R is 2d, it is possible to catch a R signal having a frequency up to 1/4 fs on the basis of the sampling theorem.
  • the sampling rate for R is d/2√2, it is possible to catch a signal having a frequency up to (1/4√2) fs on the basis of the sampling theorem.
  • a spatial frequency characteristic in the color coding example shown in Fig. 6 is shown in Fig. 7.
  • the color components G can catch a signal having a frequency up to (1/2) fs in the horizontal and the vertical directions.
  • the color components R and B can catch a signal having a frequency up to (1/√2) fs in the oblique 45 degree direction.
  • the color components R and B can catch a signal having a frequency up to (1/4) fs in the horizontal and the vertical direction.
  • the color components G can catch a signal having a frequency up to (1/4√2) fs in the oblique 45 degree direction.
  • the G limiting resolution is substantially improved from that in the Bayer arrangement by using the color coding example shown in Fig. 6.
  • resolution of a luminance signal including G signals as main components is about twice as large as that of the Bayer arrangement.
  • G signals are present in eight pixels G in total in the horizontal, the vertical, and the oblique directions surrounding the interpolation pixel X, that is, pixels G4, G5, G6, G8, G9, G11, G12, and G13. Interpolation processing for the G signal of the interpolation pixel X is performed using these G signals.
  • In the Bayer arrangement, the pixels G are present only in the four vertical and horizontal directions.
  • In this color coding, the eight pixels G are present in the horizontal, the vertical, and the oblique directions.
  • a correlation between the interpolation pixel X and the pixels around the interpolation pixel X is calculated not only in the horizontal and the vertical directions but also in the oblique directions.
  • the interpolation processing is performed while it is judged which pixels should actually be used for interpolation from a relation between the correlation in the horizontal and the vertical directions and the correlation in the oblique directions.
  • A procedure of the interpolation processing carried out in the interpolation processor 23 is shown in Fig. 10 in the form of a flowchart.
  • the horizontal direction is described as an H direction
  • the vertical direction is described as a V direction
  • an axial direction rotated to the right by 45 degrees with respect to the H direction is described as an NH direction
  • an axial direction rotated to the left by 45 degrees with respect to the H direction is described as an NV direction.
  • the interpolation processing will be hereinafter explained in detail with reference to Fig. 10.
  • the pixel X in Fig. 8 is set as an interpolation pixel (a pixel to be interpolated) (step S11).
  • correlation values in the H and the V directions are calculated for respective pixels around this interpolation pixel X (step S12) .
  • correlation values in the H and the V directions are calculated by applying filtering by a band-pass filter in the H and the V directions with the pixel G4 on the obliquely upper left of the interpolation pixel X as the center.
  • a frequency characteristic (a filter characteristic) of the band-pass filter in the H and the V directions is shown in Fig. 11.
  • Bpf_H_G4 = -G3 + 2 × G4 - G5
  • Bpf_V_G4 = -G1 + 2 × G4 - G8
  • a correlation value S_H_G4 in the H direction for the pixel G4 is calculated in accordance with the following expression.
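The expression is not reproduced in this excerpt. Consistent with the ratio form used elsewhere in the description, it is presumably S_H_G4 = |Bpf_H_G4| / (|Bpf_H_G4| + |Bpf_V_G4|), with the V-direction counterpart S_V_G4 = |Bpf_V_G4| / (|Bpf_H_G4| + |Bpf_V_G4|).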
  • a correlation value S_H_G4 in the H direction indicates a ratio of filter outputs Bpf_H and Bpf_V in the horizontal and the vertical directions of band-pass filters that have the same filter characteristic.
  • correlation values S_H_G6, S_H_G11, and S_H_G13 in the H direction with the pixel G6 on the obliquely upper right, the pixel G11 on the obliquely lower left, and the pixel G13 on the obliquely lower right as the centers, respectively, are also calculated.
  • the correlation values S_H_G4, S_H_G6, S_H_G11, and S_H_G13 in the H and the V directions with the four pixels G4, G6, G11, and G13 surrounding the interpolation pixel X as the centers are calculated (see Fig. 26).
  • two correlation values suitable for substituting the interpolation pixel X are selected out of the four correlation values S_H_G4, S_H_G6, S_H_G11, and S_H_G13 calculated. Specifically, two correlation values having a highest correlation among the four correlation values, that is, two correlation values calculated having highest reliability, are adopted as correlation values of the interpolation pixel X (step S13).
  • the correlation values S_H_G4, S_H_G6, S_H_G11, and S_H_G13 calculated in step S12 are correlation values in the H and the V directions with the four pixels G4, G6, G11, and G13 around the interpolation pixel X as the centers, respectively, and are not correlation values S_H in the H and the V directions with the desired interpolation pixel X as the center.
  • Since it is impossible to form a band-pass filter with the interpolation pixel X as the center, it is impossible to directly calculate a correlation value at the interpolation pixel X.
  • Thus, in step S13, on the basis of the idea that correlation values are substantially equal in adjacent pixels, a correlation value in the H and the V directions having high reliability calculated in the adjacent pixels is substituted for the correlation value S_H in the H and the V directions of the interpolation pixel X.
  • a correlation reliability value Bpf_Max of the pixel G4 is calculated from the following expression.
  • Bpf_Max = |Bpf_H_G4| + |Bpf_V_G4|
  • This calculation of the correlation reliability value Bpf_Max is also applied to the other three pixels G6, G11, and G13, giving values for the pixels at the four points.
  • When an output of the band-pass filter for a pixel is large, it can be said that a signal having large amplitude is present around the pixel and that the signal is generated by an image rather than by noise.
  • When an output of the band-pass filter is small, the signal is buried under noise and reliability of a correlation is low. Thus, it is difficult to trust the correlation value.
  • the respective correlation reliability values Bpf_Max calculated in the pixels at the four points around the interpolation pixel X are compared to select pixels at two points having larger values around the interpolation pixel X.
  • Correlation values in the H and the V directions calculated with these pixels at the two points around the interpolation pixel X as the centers are selected as values substituting for the correlation value in the H and the V directions with the interpolation pixel X as the center.
  • a state in which S_H_G4 and S_H_G13 are selected as reliable correlation values is shown in Fig. 27.
  • Alternatively, correlation values with a large difference value of the filter outputs may be selected instead of correlation values with a large correlation reliability value Bpf_Max, that is, a large total value of the filter outputs.
  • the correlation values in the higher order two places adopted are averaged to obtain one correlation value (step S14).
  • the correlation value averaged is treated as a correlation value in the H and the V directions with the interpolation pixel X as the center in the following processing steps. For example, when the correlation values at the two places of G4 and G6 are selected because their reliability is high, the correlation values in the H and the V directions with these two points as the centers are averaged and regarded as a correlation value in the H and the V directions for the interpolation pixel X (see Fig. 28).
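For illustration only, the following Python sketch traces steps S12 to S14 for the H and V directions, starting from band-pass filter outputs that have already been computed with each of the four surrounding pixels as the center. The dictionary interface, the epsilon guard, and the handling of ties are assumptions; only the reliability value, the ratio form, and the averaging of the two most reliable neighbours follow the description.

```python
def hv_correlation_for_interpolation_pixel(neighbour_bpf):
    """Steps S12-S14 (sketch): derive an H/V correlation value for the
    interpolation pixel X from its four diagonal G neighbours.

    neighbour_bpf maps a neighbour name (e.g. "G4") to the pair
    (bpf_h, bpf_v) of band-pass filter outputs computed with that
    neighbour as the center.
    """
    eps = 1e-6
    scored = []
    for name, (bpf_h, bpf_v) in neighbour_bpf.items():
        bpf_max = abs(bpf_h) + abs(bpf_v)   # correlation reliability value
        s_h = abs(bpf_h) / (abs(bpf_h) + abs(bpf_v) + eps)
        scored.append((bpf_max, s_h, name))
    # Step S13: keep the two neighbours with the largest reliability values.
    scored.sort(reverse=True)
    top_two = scored[:2]
    # Step S14: average their correlation values and treat the result as the
    # H/V correlation value of the interpolation pixel X itself.
    s_h_x = sum(s for _, s, _ in top_two) / len(top_two)
    return s_h_x, [name for _, _, name in top_two]

# Example with hypothetical filter outputs:
# hv_correlation_for_interpolation_pixel(
#     {"G4": (12.0, 3.1), "G6": (11.5, 2.8), "G11": (0.9, 0.7), "G13": (10.8, 2.5)})
```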
  • the NH direction is an axial direction rotated to the right by 45 degrees with respect to the H direction.
  • the NV direction is an axial direction rotated to the left by 45 degrees with respect to the H direction (as described above) .
  • correlation values in the NH and the NV directions are calculated for respective pixels around the interpolation pixel X (step S15) .
  • filtering by the band-pass filter is applied in the oblique directions with the pixel G5 above the interpolation pixel X as the center to calculate correlation values in the NH and the NV directions.
  • a frequency characteristic (a filter characteristic) of the band-pass filters in the NH and the NV directions is shown in Fig. 12.
  • Bpf_NH_G5 = -G1 + 2 × G5 - G9
  • Bpf_NV_G5 = -G2 + 2 × G5 - G8
  • a correlation value S_NH_G5 in the NH direction for the pixel G5 is calculated from the following expression.
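The expression is not reproduced in this excerpt; by analogy with the H-direction case, it is presumably S_NH_G5 = |Bpf_NH_G5| / (|Bpf_NH_G5| + |Bpf_NV_G5|), with S_NV_G5 defined as the complementary ratio.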
  • a correlation value S_NH_G5 in the NH direction represents a ratio of filter outputs Bpf_NH and Bpf_NV in the NH and the NV directions of band-pass filters having the same filter characteristic.
  • correlation values S_NH_G8, S_NH_G9, and S_NH_G12 in the NH direction with the pixel G8 on the left, the pixel G9 on the right, and the pixel G12 below the interpolation pixel X as the centers, respectively, are also calculated.
  • the correlation values S_NH_G5, S_NH_G8, S_NH_G9, and S_NH_G12 in the NH direction with the four pixels G5, G8, G9, and G12 at the four points above, on the left, on the right, and below the interpolation pixel X as the centers, respectively, are calculated (see Fig. 29).
  • two correlation values substituted for the interpolation pixel X are selected out of the four correlation values S_NH_G5, S_NH_G8, S_NH_G9, and S_NH_G12 calculated. Specifically, two correlation values having higher reliability among the four correlation values are adopted as correlation values of the interpolation pixel X (step S16).
  • the respective correlation values S_NH_G5, S_NH_G8, S_NH_G9, and S_NH_G12 calculated in step S15 are correlation values in the NH and the NV directions with the four pixels G5, G8, G9, and G12 around the interpolation pixel X as the centers, respectively, and are not correlation values with the desired interpolation pixel X as the center.
  • Since it is impossible to form a band-pass filter with the interpolation pixel X as the center, it is impossible to directly calculate a correlation value at the interpolation pixel X.
  • Thus, in step S16, on the basis of the idea that correlation values are substantially equal in adjacent pixels, a correlation value in the NH direction having high reliability calculated in the pixels around the interpolation pixel X is substituted for the correlation value in the NH direction of the interpolation pixel X.
  • Reliability for selecting a correlation value is represented using an output value of a band-pass filter calculated in the process for calculating the four correlation values.
  • a correlation reliability value Bpf_Max of the pixel G5 is calculated from the following expression.
  • The correlation reliability value Bpf_Max is calculated for the other pixels G8, G9, and G12 at the three points around the interpolation pixel X in the same manner.
  • When the correlation reliability value Bpf_Max, calculated as a sum of absolute values of the outputs of band-pass filters in two mutually orthogonal directions, is large, it is possible to estimate that the reliability of a correlation value calculated from the outputs of these band-pass filters is high (as described above).
  • Bpf_Max = |Bpf_NH_G5| + |Bpf_NV_G5|
  • The correlation values in the higher order two places adopted are averaged to obtain one correlation value (step S17).
  • a state in which S_NH_G5 and S_NH_G9 are selected as reliable correlation values is shown in Fig. 30.
  • the correlation value averaged is treated as a correlation value in the NH and the NV directions with the interpolation pixel X as the center in the following processing steps. For example, when the correlation values at the two places of G5 and G9 are selected because their reliability is high, the correlation values in the NH and the NV directions with these two points as the centers are averaged and regarded as a correlation value in the NH and the NV directions with the interpolation pixel X as the center (see Fig. 31).
  • a correlation value S_H in the H direction and a correlation value S_NH in the NH direction for the interpolation pixel X are calculated (see Figs. 28 and 31).
  • A direction of the pixels around the interpolation pixel X with which the interpolation pixel X has a strong correlation is judged, that is, directional properties of a correlation are judged (step S18).
  • the correlation values S_H and S_NH calculated in the H direction and the NH direction, respectively, for the interpolation pixel X and a degree of correlation between the interpolation pixel X and pixels surrounding the interpolation pixel X will be considered with an input image formed by a resolution chart shown in Fig. 13 as an example.
  • the resolution chart is a chart in which the center indicates a signal having a low frequency and positions farther apart from the center indicate signals having higher frequencies.
  • signals having the same frequency have various directions. Therefore, it is possible to analyze what kind of processing is suitable for various signals by inputting signals in the resolution chart into a signal processor.
  • a frequency chart representing a relation of correlation values between a spatial phase of an interpolation pixel in the case in which the resolution chart shown in Fig. 13 is set as an input image and pixels around the interpolation pixels in the H and the V directions and the NH and the NV directions is shown in Fig. 14.
  • a straight line in the figure representing the relation of correlation values between the interpolation pixel and the pixels around the interpolation pixel in the H and the V directions and the NH and the NV directions is also referred to as a "correlation line”. It is possible to obtain this correlation line by calculating at least two patterns of correlation values in different directions and plotting the correlation values of the at least two patterns on straight lines of various angles.
  • an alternate long and short dash line (A) is equivalent to the correlation value S_H in the H and the V directions and an alternate long and two short dashes line (B) is equivalent to the correlation value S_NH in the NH and the NV directions. Phases of the correlation lines (A) and (B) are shifted by 45 degrees.
  • the two band-pass filters orthogonal to each other rotated by 45 degrees as a predetermined angle are used.
  • an angle formed by the H and the V directions and the NH/NV directions may be other angles such as 40 degrees to 50 degrees as long as an effect of the same degree as above is realized.
  • In step S18, it is possible to find what degree of correlation the interpolation pixel has with the pixels around it in the H and the V directions and the NH and the NV directions, respectively, that is, the directional properties in which the correlation is strong, by comparing the correlation values indicated by the alternate long and short dash line (A) and the alternate long and two short dashes line (B) at the spatial phase in which the interpolation pixel X on the frequency chart shown in Fig. 14 is located.
  • When directional properties of a correlation of the interpolation pixel X are found in step S18, it is subsequently judged whether the correlation values S_H and S_NH calculated in steps S12 to S14 and steps S15 to S17 have reliability (step S19).
  • In steps S12 to S14 and steps S15 to S17, correlation values in the respective directions with the interpolation pixel X as the center are not directly calculated. Correlation values having high reliability among the correlation values in the H direction and the NH direction calculated with the respective pixels surrounding the interpolation pixel X as the centers are averaged and substituted for the correlation values of the interpolation pixel X. In such a case, it is particularly important to check the reliability of a correlation.
  • the frequency chart in the case in which the resolution chart shown in Fig. 13 is set as an input image, that is, the relation of correlation values between the interpolation pixel and the pixels around the interpolation pixel in the H and the V directions and the NH and the NV directions is as shown in Fig. 14. It is possible to arrange two pairs of band-pass filters orthogonal to each other in the horizontal and the vertical directions and the directions rotated by 45 degrees with respect to the horizontal and the vertical directions, respectively, and calculate the correlation value S_H in the H direction and the correlation value S_NH in the NH direction from outputs of the respective pairs of band-pass filters. Phases of the correlation lines (A) and (B) of S_H and S_NH are shifted by 45 degrees from each other.
  • a correlation line formed by an absolute value calculated by deducting 0.5 from S_H (the correlation line (A) in Fig. 14) and a correlation line formed by an absolute value calculated by deducting 0.5 from S_NH (the correlation line (B) in Fig. 14) are plotted, respectively.
  • When the correlation values S_H and S_NH are ideal correlation values on the correlation lines shown in Fig. 14, that is, correlation values having high reliability, a sum of the absolute value calculated by deducting 0.5 from S_H and the absolute value calculated by deducting 0.5 from S_NH is a value close to 0.5.
  • TH1 and TH2 are values close to 0.5 that satisfy the relation TH1 < TH2 (when TH1 and TH2 are set close to 0.5, the judgment condition is made stricter).
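A minimal sketch of this reliability judgment, with illustrative threshold values; TH1 and TH2 are not specified in the text and the exact comparison used in step S19 is an assumption.

```python
def correlation_is_reliable(s_h, s_nh, th1=0.4, th2=0.6):
    """Step S19 (sketch): the sum of the deviations of S_H and S_NH from 0.5
    should lie near 0.5 for an ideal (trustworthy) pair of correlation values."""
    deviation_sum = abs(s_h - 0.5) + abs(s_nh - 0.5)
    return th1 <= deviation_sum <= th2
```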
  • When it is judged in step S19 that the correlation values S_H and S_NH have reliability, a pixel that interpolates the interpolation pixel X is extracted from the pixels around the interpolation pixel X in the direction in which the interpolation pixel X is judged as having a strong correlation in step S18, and pixel interpolation is performed (step S20).
  • A relation between directional properties of a correlation of the interpolation pixel X and the pixels around the interpolation pixel X used for interpolation is shown in Fig. 15.
  • linear interpolation is performed.
  • the linear interpolation is based on a relation between a correlation value and an interpolation value for, for example, interpolating the interpolation pixel X by weighting an interpolation value at the (b) point and an interpolation value at the (f) point. For example, although there is no pixel around the interpolation pixel X that is right on a direction equivalent to a P point in Fig.
  • the correlation value S_NH calculated from outputs of the two band-pass filters orthogonal to each other rotated by 45 degrees with respect to the horizontal and the vertical axes is also used.
  • When the correlation value S_H in the H and the V directions is exactly 0.5, it is impossible to specify directional properties of the correlation only with S_H.
  • the correlation value S_NH in the NH and the NV directions is shifted by 45 degrees with respect to S_H (see Fig.
  • the color coding for surrounding the RB components with the G components is performed (see Fig. 6).
  • In the interpolation processing, it is possible to cope with interpolation in the oblique directions as well.
  • a processing method is adaptively changed according to reliability. For example, reliability of a correlation is evaluated in step S19 and, when the correlation is reliable, resolution-oriented interpolation is performed from a direction of the correlation. However, when the correlation is not reliable, S/N-oriented interpolation is performed. Thus, it is possible to realize accurate interpolation processing.
  • the resolution-oriented interpolation only has to be adaptively changed to the S/N-oriented interpolation.
  • An example of a structure of an interpolation processor 23A of a hardware configuration that executes the interpolation processing according to the first embodiment is shown in Fig. 16.
  • a G4HV-direction-correlation-value calculating circuit 31 calculates a correlation value in the H and the V directions by applying filtering processing to the H and the V directions with the pixel G4 on the obliquely upper left of the interpolation pixel X as the center. For example, it is possible to constitute the G4HV-direction-correlation-value calculating circuit 31 using a band-pass filter that has the frequency characteristic shown in Fig. 11. Specifically, the G4HV-direction-correlation-value calculating circuit 31 calculates a correlation value S_H_G4 in the H direction for the pixel G4 from the following expression.
  • the G4HV-direction-correlation-value calculating circuit 31 further calculates a correlation value S_V_G4 in the V direction from the following expression.
  • respective HV-direction-correlation-value calculating circuits 32, 33, and 34 calculate correlation values S_H_G6, S_H_G11, and S_H_G13 in the H direction and correlation values S_V_G6, S_V_G11, and S_V_G13 in the V direction with the pixel G6 on the obliquely upper right, the pixel G11 on the obliquely lower left, and the pixel G13 on the obliquely lower right of the interpolation pixel X as the centers, respectively.
  • Processing equivalent to step S12 of the flowchart shown in Fig. 10 is realized by the respective correlation-value calculating circuits 31 to 34.
  • a selection circuit 35 selects, for each of the H direction and the V direction, a correlation value applied to the interpolation pixel X out of four correlation values. Specifically, the selection circuit 35 compares output values of the band-pass filter calculated in the course of calculating correlation values in the respective correlation-value calculating circuits 31 to 34 and adopts two correlation values having a largest correlation reliability value Bpf_Max, that is, two correlation values having highest reliability, as correlation values of the interpolation pixel X out of the four correlation values. Processing equivalent to step S13 of the flowchart shown in Fig. 10 is realized by the selection circuit 35.
  • An average calculating circuit 36 calculates an average of the correlation values in the higher order two places selected by the selection circuit 35 and outputs the average as one correlation value S_H and S_V from the H and the V directions, respectively. Processing equivalent to step S14 of the flowchart shown in Fig. 10 is realized by the average calculating circuit 36.
  • a G5NH-and-NV-direction-correlation-value calculating circuit 37 calculates correlation values in the NH and the NV directions by applying filtering processing to the NH and the NV directions orthogonal to each other with the pixel G5 above the interpolation pixel X as the center. For example, it is possible to constitute the G5NH-and-NV-direction-correlation-value calculating circuit 37 using a band-pass filter that has the frequency characteristic shown in Fig. 12. Specifically, the G5NH-and-NV-direction-correlation-value calculating circuit 37 calculates a correlation value S_NH_G5 in the NH direction for the pixel G5 from the following expression.
  • the G5NH-and-NV-direction-correlation-value calculating circuit 37 calculates a correlation value S_NV_G5 in the NV direction from the following expression.
  • respective NH-and-NV-direction-correlation-value calculating circuits 38, 39, and 40 calculate correlation values S_NH_G8, S_NH_G9, and S_NH_G12 in the NH direction and correlation values S_NV_G8, S_NV_G9, and S_NV_G12 in the NV direction with the pixel G8 on the left, the pixel G9 on the right, and the pixel G12 below the interpolation pixel X as the centers, respectively.
  • Processing equivalent to step S15 of the flowchart shown in Fig. 10 is realized by the respective correlation-value calculating circuits 37 to 40.
  • a selection circuit 41 selects, for each of the NH direction and the NV direction, a correlation value applied to the interpolation pixel X out of four correlation values. Specifically, the selection circuit 41 compares output values of the band-pass filter calculated in the course of calculating the four correlation values and adopts two correlation values having a largest correlation reliability value Bpf_Max, that is, two correlation values having highest reliability among the four correlation values, as correlation values of the interpolation pixel X. Processing equivalent to step S16 of the flowchart shown in Fig. 10 is realized by the selection circuit 41.
  • An average calculating circuit 42 calculates an average of the correlation values in the higher order two places selected by the selection circuit 41 and outputs the average as one correlation value S_NH and S_NV from the NH and the NV directions; respectively. Processing equivalent to step S17 of the flowchart shown in Fig. 10 is realized by the average calculating circuit 42.
  • a comparator 43 determines the directional properties of the correlation of the interpolation pixel X, that is, in which direction the interpolation pixel X has a strong correlation with the pixels around it. Specifically, the comparator 43 specifies the directional properties of the correlation by comparing the correlation values S_H and S_V in the H and the V directions calculated by the average calculating circuit 36 and the correlation values S_NH and S_NV in the NH and the NV directions calculated by the average calculating circuit 42 against the correlation line diagram in Fig. 14. Processing equivalent to step S18 of the flowchart shown in Fig. 10 is realized by the comparator 43.
  • a judging circuit 44 judges, concerning the result of the calculation by the comparator 43, that is, the directional properties with a strong correlation, whether the correlation is reliable. Specifically, when the two correlation values S_H and S_NH calculated by the average calculating circuits 36 and 42, respectively, are on the two correlation straight lines (A) and (B) shown in Fig. 14, the judging circuit 44 considers that there is a correlation. When these two correlation values S_H and S_NH are not on the two correlation straight lines (A) and (B), the judging circuit 44 considers that there is no correlation. The result of the judgment by the judging circuit 44 is supplied to an interpolation circuit 45. Processing equivalent to step S19 of the flowchart shown in Fig. 10 is realized by the judging circuit 44.
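  • The comparator and judging steps (S18 and S19) can be pictured as in the following sketch. The correlation straight lines (A) and (B) of Fig. 14 are represented by a caller-supplied predicate because their exact equations are not reproduced here; every name and threshold in the sketch is an illustrative assumption, not the patent's definition.

    def judge_direction(s_h, s_v, s_nh, s_nv, on_correlation_lines):
        # Directional property: the direction whose correlation value is strongest
        # (using the same "larger value = stronger correlation" convention as the
        # earlier sketch).
        direction = max([('H', s_h), ('V', s_v), ('NH', s_nh), ('NV', s_nv)],
                        key=lambda item: item[1])[0]
        # Reliability: the pair (S_H, S_NH) is checked against the correlation
        # straight lines (A) and (B); if it lies on neither, the correlation is
        # judged unreliable.
        reliable = on_correlation_lines(s_h, s_nh)
        return direction, reliable

    # Example of a stand-in predicate: treat points within a tolerance of a
    # hypothetical straight line s_nh = a * s_h + b as "on" a correlation line.
    def near_line(s_h, s_nh, a=1.0, b=0.0, tol=0.1):
        return abs(s_nh - (a * s_h + b)) <= tol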
  • the interpolation circuit 45 includes a first interpolation circuit 451 that applies resolution-oriented interpolation processing to the interpolation pixel X and a second interpolation circuit 452 that applies S/N-oriented interpolation processing to the interpolation pixel X.
  • the interpolation circuit 45 adaptively entrusts one of the interpolation circuits 451 and 452 with the interpolation processing according to the reliability of the correlation supplied from the judging circuit 44.
  • the first interpolation circuit 451 interpolates the interpolation pixel X using pixels in a direction having correlation according to a judgment result that reliability of a correlation is high from the judging circuit 44.
  • the interpolation processing performed using pixels in a direction in which there is a correlation is the resolution-oriented interpolation processing. Processing equivalent to step S20 of the flowchart shown in Fig. 10 is realized by the first interpolation circuit 451.
  • the second interpolation circuit 452 interpolates the interpolation pixel X using an average of pixels around the interpolation pixel X according to a judgment result that reliability of a correlation is low from the judging circuit 44.
  • the second interpolation circuit 452 interpolates the interpolation pixel X in accordance with the following expression using image signals of four near pixels around the interpolation pixel X.
  • the interpolation processing performed using an average of pixels around the interpolation pixel X in this way is the S/N-oriented interpolation processing. Processing equivalent to step S21 of the flowchart shown in Fig. 10 is realized by the second interpolation circuit 452.
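  • A minimal sketch of the adaptive interpolation of steps S20 and S21 follows. The directional (resolution-oriented) branch averages the two pixels lying along the correlated direction, and the S/N-oriented branch averages the four near pixels around X; the exact expressions and pixel offsets used by the patent are not reproduced, so the ones below are assumptions.

    def interpolate_x(img, y, x, step, reliable):
        if reliable:
            # Resolution-oriented: use the pixels along the correlated direction.
            dy, dx = step
            return (img[y - dy, x - dx] + img[y + dy, x + dx]) / 2.0
        # S/N-oriented: plain average of the four near pixels around X
        # (above, below, left and right of the interpolation position).
        return (img[y - 1, x] + img[y + 1, x] +
                img[y, x - 1] + img[y, x + 1]) / 4.0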
  • the respective HV-direction-correlation-value calculating circuits 31 to 34 and the respective NH-and-NV-direction-correlation-value calculating circuits 37 to 40 are constituted using the band-pass filter.
  • a component constituting the correlation value calculating circuits is not limited to the band-pass filter.
  • G serving as a main component in creating a luminance component is arranged in eight pixels in the horizontal, the vertical, and the oblique directions with respect to the interpolation pixel X, that is, the pixels G4, G5, G6, G8, G9, G11, G12, and G13, thereby realizing higher luminance resolution than the Bayer arrangement in the past.
  • plural correlation values are calculated for a solid-state imaging device that has filters of such color coding and are compared with the correlation line diagram. This makes it possible to judge correlation properties for all directions (360°). In other words, it is possible to interpolate the G signal more accurately for the color coding shown in Fig. 6 and to obtain a luminance signal having higher resolution than that of the Bayer arrangement in the past.
  • a method for interpolation is changed depending on whether reliability of a correlation is high or low.
  • the resolution-oriented interpolation is performed using information on pixels in a direction in which reliability of a correlation is high.
  • the S/N-oriented interpolation is performed using a value obtained by averaging information on the pixels around the interpolation pixel. This makes it possible to realize high-performance interpolation processing that combines high resolution with good S/N.
  • a color coding example to be subjected to interpolation processing according to a second embodiment of the invention is shown in Fig. 17.
  • respective pixel pitches in the horizontal and the vertical directions are set to √2d and respective pixels are shifted by 1/2 of the pixel pitch √2d for each row and each column (pixels are shifted by 1/2 of the pixel pitch in the horizontal direction in odd number rows and even number rows and shifted by 1/2 of the pixel pitch in the vertical direction in odd number columns and even number columns).
  • a first row is a GR line in which G and R are alternately arranged
  • a second row is a G line in which only G is arranged
  • a third row is a GB line in which B and G are alternately arranged
  • a fourth row is a G line in which only G is arranged.
  • color components (in this example, G) serving as main components in creating a luminance (Y) component and other color components (in this example, R and B) are arranged such that the color components G surround the color components R and B.
  • R and B are arranged at intervals of 2√2d in the horizontal and the vertical directions.
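  • Ignoring the half-pitch spatial offsets, the row pattern of this coding (GR line, all-G line, GB line, all-G line, repeated) can be listed as follows; the function name and the phase of R and B within their rows are assumptions made only for illustration.

    def fig17_row_pattern(rows, cols):
        pattern = []
        for r in range(rows):
            if r % 2 == 1:
                pattern.append(['G'] * cols)              # all-G line
            elif r % 4 == 0:
                pattern.append(['G', 'R'] * (cols // 2))  # GR line
            else:
                pattern.append(['G', 'B'] * (cols // 2))  # GB line
        return pattern

    # Within a GR (or GB) line, R (or B) recurs every second pixel, and GR (or GB)
    # lines recur every fourth row of the interleaved arrangement.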
  • This color coding is exactly the color coding obtained by inclining the color arrangement of the color coding example shown in Fig. 6 by 45 degrees.
  • a sampling rate for G is d/√2 and a sampling rate for R and B is 2√2d.
  • R and B are arranged in every other column (in this embodiment, odd number columns) and every other row (in this embodiment, odd number rows) such that their sampling rates in the horizontal and the vertical directions are 1/4 of the sampling rate for G (2√2d is four times d/√2). Therefore, the resolution of G in the horizontal and the vertical directions is four times as high as that of the color components R and B.
  • a sampling rate for G is d and a sampling rate for R and B is 2d.
  • a spatial frequency characteristic will be considered.
  • the sampling rate for G is d/√2
  • the sampling rate for G is d, it is possible to catch a G signal having a frequency up to (1/4) fs according to the sampling theorem.
  • the color components R and B will be considered in the same manner. However, since intervals of pixel arrangement of R and B are the same, only R will be described here. Concerning a spatial frequency characteristic of R, in the horizontal and the vertical directions, since the sampling rate for R is 2√2d, it is possible to catch an R signal having a frequency up to (1/(4√2)) fs according to the sampling theorem. In the oblique 45 degree direction, since the sampling rate for R is 2d, it is possible to catch an R signal having a frequency up to (1/2) fs according to the sampling theorem.
  • a solid-state imaging device that has the oblique pixel arrangement can obtain high resolution because pixel pitches are narrow compared with the pixel arrangement of the square lattice shape.
  • if resolution is kept the same as the resolution of the pixel arrangement of the square lattice shape, it is possible to arrange pixels at pixel pitches wider than the pixel pitches of the pixel arrangement of the square lattice shape.
  • This makes it possible to widen the openings of the pixels. As a result, it is possible to improve S/N.
  • the second embodiment is characterized by interpolation processing for the color coding example shown in Fig. 17.
  • the interpolation processing will be hereinafter specifically explained.
  • Fig. 18 is a diagram representing the color coding example shown in Fig. 17 as a square lattice. It is seen that the arrangement of G in the color coding example shown in Fig. 6 and the arrangement of G in Fig. 18 are in a relation in which one arrangement is rotated from the other by 45 degrees. Consequently, it can be seen that it is possible to accurately interpolate G pixels at the spatial positions of the R and B pixels by performing the interpolation processing according to the first embodiment in a positional relation rotated by 45 degrees with respect to the color coding example shown in Fig. 17.
  • An example of a structure of an interpolation processor 23B that executes the interpolation processing according to the second embodiment is shown in Fig. 19.
  • the interpolation processor 23B according to this embodiment has a two-stage structure including a pre-stage interpolation processor 231 and a post-stage interpolation processor 232.
  • the pre-stage interpolation processor 231 basically performs interpolation processing in the same manner as the first embodiment. Consequently, the pre-stage interpolation processor 231 can interpolate G pixels in spatial positions of R and B pixels in the pixel arrangement of G in Fig. 18.
  • a result of the interpolation processing by the pre-stage interpolation processor 231, that is, a result of interpolating the G pixels in the spatial positions of the R and B pixels in the pixel arrangement of G in Fig. 18 is shown in Fig. 20.
  • a G arrangement in Fig. 20 is accurately interpolated according to this interpolation processing. When attention is paid to the arrangement of G in Fig. 20, it is seen that G is arranged in a checkered pattern.
  • the post-stage interpolation processor 232 basically performs interpolation processing in the same procedure as the interpolation processing for the Bayer arrangement described above to apply interpolation processing to a pixel arrangement of G in Fig. 20. According to this interpolation processing for the Bayer arrangement, G is generated for all the pixels from G arranged in the checkered pattern.
  • a result of the interpolation processing by the post-stage interpolation processor 232, that is, a result of interpolating G arranged in the checkered pattern to generate G for all the pixels, is shown in Fig. 21.
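  • The two-stage flow can be sketched as below: the pre-stage result is G arranged in a checkered pattern (Fig. 20), and the post-stage fills in the remaining positions. A plain four-neighbour average is used here in place of the direction-adaptive Bayer-type interpolation that the patent actually applies, so this is only an outline of the data flow under that simplifying assumption.

    import numpy as np

    def post_stage_fill(g, has_g):
        # g:     2-D array holding G values where has_g is True (checkered pattern)
        # has_g: boolean mask of positions that already carry a G value
        out = g.astype(float).copy()
        h, w = g.shape
        for yy in range(1, h - 1):
            for xx in range(1, w - 1):
                if not has_g[yy, xx]:
                    out[yy, xx] = (g[yy - 1, xx] + g[yy + 1, xx] +
                                   g[yy, xx - 1] + g[yy, xx + 1]) / 4.0
        return out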
  • A comparison of the Bayer arrangement, the color coding shown in Fig. 6, and the color coding shown in Fig. 17 is shown in Fig. 22.
  • the resolution of the color coding shown in Fig. 17 is more advantageous in the horizontal and the vertical directions.
  • images including linear components are often distributed in the horizontal and the vertical directions. From this viewpoint, it can be said that the color coding shown in Fig. 17 is more advantageous than the color coding shown in Fig. 8.
  • the color coding shown in Fig. 17 has an advantage that it is possible to set an output size twice as large as the number of pixels. It goes without saying that it is seen from Fig. 22 that the resolution is remarkably high compared with that of the Bayer arrangement.
  • In the second embodiment, it is possible to obtain the operational effects described above by using the interpolation processor 23B directly as the interpolation processor 23. It is also possible to adopt a structure like an interpolation processor 23B' shown in Fig. 23, formed by modifying the interpolation processor 23B.
  • a changeover switch 233 is provided on an input side of the post-stage interpolation processor 232 to make it possible to selectively input one of an output signal of the pre-stage interpolation processor 231 and an output signal of the WB circuit 22 (see Fig. 1) (an input signal of the pre-stage interpolation processor 231) to the post-stage interpolation processor 232.
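  • In other words, the modification amounts to a selectable input to the post-stage processor; a minimal sketch, with hypothetical function names standing in for the hardware blocks, is:

    def interpolation_processor_23b_prime(signal, use_pre_stage, pre_stage, post_stage):
        # The changeover switch 233 decides whether the post-stage processor
        # receives the pre-stage output or the white-balanced input directly.
        selected = pre_stage(signal) if use_pre_stage else signal
        return post_stage(selected)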
  • the invention can be suitably applied, in particular, to processing for interpolating, in an image signal to which color coding is applied, the color filter arrangement of the main components for calculating a luminance component.
  • the color coding uses color filters in which the color components serving as main components in calculating a luminance component are arranged so as to surround each of the other color components, with respect to a pixel arrangement in which respective pixels are arranged in a square lattice shape at equal intervals in the horizontal direction and the vertical direction.
  • the gist of the invention is not limited to specific color coding.
  • the two color coding examples shown in Figs. 8 and 17 are used as color coding in which each of R and B is surrounded by G serving as a main component in creating a luminance component.
  • color coding to which the invention is applicable is not limited to these two color coding examples.
  • it is possible to apply the interpolation processing according to the invention even to, for example, color coding in which, with respect to a pixel arrangement of a square lattice shape, pixels are arranged in repetition of RGGG with four pixels in the horizontal direction as a unit in a first row, only G pixels are arranged in a second row, pixels are arranged in repetition of GGBG with four pixels in the horizontal direction as a unit in a third row, only G pixels are arranged in a fourth row, and, in the following rows, pixels are arranged with these four rows as a unit.
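  • The 4 x 4 repetition unit mentioned above can be written out and checked directly: every R or B pixel in such a coding is surrounded by eight G pixels, which is the property the interpolation processing relies on. The small self-check below is an illustration only; the unit is tiled periodically, so neighbours wrap around.

    UNIT = [list('RGGG'),
            list('GGGG'),
            list('GGBG'),
            list('GGGG')]

    def surrounded_by_g(unit, y, x):
        # Check the eight neighbours of (y, x) on the periodically tiled unit.
        n = len(unit)
        return all(unit[(y + dy) % n][(x + dx) % n] == 'G'
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))

    assert all(surrounded_by_g(UNIT, y, x)
               for y in range(4) for x in range(4)
               if UNIT[y][x] in ('R', 'B'))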

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Color Television Image Signal Generators (AREA)
EP06767025A 2005-06-21 2006-06-20 Bildverarbeitungsvorrichtung und -verfahren, abbildungsvorrichtung und computerprogramm Withdrawn EP1793620A4 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2005180266 2005-06-21
JP2006169322A JP5151075B2 (ja) 2005-06-21 2006-06-19 画像処理装置及び画像処理方法、撮像装置、並びにコンピュータ・プログラム
PCT/JP2006/312366 WO2006137419A1 (ja) 2005-06-21 2006-06-20 画像処理装置及び画像処理方法、撮像装置、並びにコンピュータ・プログラム

Publications (2)

Publication Number Publication Date
EP1793620A1 true EP1793620A1 (de) 2007-06-06
EP1793620A4 EP1793620A4 (de) 2012-04-18

Family

ID=37570448

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06767025A Withdrawn EP1793620A4 (de) 2005-06-21 2006-06-20 Bildverarbeitungsvorrichtung und -verfahren, abbildungsvorrichtung und computerprogramm

Country Status (6)

Country Link
US (1) US7982781B2 (de)
EP (1) EP1793620A4 (de)
JP (1) JP5151075B2 (de)
KR (1) KR101253760B1 (de)
CN (4) CN101924947B (de)
WO (1) WO2006137419A1 (de)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2683167A1 (de) * 2011-02-28 2014-01-08 Fujifilm Corporation Farbbildformungsvorrichtung
US8922683B2 (en) 2012-07-06 2014-12-30 Fujifilm Corporation Color imaging element and imaging apparatus
RU2551649C2 (ru) * 2011-02-28 2015-05-27 Фуджифилм Корпорэйшн Устройство формирования цветного изображения
US9060159B2 (en) 2011-12-28 2015-06-16 Fujifilm Corporation Image processing device and method, and imaging device
US9100558B2 (en) 2011-12-27 2015-08-04 Fujifilm Corporation Color imaging element and imaging apparatus
US9143747B2 (en) 2012-07-06 2015-09-22 Fujifilm Corporation Color imaging element and imaging device
US9143760B2 (en) 2011-12-27 2015-09-22 Fujifilm Corporation Solid-state imaging device
US9148560B2 (en) 2012-07-06 2015-09-29 Fujifilm Corporation Imaging device, and image processing method
US9159758B2 (en) 2012-07-06 2015-10-13 Fujifilm Corporation Color imaging element and imaging device
US9167183B2 (en) 2011-12-28 2015-10-20 Fujifilm Corporation Imaging element and imaging apparatus employing phase difference detection pixels pairs
US9172926B2 (en) 2012-07-06 2015-10-27 Fujifilm Corporation Imaging device, and image processing method
US9172927B2 (en) 2012-07-06 2015-10-27 Fujifilm Corporation Imaging device, and image processing method
US9184195B2 (en) 2012-07-06 2015-11-10 Fujifilm Corporation Color imaging element and imaging device
US9204020B2 (en) 2011-12-27 2015-12-01 Fujifilm Corporation Color imaging apparatus having color imaging element
US9210387B2 (en) 2012-07-06 2015-12-08 Fujifilm Corporation Color imaging element and imaging device
US9219894B2 (en) 2012-07-06 2015-12-22 Fujifilm Corporation Color imaging element and imaging device
US9237319B2 (en) 2012-06-19 2016-01-12 Fujifilm Corporation Imaging device and automatic focus adjustment method
US9270955B2 (en) 2012-07-06 2016-02-23 Fujifilm Corporation Imaging apparatus that generates three-layer color data on the basis of a first mosaic image
US9313466B2 (en) 2011-03-09 2016-04-12 Fujifilm Corporation Color imaging element
US9325954B2 (en) 2011-12-28 2016-04-26 Fujifilm Corporation Color imaging element having phase difference detection pixels and imaging apparatus equipped with the same
US9325957B2 (en) 2011-12-28 2016-04-26 Fujifilm Corporation Image processing device, method, recording medium and imaging device
US9332199B2 (en) 2012-06-07 2016-05-03 Fujifilm Corporation Imaging device, image processing device, and image processing method
US9363493B2 (en) 2012-08-27 2016-06-07 Fujifilm Corporation Image processing apparatus, method, recording medium and image pickup apparatus
US9369686B2 (en) 2012-12-07 2016-06-14 Fujifilm Corporation Image processing device, image processing method, and recording medium
US9380230B2 (en) 2012-12-05 2016-06-28 Fujifilm Corporation Image capture device, anomalous oblique incident light detection method, and recording medium
US9431444B2 (en) 2011-02-21 2016-08-30 Fujifilm Corporation Single-plate color imaging element including color filters arranged on pixels
US9883152B2 (en) 2015-05-11 2018-01-30 Canon Kabushiki Kaisha Imaging apparatus, imaging system, and signal processing method

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100994003B1 (ko) 2001-01-31 2010-11-11 가부시키가이샤 히타치세이사쿠쇼 데이터 처리 시스템 및 데이터 프로세서
US7697046B2 (en) * 2005-12-07 2010-04-13 Hoya Corporation Image signal processing device and method of image signal processing including pixel array pattern determination
JP5085140B2 (ja) 2007-01-05 2012-11-28 株式会社東芝 固体撮像装置
JP4982897B2 (ja) * 2007-08-27 2012-07-25 株式会社メガチップス 画像処理装置
TWI422020B (zh) * 2008-12-08 2014-01-01 Sony Corp 固態成像裝置
JP5134528B2 (ja) * 2008-12-25 2013-01-30 ブラザー工業株式会社 画像読取装置
US8198578B2 (en) * 2009-06-23 2012-06-12 Nokia Corporation Color filters for sub-diffraction limit-sized light sensors
WO2012124184A1 (ja) * 2011-03-11 2012-09-20 富士フイルム株式会社 撮像装置およびその動作制御方法ならびに撮像システム
JP5378626B2 (ja) * 2011-03-11 2013-12-25 富士フイルム株式会社 撮像装置およびその動作制御方法
JP5353945B2 (ja) * 2011-05-13 2013-11-27 株式会社ニコン 画像処理装置および画像処理プログラム並びに電子カメラ
JP2013009293A (ja) * 2011-05-20 2013-01-10 Sony Corp 画像処理装置、画像処理方法、プログラム、および記録媒体、並びに学習装置
JP2013038504A (ja) * 2011-08-04 2013-02-21 Sony Corp 撮像装置、および画像処理方法、並びにプログラム
CN104025578B (zh) * 2011-12-27 2015-09-02 富士胶片株式会社 彩色摄像元件
JP5621058B2 (ja) * 2011-12-27 2014-11-05 富士フイルム株式会社 カラー撮像素子
JP6012375B2 (ja) * 2012-09-28 2016-10-25 株式会社メガチップス 画素補間処理装置、撮像装置、プログラムおよび集積回路
WO2015076139A1 (ja) * 2013-11-20 2015-05-28 京セラドキュメントソリューションズ株式会社 画像圧縮伸張装置および画像形成装置
KR20150091717A (ko) * 2014-02-03 2015-08-12 삼성전자주식회사 이미지의 색상 신호를 보간하는 방법 및 장치
KR102512521B1 (ko) * 2015-10-12 2023-03-21 삼성전자주식회사 텍스쳐 처리 방법 및 장치
EP3497928B1 (de) * 2016-08-31 2020-11-18 Huawei Technologies Co., Ltd. Multikamerasystem für zoom
CN106454286A (zh) * 2016-09-29 2017-02-22 杭州雄迈集成电路技术有限公司 一种g模式色彩滤波阵列
CN106303474A (zh) * 2016-09-29 2017-01-04 杭州雄迈集成电路技术有限公司 一种基于g模式色彩滤波阵列的去马赛克方法及装置
CN106298826A (zh) * 2016-09-29 2017-01-04 杭州雄迈集成电路技术有限公司 一种图像传感器
CN107529046B (zh) 2017-02-23 2024-03-08 思特威(深圳)电子科技有限公司 一种色彩滤镜阵列及图像传感器
EP3709623A1 (de) * 2019-03-15 2020-09-16 Aptiv Technologies Limited Verfahren zur simulation einer digitalen bildgebungsvorrichtung
US20220368867A1 (en) * 2019-09-26 2022-11-17 Sony Semiconductor Solutions Corporation Imaging device
CN112954436B (zh) * 2019-11-26 2023-04-25 西安诺瓦星云科技股份有限公司 视频图像画质调节方法、装置和视频处理设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030218680A1 (en) * 2002-04-26 2003-11-27 Ryuichi Shiohara Color area sensor and image pick up circuit
US6714242B1 (en) * 1997-12-08 2004-03-30 Sony Corporation Image processing apparatus, image processing method, and camera
US20040105015A1 (en) * 2002-07-12 2004-06-03 Olympus Optical Co., Ltd. Image processing device and image processing program
US20050058361A1 (en) * 2003-09-12 2005-03-17 Canon Kabushiki Kaisha Image processing apparatus

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3935588A (en) * 1971-04-20 1976-01-27 Matsushita Electric Industrial Co., Ltd. Color image pick-up system using strip filter
US6278803B1 (en) * 1990-04-26 2001-08-21 Canon Kabushiki Kaisha Interpolation apparatus for offset sampling signals
JP3064598B2 (ja) * 1991-12-02 2000-07-12 松下電器産業株式会社 相関検出補間方法および装置
JP2940762B2 (ja) * 1993-06-28 1999-08-25 三洋電機株式会社 手振れ補正装置を有するビデオカメラ
JP2816095B2 (ja) * 1994-04-26 1998-10-27 三洋電機株式会社 ビデオカメラの信号処理回路
JP3787927B2 (ja) * 1996-11-18 2006-06-21 ソニー株式会社 撮像装置及びカラー画像信号の処理方法
JP3704238B2 (ja) * 1997-03-31 2005-10-12 株式会社リコー 撮像装置
JP3212917B2 (ja) * 1997-08-26 2001-09-25 エヌイーシービューテクノロジー株式会社 走査線補間装置および走査線補間方法
JP4269369B2 (ja) * 1997-11-28 2009-05-27 ソニー株式会社 カメラ信号処理装置及びカメラ信号処理方法
JPH11220749A (ja) * 1997-11-28 1999-08-10 Sony Corp カメラ信号処理装置及びカメラ信号処理方法
EP1122651B1 (de) * 2000-02-03 2010-05-19 Hitachi, Ltd. Verfahren und Gerät zum Wiederauffinden und Ausgeben von Dokumenten und Speichermedium mit entspechendem Program
JP2002024815A (ja) * 2000-06-13 2002-01-25 Internatl Business Mach Corp <Ibm> 拡大画像データに変換するための画像変換方法、画像処理装置、および画像表示装置
JP3862506B2 (ja) * 2001-02-06 2006-12-27 キヤノン株式会社 信号処理装置およびその信号処理方法およびその動作処理プログラムおよびそのプログラムを記憶した記憶媒体
JP4011861B2 (ja) * 2001-03-29 2007-11-21 キヤノン株式会社 信号処理装置及び方法
JP3965457B2 (ja) 2001-04-12 2007-08-29 有限会社ビーテック 単板カラーカメラの市松配列緑色信号等インタリーブの関係にある画素信号の補間方法
JP3717863B2 (ja) * 2002-03-27 2005-11-16 三洋電機株式会社 画像補間方法
KR101001462B1 (ko) * 2002-08-19 2010-12-14 소니 주식회사 화상 처리 장치 및 방법, 영상 표시 장치와 기록 정보재생 장치
JP2004229055A (ja) * 2003-01-24 2004-08-12 Pentax Corp 画像処理装置
JP2004299055A (ja) * 2003-03-28 2004-10-28 Seiko Epson Corp 液体噴射装置、インクジェット式記録装置及び液体注入装置
JP4303525B2 (ja) * 2003-06-09 2009-07-29 富士フイルム株式会社 補間画素生成装置および方法
JP4298445B2 (ja) * 2003-09-12 2009-07-22 キヤノン株式会社 画像処理装置
JP2005107037A (ja) 2003-09-29 2005-04-21 Seiko Epson Corp 混合方法、トナーの製造方法およびトナー
JP3960965B2 (ja) * 2003-12-08 2007-08-15 オリンパス株式会社 画像補間装置及び画像補間方法
JP4446818B2 (ja) * 2004-07-06 2010-04-07 株式会社メガチップス 画素補間方法
JP4352331B2 (ja) * 2004-09-09 2009-10-28 富士フイルム株式会社 信号処理装置、信号処理方法及び信号処理プログラム

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6714242B1 (en) * 1997-12-08 2004-03-30 Sony Corporation Image processing apparatus, image processing method, and camera
US20030218680A1 (en) * 2002-04-26 2003-11-27 Ryuichi Shiohara Color area sensor and image pick up circuit
US20040105015A1 (en) * 2002-07-12 2004-06-03 Olympus Optical Co., Ltd. Image processing device and image processing program
US20050058361A1 (en) * 2003-09-12 2005-03-17 Canon Kabushiki Kaisha Image processing apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2006137419A1 *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9431444B2 (en) 2011-02-21 2016-08-30 Fujifilm Corporation Single-plate color imaging element including color filters arranged on pixels
EP2683167A1 (de) * 2011-02-28 2014-01-08 Fujifilm Corporation Farbbildformungsvorrichtung
EP2683167A4 (de) * 2011-02-28 2014-11-26 Fujifilm Corp Farbbildformungsvorrichtung
RU2551649C2 (ru) * 2011-02-28 2015-05-27 Фуджифилм Корпорэйшн Устройство формирования цветного изображения
US9288454B2 (en) 2011-02-28 2016-03-15 Fujifilm Corporation Color imaging apparatus having color imaging element
US9313466B2 (en) 2011-03-09 2016-04-12 Fujifilm Corporation Color imaging element
US9204020B2 (en) 2011-12-27 2015-12-01 Fujifilm Corporation Color imaging apparatus having color imaging element
US9143760B2 (en) 2011-12-27 2015-09-22 Fujifilm Corporation Solid-state imaging device
US9100558B2 (en) 2011-12-27 2015-08-04 Fujifilm Corporation Color imaging element and imaging apparatus
US9325957B2 (en) 2011-12-28 2016-04-26 Fujifilm Corporation Image processing device, method, recording medium and imaging device
US9167183B2 (en) 2011-12-28 2015-10-20 Fujifilm Corporation Imaging element and imaging apparatus employing phase difference detection pixels pairs
US9325954B2 (en) 2011-12-28 2016-04-26 Fujifilm Corporation Color imaging element having phase difference detection pixels and imaging apparatus equipped with the same
US9060159B2 (en) 2011-12-28 2015-06-16 Fujifilm Corporation Image processing device and method, and imaging device
US9332199B2 (en) 2012-06-07 2016-05-03 Fujifilm Corporation Imaging device, image processing device, and image processing method
US9237319B2 (en) 2012-06-19 2016-01-12 Fujifilm Corporation Imaging device and automatic focus adjustment method
US9184195B2 (en) 2012-07-06 2015-11-10 Fujifilm Corporation Color imaging element and imaging device
US9159758B2 (en) 2012-07-06 2015-10-13 Fujifilm Corporation Color imaging element and imaging device
US9210387B2 (en) 2012-07-06 2015-12-08 Fujifilm Corporation Color imaging element and imaging device
US9270955B2 (en) 2012-07-06 2016-02-23 Fujifilm Corporation Imaging apparatus that generates three-layer color data on the basis of a first mosaic image
US9143747B2 (en) 2012-07-06 2015-09-22 Fujifilm Corporation Color imaging element and imaging device
US9172927B2 (en) 2012-07-06 2015-10-27 Fujifilm Corporation Imaging device, and image processing method
US9172926B2 (en) 2012-07-06 2015-10-27 Fujifilm Corporation Imaging device, and image processing method
US9219894B2 (en) 2012-07-06 2015-12-22 Fujifilm Corporation Color imaging element and imaging device
US9148560B2 (en) 2012-07-06 2015-09-29 Fujifilm Corporation Imaging device, and image processing method
US8922683B2 (en) 2012-07-06 2014-12-30 Fujifilm Corporation Color imaging element and imaging apparatus
US9363493B2 (en) 2012-08-27 2016-06-07 Fujifilm Corporation Image processing apparatus, method, recording medium and image pickup apparatus
US9380230B2 (en) 2012-12-05 2016-06-28 Fujifilm Corporation Image capture device, anomalous oblique incident light detection method, and recording medium
US9369686B2 (en) 2012-12-07 2016-06-14 Fujifilm Corporation Image processing device, image processing method, and recording medium
US9883152B2 (en) 2015-05-11 2018-01-30 Canon Kabushiki Kaisha Imaging apparatus, imaging system, and signal processing method
US10021358B2 (en) 2015-05-11 2018-07-10 Canon Kabushiki Kaisha Imaging apparatus, imaging system, and signal processing method
EP3093819B1 (de) * 2015-05-11 2019-05-08 Canon Kabushiki Kaisha Bildgebungsvorrichtung, bildgebungsverfahren und signalverarbeitungsverfahren

Also Published As

Publication number Publication date
CN101006731B (zh) 2012-03-21
US20070013786A1 (en) 2007-01-18
CN101006731A (zh) 2007-07-25
CN102256141B (zh) 2014-12-17
KR20080016984A (ko) 2008-02-25
JP2007037104A (ja) 2007-02-08
WO2006137419A1 (ja) 2006-12-28
KR101253760B1 (ko) 2013-04-12
CN101924947B (zh) 2011-12-28
EP1793620A4 (de) 2012-04-18
CN101924947A (zh) 2010-12-22
CN102256140A (zh) 2011-11-23
US7982781B2 (en) 2011-07-19
CN102256141A (zh) 2011-11-23
JP5151075B2 (ja) 2013-02-27

Similar Documents

Publication Publication Date Title
EP1793620A1 (de) Bildverarbeitungsvorrichtung und -verfahren, abbildungsvorrichtung und computerprogramm
JP3735867B2 (ja) 輝度信号生成装置
JP2931520B2 (ja) 単板式カラービデオカメラの色分離回路
US6847397B1 (en) Solid-state image sensor having pixels shifted and complementary-color filter and signal processing method therefor
JP5036421B2 (ja) 画像処理装置、画像処理方法、プログラムおよび撮像装置
TWI386049B (zh) A solid-state imaging device, and a device using the solid-state imaging device
US20130308022A1 (en) Mosaic image processing method
KR20090087811A (ko) 촬상 장치, 화상 처리 장치, 화상 처리 방법, 화상 처리방법의 프로그램 및 화상 처리 방법의 프로그램을 기록한기록 매체
JP2001268582A (ja) 固体撮像装置および信号処理方法
US8520099B2 (en) Imaging apparatus, integrated circuit, and image processing method
JP2733859B2 (ja) カラー撮像装置
WO2007145087A1 (ja) 撮像装置及び信号処理方法
JP4305071B2 (ja) 信号補正方法
JP4635769B2 (ja) 画像処理装置、画像処理方法および撮像装置
JP5291788B2 (ja) 撮像装置
JP4687454B2 (ja) 画像処理装置および撮像装置
JP4385748B2 (ja) 画像信号処理装置及び画像信号処理方法
JP4329485B2 (ja) 画像信号処理装置及び画像信号処理方法
JP4962293B2 (ja) 画像処理装置、画像処理方法、プログラム
JP2012124857A (ja) ノイズ除去システムおよび撮像装置
JP5056927B2 (ja) 画像処理装置、画像処理方法および撮像装置
Lin et al. Resolution characterization for digital still cameras
JP2003143613A (ja) 撮像装置
JP2017175500A (ja) カラー画像撮像方法、カラー画像補間処理方法および撮像装置
JP2001177767A (ja) 画像データ・フィルタリング装置および方法

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070213

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK YU

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20120319

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 9/07 20060101AFI20120313BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20140103