WO2009014051A1 - Video processing device and video processing program - Google Patents

Video processing device and video processing program

Info

Publication number
WO2009014051A1
WO2009014051A1 (PCT/JP2008/062872)
Authority
WO
WIPO (PCT)
Prior art keywords
noise
color
signal
unit
luminance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2008/062872
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
Takao Tsuruoka
Current Assignee
Olympus Corp
Original Assignee
Olympus Corp
Priority date
Filing date
Publication date
Application filed by Olympus Corp filed Critical Olympus Corp
Publication of WO2009014051A1 publication Critical patent/WO2009014051A1/ja
Priority to US12/690,332 priority Critical patent/US8223226B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/77 Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/843 Demosaicing, e.g. interpolating colour pixel values
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N 25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N 25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N 25/134 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N 25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N 25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N 25/135 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements
    • H04N 25/136 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements using complementary colours
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/646 Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters

Definitions

  • The present invention relates to the reduction of random noise caused by an imaging system, and in particular makes high-accuracy reduction of the color noise component possible by estimating the amount of color noise.
  • An image signal obtained from an imaging system comprising an image sensor, its associated analog circuitry, an A/D converter, and the like generally contains a noise component.
  • This noise component can be broadly classified into fixed pattern noise and random noise.
  • Fixed pattern noise is noise caused mainly by the image sensor, as represented by defective pixels and the like.
  • Random noise is generated in the image sensor and its analog circuitry, and has characteristics close to those of white noise.
  • In one known method, a video signal is separated into a luminance signal and color signals, edges are determined from the luminance signal and the color signals, and smoothing (noise reduction) processing is performed on the regions other than the edge portions.
  • In another known example, a video signal is separated into a luminance signal and color signals, and the amount of luminance noise and the amount of color noise are estimated in units of predetermined regions. This makes it possible to perform noise reduction adapted to each region.
  • In yet another known example, a motion component is detected from a video signal, and limit values and filter coefficients are controlled based on the detected motion component. This makes it possible to perform noise reduction processing in which side effects such as afterimages caused by motion components are suppressed.
  • An object of the present invention is to obtain a high-quality video signal by performing high-accuracy color noise reduction.
  • A video processing device according to an aspect of the present invention performs noise reduction processing on a video signal captured from an imaging system, and comprises: a division/extraction unit that separates the video signal into a luminance signal and color signals and then sequentially extracts regions of a predetermined size; a representative luminance calculation unit that calculates a representative luminance value of each region based on the luminance signal of the region extracted by the division/extraction unit; a representative hue calculation unit that calculates a representative hue value of the region based on the color signals of the region; a color noise estimation unit that estimates the amount of color noise based on the representative luminance value calculated by the representative luminance calculation unit and the representative hue value calculated by the representative hue calculation unit; and a color noise reduction unit that performs color noise reduction processing on the color signals of the region based on the amount of color noise estimated by the color noise estimation unit.
  • A video processing device according to another aspect of the present invention performs noise reduction processing on video signals captured in time series from an imaging system, and comprises: a division/extraction unit that separates each video signal into a luminance signal and color signals and sequentially extracts regions of a predetermined size; a representative luminance calculation unit that calculates a representative luminance value of the region based on the luminance signal of the region extracted by the division/extraction unit; a representative hue calculation unit that calculates a representative hue value of the region based on the color signals of the region; a signal calculation unit that calculates a signal from the color signals of a past region on which noise reduction processing has already been performed; and a color noise estimation unit that estimates the amount of color noise based on the representative luminance value and the representative hue value calculated by the representative hue calculation unit.
  • A program according to another aspect of the present invention performs noise reduction processing on a video signal captured from an imaging system, and includes separating the video signal into a luminance signal and color signals.
  • A video processing program according to still another aspect performs noise reduction processing on video signals captured in time series from an imaging system, and includes: a division/extraction step of separating the video signal into a luminance signal and color signals and sequentially extracting regions of a predetermined size; a representative luminance calculation step of calculating a representative luminance value of each region extracted in the division/extraction step, based on the luminance signal of the region; a representative hue calculation step of calculating a representative hue value of the region based on the color signals of the region; and a color noise estimation step of estimating the amount of color noise based on the representative luminance value and the representative hue value.
  • According to the present invention, a representative luminance value and a representative hue value are obtained for each region of a predetermined size, the amount of color noise is estimated from them, and color noise reduction is performed based on the estimated amount. High-accuracy color noise reduction is therefore possible. Brief description of the drawings
  • FIG. 1 is a diagram for explaining the configuration of the video processing device in the first embodiment.
  • FIG. 2A is a diagram showing the configuration of a Bayer-type primary color filter.
  • FIG. 2B is an explanatory diagram regarding separation of the luminance signal and color signals and extraction of regions.
  • FIG. 3 is a diagram for explaining the four hue regions.
  • FIG. 4 is a block diagram of the color noise estimation unit.
  • FIG. 5A is a diagram showing the relationship of the amount of color noise to the signal level.
  • FIG. 5B is a diagram plotting the amount of color noise in the four hue regions of 45°, 135°, 225°, and 315°.
  • FIG. 5C is a diagram for explaining the simplification of the color noise model.
  • FIG. 5D is a diagram showing how the amount of color noise CNs is calculated from the simplified color noise model.
  • FIG. 6 is a block diagram of another configuration of the color noise estimation unit.
  • FIG. 7 is a block diagram of the color noise reduction unit.
  • FIG. 8A is a diagram showing another example configuration of the color noise reduction unit.
  • FIG. 8B is a diagram showing an example of the filter coefficients recorded in the filter ROM.
  • FIG. 8C is a diagram showing an example of the relationship between the amount of color noise CNs and the filter type.
  • FIG. 9 is a block diagram of the luminance noise estimation unit.
  • FIG. 10 is a block diagram of another configuration of the luminance noise estimation unit.
  • FIG. 11 is a block diagram of the luminance noise reduction unit.
  • FIG. 12 is a block diagram of another configuration of the luminance noise reduction unit.
  • FIG. 13A is a diagram showing the configuration of a complementary color filter.
  • FIG. 13B is an explanatory diagram regarding separation of the luminance signal and color signals and extraction of regions.
  • FIG. 14 is a diagram showing another configuration of the video processing device in the first embodiment.
  • FIG. 15A shows the overall flow of the signal processing in the first embodiment.
  • FIG. 15B shows the flow of the color noise estimation processing.
  • FIG. 15C shows the flow of the color noise reduction processing.
  • FIG. 15D shows the flow of the luminance noise estimation processing.
  • FIG. 15E shows the flow of the luminance noise reduction processing.
  • FIG. 16 is a diagram for explaining the configuration of the video processing device in the second embodiment.
  • FIG. 17A is a diagram showing the configuration of a color-difference line-sequential complementary color filter.
  • FIG. 17B is a diagram showing an example in which the luminance signal Y and the color signals Cb and Cr are extracted from the even field signal.
  • FIG. 17C is a diagram showing an example in which the luminance signal Y and the color signals Cb and Cr are extracted from the odd field signal.
  • FIG. 18 is a block diagram of the color noise estimation unit in the second embodiment.
  • FIG. 19A is a diagram showing the regions of red (R), magenta (Ma), blue (B), cyan (Cy), green (G), and yellow (Ye) in the CrCb plane defined by the color signals Cr and Cb.
  • FIG. 19B is a diagram for explaining the estimation of the amount of color noise.
  • FIG. 20 is a block diagram of the color noise reduction unit in the second embodiment.
  • FIG. 21 is a block diagram of another configuration of the color noise reduction unit in the second embodiment.
  • FIG. 22 is a block diagram of the luminance noise reduction unit in the second embodiment.
  • FIG. 24A shows the overall flow of the signal processing in the second embodiment.
  • FIG. 24B shows the flow of the color noise estimation processing in the second embodiment.
  • FIG. 24C shows the flow of the color noise reduction processing in the second embodiment.
  • FIG. 24D shows the flow of the luminance noise reduction processing in the second embodiment.
  • FIG. 1 is a block diagram of the first embodiment of the present invention.
  • The video signal captured through the lens 100, the aperture 101, and the CCD 102 is amplified by an amplifier ("Gain" in the figure) 104, converted into a digital signal by an A/D converter ("A/D" in the figure) 105, and transferred to the buffer 106.
  • The buffer 106 is also connected to a pre-white balance adjustment unit ("PreWB" in the figure) 107, a photometric evaluation unit 108, and a focus detection unit 109.
  • The pre-white balance adjustment unit 107 is connected to the amplifier 104, and the photometric evaluation unit 108 is connected to the aperture 101, the CCD 102, and the amplifier 104.
  • The focus detection unit 109 is connected to the AF motor 110.
  • The division/extraction unit 111 is connected to the representative hue calculation unit 112, the representative luminance calculation unit 113, the color noise reduction unit 115, and the luminance noise reduction unit 117.
  • The representative hue calculation unit 112 is connected to the buffer 118 via the color noise estimation unit 114 and the color noise reduction unit 115.
  • The representative luminance calculation unit 113 is connected to the color noise estimation unit 114, the luminance noise estimation unit 116, and the luminance noise reduction unit 117.
  • The luminance noise estimation unit 116 is connected to the buffer 118 via the luminance noise reduction unit 117.
  • The buffer 118 is connected to the signal processing unit 119, and the signal processing unit 119 is connected to the output unit 120 such as a memory card.
  • The control unit 121, such as a microcomputer, is bidirectionally connected to the amplifier 104, the A/D converter 105, the pre-white balance adjustment unit 107, the photometric evaluation unit 108, the focus detection unit 109, the division/extraction unit 111, the representative hue calculation unit 112, the representative luminance calculation unit 113, the color noise estimation unit 114, the color noise reduction unit 115, the luminance noise estimation unit 116, the luminance noise reduction unit 117, the signal processing unit 119, and the output unit 120.
  • An external I/F unit 122, provided with an interface including a power switch, a shutter button, and settings for switching various modes, is also bidirectionally connected to the control unit 121.
  • The signal from the temperature sensor 103 disposed in the vicinity of the CCD 102 is connected to the control unit 121.
  • The flow of the video signal will be described with reference to FIG. 1. After the imaging conditions such as the ISO sensitivity are set via the external I/F unit 122, the pre-imaging mode is entered by pressing the shutter button. The video signal captured via the lens 100, the aperture 101, and the CCD 102 is output as an analog signal.
  • In this embodiment, the CCD 102 is assumed to be a single CCD having a Bayer-type primary color filter disposed on its front surface.
  • FIG. 2A shows the configuration of the Bayer-type color filter. The Bayer type uses 2×2 pixels as its basic unit, in which one red (R) filter, one blue (B) filter, and two green (G) filters are arranged.
  • The analog signal is amplified by a predetermined amount by the amplifier 104, converted into a digital signal by the A/D converter 105, and transferred to the buffer 106.
  • The video signal in the buffer 106 is transferred to the pre-white balance adjustment unit 107, the photometric evaluation unit 108, and the focus detection unit 109 based on the control of the control unit 121.
  • The pre-white balance adjustment unit 107 calculates a simple white balance coefficient by integrating, for each color signal corresponding to the color filter, the signals at a predetermined level. The coefficients are transferred to the amplifier 104, and white balance is performed by multiplying each color signal by a different gain.
  • The photometric evaluation unit 108 takes into account the set ISO sensitivity, the shutter speed limit for camera shake, and the like, and controls the aperture 101, the electronic shutter speed of the CCD 102, and the like so as to obtain a proper exposure.
  • The focus detection unit 109 detects the edge intensity in the video signal, and obtains an in-focus signal by controlling the AF motor 110 so as to maximize that intensity.
  • The division/extraction unit 111 separates the video signal into a luminance signal and color signals based on the control of the control unit 121, and thereafter sequentially extracts the regions on which noise reduction processing is to be performed.
  • In this embodiment, the luminance signal Y and the color signals Cb and Cr are calculated in units of 2×2 pixels of the Bayer-type color filter. For example, for the 2×2 pixels R00, G10, G01, and B11, the luminance signal Y00 and the color signals Cb00 and Cr00 are calculated as shown in equation (1).
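As an illustration of this per-block separation, the sketch below computes Y, Cb, and Cr for one 2×2 Bayer block. Equation (1) itself is not reproduced in this extract, so the concrete formula used here (Y as the mean of the four samples, Cb and Cr as the blue and red differences from Y) is an assumed common form, not necessarily the one claimed.

```python
def separate_2x2(r00, g10, g01, b11):
    """Compute luminance Y and color signals Cb, Cr for one 2x2 Bayer
    block. The exact equation (1) is not given in this extract; a
    common choice is assumed: Y is the mean of the four samples, and
    Cb/Cr are the blue and red differences from Y."""
    y = (r00 + g10 + g01 + b11) / 4.0
    cb = b11 - y
    cr = r00 - y
    return y, cb, cr
```

Applied to every 2×2 block of a region, this yields one Y, Cb, Cr triple per block, matching the region layout of FIG. 2B.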
  • The luminance signal Y of the region is transferred to the representative luminance calculation unit 113 and the luminance noise reduction unit 117, and the color signals Cb and Cr are transferred to the representative hue calculation unit 112 and the color noise reduction unit 115.
  • The representative hue calculation unit 112 obtains the averages AV_Cb and AV_Cr of the color signals Cb and Cr, as shown in equation (2).
  • The representative hue calculation unit 112 further obtains a representative hue value H of the region from the averages AV_Cb and AV_Cr of the color signals.
  • FIG. 3 shows four hue regions: the hue region of 0° or more and less than 90° (hereinafter referred to as the 45° hue region), the hue region of 90° or more and less than 180° (the 135° hue region), the hue region of 180° or more and less than 270° (the 225° hue region), and the hue region of 270° or more and less than 360° (the 315° hue region).
  • The representative hue calculation unit 112 classifies the region into one of the above four hue regions based on the signs of the averages AV_Cb and AV_Cr of the color signals, as shown in Table 1.
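Since Table 1 is not reproduced in this extract, the sketch below assumes the straightforward quadrant mapping: with the hue angle measured as atan2(AV_Cr, AV_Cb) in the CbCr plane, each combination of signs selects one of the four hue regions of FIG. 3.

```python
def hue_region(av_cb, av_cr):
    """Classify a region into one of the four hue regions from the
    signs of the color-signal averages AV_Cb and AV_Cr. The
    sign-to-region mapping is an assumed quadrant assignment."""
    if av_cb >= 0 and av_cr >= 0:
        return 45    # hue angle in [0, 90)
    if av_cb < 0 and av_cr >= 0:
        return 135   # hue angle in [90, 180)
    if av_cb < 0 and av_cr < 0:
        return 225   # hue angle in [180, 270)
    return 315       # hue angle in [270, 360)
```

Only the signs of the two averages are needed, so no trigonometric evaluation of the representative hue value is required for this classification.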
  • The representative luminance calculation unit 113 obtains the average AV_Y of the luminance signal as shown in equation (3), and uses it as the representative luminance value L of the region.
  • The representative luminance value L is transferred to the color noise estimation unit 114 and the luminance noise estimation unit 116.
  • The color noise estimation unit 114 estimates the amount of color noise CNs based on the representative hue value H from the representative hue calculation unit 112, the representative luminance value L from the representative luminance calculation unit 113, and the like.
  • Under the control of the control unit 121, the color noise reduction unit 115 performs color noise reduction processing on the color signals Cb and Cr of the region from the division/extraction unit 111, based on the amount of color noise CNs from the color noise estimation unit 114.
  • The color signals Cb' and Cr' after the color noise reduction processing are stored in the buffer 118.
  • Under the control of the control unit 121, the luminance noise estimation unit 116 estimates the amount of luminance noise based on the representative luminance value L from the representative luminance calculation unit 113 and the like, and transfers it to the luminance noise reduction unit 117.
  • Based on the control of the control unit 121 and the amount of luminance noise from the luminance noise estimation unit 116, the luminance noise reduction unit 117 performs luminance noise reduction processing on the luminance signal Y of the region from the division/extraction unit 111. The luminance signal Y' after the processing is stored in the buffer 118.
  • The processing in the division/extraction unit 111, the representative hue calculation unit 112, the representative luminance calculation unit 113, the color noise estimation unit 114, the color noise reduction unit 115, the luminance noise estimation unit 116, and the luminance noise reduction unit 117 is performed synchronously in units of regions, based on the control of the control unit 121.
  • When the processing has been completed for all regions of one video signal, the buffer 118 holds the luminance signal Y' after the luminance noise reduction processing and the color signals Cb' and Cr' after the color noise reduction processing for that video signal.
  • Based on the control of the control unit 121, the signal processing unit 119 converts the luminance signal Y' after luminance noise reduction and the color signals Cb' and Cr' after color noise reduction into the form of the original video signal (R, G, B signals). For example, from the luminance signal Y'00 after luminance noise reduction relating to the luminance signal Y00 shown in equation (1) and the corresponding color signals Cb'00 and Cr'00 after color noise reduction, the noise-reduced pixel values R'00, G'10, G'01, and B'11 are obtained as shown in equation (4).
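Equation (4) is likewise not reproduced here. The sketch below shows a consistent inverse under the assumption that the forward separation used Y as the block mean with Cb = B − Y and Cr = R − Y; under that assumption the two green samples of the block receive the same reconstructed value.

```python
def recombine_2x2(y, cb, cr):
    """Rebuild noise-reduced R'00, G'10, G'01, B'11 from Y', Cb', Cr'
    for one 2x2 block, assuming the forward transform
    Y = (R + G + G + B) / 4, Cb = B - Y, Cr = R - Y."""
    r00 = y + cr
    b11 = y + cb
    g = y - (cb + cr) / 2.0   # both green samples receive the same value
    return r00, g, g, b11
```

With these assumed transforms, recombining an unmodified (Y, Cb, Cr) triple restores the original block exactly, so any change in the output comes only from the noise reduction applied to Y', Cb', and Cr'.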
  • The signal processing unit 119 performs emphasis processing, tone processing, compression processing, and the like on the noise-reduced R', G', and B' signals, and transfers the result to the output unit 120.
  • The output unit 120 records the image signal on a recording medium such as a magnetic disk or a memory card.
  • FIG. 4 shows an example of the configuration of the color noise estimation unit 114.
  • It comprises a model selection unit 200, a parameter ROM 201, a gain calculation unit 202, a standard value assignment unit 203, a parameter selection unit 204, a noise interpolation unit 205, and a noise correction unit 206.
  • The representative hue calculation unit 112 and the parameter ROM 201 are connected to the model selection unit 200.
  • The representative luminance calculation unit 113, the model selection unit 200, the gain calculation unit 202, and the standard value assignment unit 203 are connected to the parameter selection unit 204.
  • The parameter selection unit 204 is connected to the noise interpolation unit 205 and the noise correction unit 206.
  • The noise interpolation unit 205 is connected to the noise correction unit 206, and the noise correction unit 206 is connected to the color noise reduction unit 115.
  • The control unit 121 is bidirectionally connected to the model selection unit 200, the gain calculation unit 202, the standard value assignment unit 203, the parameter selection unit 204, the noise interpolation unit 205, and the noise correction unit 206.
  • The model selection unit 200 reads the representative hue value H from the representative hue calculation unit 112 and selects, from the parameter ROM 201, the reference color noise model to be used for color noise estimation.
  • FIGS. 5A to 5D are explanatory diagrams of the reference color noise model.
  • FIG. 5A is a diagram showing a curve obtained by plotting the amount of color noise CN against the signal level L. As shown in FIG. 5A, the amount of color noise increases with the signal level L along a quadratic curve.
  • Modeling the curve shown in FIG. 5A with a quadratic function gives equation (5), CN = αL^2 + βL + γ, where α, β, and γ are constant terms.
  • However, the amount of color noise CN varies not only with the signal level but also with the temperature of the image sensor and the gain.
  • As an example, FIG. 5A plots the amounts of color noise at three ISO sensitivities (100, 200, and 400) related to the gain, at a certain temperature.
  • The amount of color noise CN also changes according to the hue region.
  • FIG. 5B plots the amount of color noise in the four hue regions of 45°, 135°, 225°, and 315°. The individual curves have the form shown in equation (5), but their coefficients differ depending on the ISO sensitivity related to the gain, the temperature, and the hue region. Denoting the gain by g, the temperature by t, and the hue region by θ, formulating the color noise model in consideration of the above gives equation (6), CN = α(g,t,θ)L^2 + β(g,t,θ)L + γ(g,t,θ).
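Equation (6) can be read as a family of quadratics indexed by gain, temperature, and hue region. The sketch below evaluates such a model; the coefficient table is a hypothetical stand-in for values that would in practice be measured and recorded per device.

```python
def color_noise_model(level, gain, temp, hue_region, coeffs):
    """Evaluate CN = a*L^2 + b*L + c with coefficients (a, b, c)
    selected per (gain, temperature, hue region), in the spirit of
    equation (6). `coeffs` is a hypothetical calibration table."""
    a, b, c = coeffs[(gain, temp, hue_region)]
    return a * level * level + b * level + c
```

For example, a single calibrated entry such as `coeffs = {(100, 25, 45): (0.001, 0.05, 2.0)}` lets the model return a noise amount for any signal level under that ISO/temperature/hue condition.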
  • Recording multiple such models and calculating the amount of noise from them each time is costly, so the model is simplified as shown in FIG. 5C. In a given hue region, the color noise model that gives the maximum amount of color noise is selected as the reference color noise model, and this model is approximated by a broken line with a predetermined number of segments.
  • Each inflection point of the broken line is represented by coordinate data (L_n, CN_n) consisting of the signal level L and the amount of color noise CN.
  • Here, n indicates the number of the inflection point.
  • A correction coefficient k_gt for deriving the other color noise models from the reference color noise model is also prepared.
  • The correction coefficient k_gt is calculated by the least squares method between each color noise model and the reference color noise model.
  • Another color noise model is derived by multiplying the reference color noise model by the correction coefficient k_gt.
  • FIG. 5D shows how the amount of color noise CNs is calculated from the simplified color noise model shown in FIG. 5C.
  • For example, assume that the amount of color noise corresponding to a given signal level l, gain g, and temperature t is to be obtained. First, the section of the reference color noise model to which the signal level l belongs is searched for, and the reference amount of color noise CN_l is obtained by linear interpolation between the inflection points (L_n, CN_n) and (L_n+1, CN_n+1) of that section, as shown in equation (8):

    CN_l = (CN_n+1 - CN_n) / (L_n+1 - L_n) × (l - L_n) + CN_n    (8)

  • The amount of color noise CNs is then obtained by multiplying the interpolated amount CN_l by the correction coefficient k_gt, as shown in equation (9):

    CNs = k_gt × CN_l    (9)
  • The parameter ROM 201 records the coordinate data (L_n, CN_n) and the correction coefficients k_gt of the reference color noise models corresponding to the plurality of hue regions θ.
  • The model selection unit 200 determines the hue region θ based on the representative hue value H of the region from the representative hue calculation unit 112, and reads from the parameter ROM 201 the reference color noise model and the correction coefficients corresponding to the determined hue region θ.
  • The reference color noise model and the correction coefficients are transferred to the parameter selection unit 204.
  • The gain calculation unit 202 obtains the amount of amplification (the gain g) at the amplifier 104 based on information such as the ISO sensitivity transferred from the control unit 121. The control unit 121 also obtains the temperature t of the CCD 102 from the temperature sensor 103. These are transferred to the parameter selection unit 204.
  • The parameter selection unit 204 sets the signal level l from the representative luminance value L of the representative luminance calculation unit 113, the gain g from the gain calculation unit 202, and the temperature t from the control unit 121. When any of these parameters cannot be obtained, a standard value given by the standard value assignment unit 203 is used instead.
  • The noise interpolation unit 205 searches for the section of the reference color noise model to which the signal level l from the parameter selection unit 204 belongs, calculates the amount of color noise CN_l in the reference color noise model from the coordinate data (L_n, CN_n) and (L_n+1, CN_n+1) of that section based on equation (8), and transfers it to the noise correction unit 206.
  • The noise correction unit 206 calculates the amount of color noise CNs based on equation (9), from the correction coefficient k_gt from the parameter selection unit 204 and the amount of color noise CN_l from the noise interpolation unit 205.
  • The calculated amount of color noise CNs is transferred to the color noise reduction unit 115.
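The interpolation and correction steps of equations (8) and (9) can be sketched as follows; the inflection-point list and correction coefficient in the usage example are illustrative values, not the ones recorded in the actual parameter ROM.

```python
import bisect

def estimate_color_noise(level, k_gt, model):
    """Estimate the amount of color noise CNs from the broken-line
    reference color noise model. `model` is a list of inflection
    points (L_n, CN_n) sorted by signal level; k_gt is the correction
    coefficient for the current gain and temperature."""
    levels = [p[0] for p in model]
    # Locate the section [L_n, L_n+1] containing the signal level.
    n = max(0, min(bisect.bisect_right(levels, level) - 1, len(model) - 2))
    (l_n, cn_n), (l_n1, cn_n1) = model[n], model[n + 1]
    # Equation (8): linear interpolation between the inflection points.
    cn_l = (cn_n1 - cn_n) / (l_n1 - l_n) * (level - l_n) + cn_n
    # Equation (9): scale the reference amount by the correction coefficient.
    return k_gt * cn_l
```

With, say, `model = [(0, 1.0), (100, 3.0), (200, 4.0)]`, a signal level of 50 interpolates to a reference amount of 2.0, which is then scaled by k_gt.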
  • FIG. 6 shows an example of another configuration of the color noise estimation unit 114 shown in FIG. 4.
  • In this configuration, the noise correction unit 206 and the other model-based components are omitted, and a noise table unit 207 is provided instead. Since the configuration is otherwise equivalent to the color noise estimation unit 114 shown in FIG. 4, the same components are assigned the same names and numbers. Only the differences will be explained below.
  • The representative hue calculation unit 112, the representative luminance calculation unit 113, the gain calculation unit 202, and the standard value assignment unit 203 are connected to the noise table unit 207.
  • The noise table unit 207 is connected to the color noise reduction unit 115.
  • The control unit 121 is bidirectionally connected to the noise table unit 207.
  • Based on the control of the control unit 121, the noise table unit 207 reads the representative hue value of the region from the representative hue calculation unit 112, the representative luminance value of the region from the representative luminance calculation unit 113, the gain g from the gain calculation unit 202, and the temperature t from the control unit 121.
  • The noise table unit 207 holds a lookup table in which the relationship between the signal level, the gain, the temperature, and the hue region and the amount of noise is recorded. This lookup table is constructed by the same method as that used in the configuration shown in FIG. 4 for calculating the amount of color noise.
  • The amount of color noise CNs obtained by the noise table unit 207 is transferred to the color noise reduction unit 115.
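The table-based variant can be sketched as a precomputation over quantized parameters; `model_eval` below is a hypothetical stand-in for the model-based calculation of FIG. 4, and the quantization grids are illustrative.

```python
def build_noise_lut(model_eval, levels, gains, temps, hues):
    """Precompute a lookup table equivalent to the model-based path:
    the amount of color noise is evaluated once for every quantized
    (signal level, gain, temperature, hue region) combination, and the
    table is then indexed directly at run time, trading ROM space for
    per-region computation."""
    return {
        (l, g, t, h): model_eval(l, g, t, h)
        for l in levels for g in gains for t in temps for h in hues
    }
```

At run time the unit only rounds its four inputs to the nearest grid points and reads the table, which is why the model selection, interpolation, and correction stages of FIG. 4 can be omitted in FIG. 6.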
  • FIG. 7 shows an example of the configuration of the color noise reduction unit 115, which comprises an average color calculation unit 300 and a coring unit 301.
  • The division/extraction unit 111 is connected to the average color calculation unit 300 and the coring unit 301.
  • The color noise estimation unit 114 and the average color calculation unit 300 are connected to the coring unit 301.
  • The coring unit 301 is connected to the buffer 118.
  • The control unit 121 is bidirectionally connected to the average color calculation unit 300 and the coring unit 301.
  • The average color calculation unit 300 reads the color signals Cb and Cr from the division/extraction unit 111 based on the control of the control unit 121. It then calculates the averages AV_Cb and AV_Cr of the color signals and transfers them to the coring unit 301. Based on the control of the control unit 121, the coring unit 301 reads the color signals Cb and Cr from the division/extraction unit 111, the averages AV_Cb and AV_Cr from the average color calculation unit 300, and the amount of color noise CNs from the color noise estimation unit 114. After this, the coring shown in equations (10) and (11) is performed to obtain the color signals Cb' and Cr' on which color noise reduction has been performed.
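Equations (10) and (11) are not reproduced in this extract; the sketch below assumes the coring form commonly used with an estimated noise amount: samples within ±CNs of the region average are flattened to the average, and samples outside that band are pulled toward it by CNs. The same function would apply to Cb (equation (10)) and Cr (equation (11)).

```python
def coring(c, av_c, cn_s):
    """Core one color sample c against the region average av_c using
    the estimated noise amount cn_s (an assumed common form of
    equations (10)/(11), not the claimed one):
      - inside the +/-cn_s band: replace with the average,
      - outside the band: shrink toward the average by cn_s."""
    if c > av_c + cn_s:
        return c - cn_s
    if c < av_c - cn_s:
        return c + cn_s
    return av_c
```

This keeps large chroma excursions (likely real color edges) while removing fluctuations of the order of the estimated color noise amount.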
  • The configuration for performing the color noise reduction processing is not necessarily limited to one based on coring.
  • An example of performing color noise reduction by filtering processing using a low-pass filter will be described with reference to FIGS. 8A to 8C.
  • FIG. 8A shows an example of another configuration of the color noise reduction unit 115.
  • The pixels subjected to color noise reduction are the 2×2 pixels shown in FIG. 2B, but for the filtering processing, 4×4 pixels including the surrounding pixels are input.
  • In this configuration, the color noise reduction unit 115 comprises a filter ROM 302, a filter selection unit 303, and a filtering unit 304.
  • The color noise estimation unit 114 and the filter ROM 302 are connected to the filter selection unit 303.
  • The division/extraction unit 111 and the filter selection unit 303 are connected to the filtering unit 304.
  • The filtering unit 304 is connected to the buffer 118.
  • The control unit 121 is bidirectionally connected to the filter selection unit 303 and the filtering unit 304.
• Based on the control of the control unit 121, the filter selection unit 303 reads the color noise amount CNs from the color noise estimation unit 114. Then, based on the color noise amount CNs, it selects and reads from the coefficient ROM 302 the filter coefficients to be used for low-pass filter processing.
• FIG. 8B shows an example of the filter coefficients recorded in the coefficient ROM 302; the size is 3×3 pixels, and four frequency characteristics, Type1 to Type4, are provided.
• Type1 leaves the high-frequency components, and the frequency characteristics suppress the high-frequency components progressively in the order Type1, Type2, Type3, Type4.
• The filter selection unit 303 selects one of the frequency characteristics Type1 to Type4 based on the color noise amount CNs. This selection is performed, for example, according to the relationship between the color noise amount CNs and the filter types shown in FIG. 8C: the larger the color noise amount, the more strongly the selected frequency characteristic suppresses high-frequency components.
• The selected filter coefficients are transferred to the filtering unit 304.
• Based on the control of the control unit 121, the filtering unit 304 reads the color signals Cb and Cr and the surrounding pixels from the separation/extraction unit 111, and performs filtering using the filter coefficients from the filter selection unit 303.
• The color signals Cb' and Cr' whose color noise has been reduced are stored in the buffer 118.
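The selection of a stronger low-pass characteristic for a larger noise amount can be sketched as follows; the kernel coefficients and the thresholds are illustrative assumptions, not the actual contents of the coefficient ROM 302 or the actual FIG. 8C mapping:

```python
# Hypothetical 3x3 low-pass kernels: Type1 keeps the most high-frequency
# content, Type4 suppresses it the most.
KERNELS = {
    1: [[0, 1, 0], [1, 12, 1], [0, 1, 0]],   # weak smoothing
    2: [[1, 1, 1], [1, 8, 1], [1, 1, 1]],
    3: [[1, 2, 1], [2, 4, 2], [1, 2, 1]],
    4: [[1, 1, 1], [1, 1, 1], [1, 1, 1]],    # strong smoothing
}

def select_filter(color_noise_amount, thresholds=(2.0, 4.0, 6.0)):
    """Pick a stronger low-pass kernel as the estimated noise grows,
    mirroring the FIG. 8C mapping (threshold values are assumptions)."""
    for filter_type, threshold in enumerate(thresholds, start=1):
        if color_noise_amount < threshold:
            return KERNELS[filter_type]
    return KERNELS[4]
```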
• FIG. 9 shows an example of the configuration of the luminance noise estimation unit 116, which comprises a gain calculation unit 400, a standard value assignment unit 401, a parameter ROM 402, a parameter selection unit 403, a noise interpolation unit 404, and a noise correction unit 405.
• The parameter selection unit 403 is connected to the noise interpolation unit 404 and the noise correction unit 405.
• The noise interpolation unit 404 is connected to the noise correction unit 405, and the noise correction unit 405 is connected to the luminance noise reduction unit 117.
• The control unit 121 is bi-directionally connected to the gain calculation unit 400, the standard value assignment unit 401, the parameter selection unit 403, the noise interpolation unit 404, and the noise correction unit 405.
• The parameter selection unit 403 reads the representative luminance value L from the representative luminance calculation unit 113 under the control of the control unit 121.
• The gain calculation unit 400 obtains the amplification amount in the amplifier 104 based on the information on the ISO sensitivity and the exposure conditions transferred from the control unit 121, and transfers it to the parameter selection unit 403. Further, the control unit 121 obtains the temperature value of the CCD 102 from the temperature sensor 103 and sends it to the parameter selection unit 403.
• The parameter selection unit 403 estimates the luminance noise amount LN based on the representative luminance value L from the representative luminance calculation unit 113, the gain information from the gain calculation unit 400, and the temperature information from the control unit 121. The estimation of the luminance noise amount LN is performed as follows.
• The luminance noise amount is modeled by equation (13): LN = αgt·l^βgt + γgt (13), where αgt, βgt, and γgt are constant terms determined by the gain g and the temperature t. As with the color noise, recording the functions of equation (13) and calculating the luminance noise amount by computation each time is complicated in terms of processing.
• Therefore, the luminance noise model that gives the maximum luminance noise amount is selected as the reference luminance noise model, and this model is approximated by a broken line with a predetermined number of segments.
• The inflection points of the broken line are represented by coordinate data (Ln, LNn) consisting of the signal level Ln and the luminance noise amount LNn.
• Here, n indicates the number of the inflection point.
• A correction coefficient kgt is also prepared for deriving another luminance noise model from the above reference luminance noise model.
• The correction coefficient kgt is calculated by the least-squares method between each luminance noise model and the reference luminance noise model.
• Another luminance noise model can be derived from the reference luminance noise model by multiplying the reference luminance noise model by the above correction coefficient kgt.
• The method of calculating the luminance noise amount from the simplified luminance noise model is shown below. For example, assume that the luminance noise amount LN corresponding to a given signal level l, gain g, and temperature t is to be determined. First, it is searched to which section of the reference luminance noise model the signal level l belongs.
• The reference luminance noise amount LNref in that section is obtained by the linear interpolation of equation (14): LNref = (LNn+1 − LNn) / (Ln+1 − Ln) × (l − Ln) + LNn (14)
• Next, the luminance noise amount LN is obtained by multiplying the calculated reference luminance noise amount LNref by the correction coefficient kgt, as in equation (15): LN = kgt × LNref (15)
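The interpolation of equation (14) and the correction of equation (15) can be sketched as follows; the model used here is an arbitrary example, not an actual reference luminance noise model, and `k_gt` is a placeholder name for the correction coefficient:

```python
def reference_noise(level, model):
    """Linearly interpolate the noise amount inside the broken-line
    reference model (equation (14)); `model` is a list of inflection
    points (L_n, LN_n) sorted by signal level."""
    for (l0, n0), (l1, n1) in zip(model, model[1:]):
        if l0 <= level <= l1:
            return (n1 - n0) / (l1 - l0) * (level - l0) + n0
    raise ValueError("signal level outside the model range")

def noise_amount(level, model, k_gt):
    """Scale the reference amount by the gain/temperature correction
    coefficient (equation (15))."""
    return k_gt * reference_noise(level, model)
```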
• The parameter ROM 402 records the coordinate data (Ln, LNn) and the correction coefficients kgt of the above reference luminance noise model.
• The parameter selection unit 403 sets the signal level l from the representative luminance value from the representative luminance calculation unit 113, the gain g from the gain information from the gain calculation unit 400, and the temperature t from the temperature information from the control unit 121.
• Based on the control of the control unit 121, the noise interpolation unit 404 calculates the reference luminance noise amount LNref in the reference luminance noise model based on equation (14), using the signal level l from the parameter selection unit 403 and the coordinate data (Ln, LNn) and (Ln+1, LNn+1) of the section, and transfers it to the noise correction unit 405. Based on the control of the control unit 121, the noise correction unit 405 calculates the luminance noise amount LN based on equation (15), using the correction coefficient kgt from the parameter selection unit 403 and the reference luminance noise amount LNref from the noise interpolation unit 404. The luminance noise amount LN is transferred to the luminance noise reduction unit 117.
• FIG. 10 shows an example of another configuration of the luminance noise estimation unit 116, in which the parameter selection unit 403, the noise interpolation unit 404, and the noise correction unit 405 of the configuration shown in FIG. 9 are omitted and a noise table unit 406 is added.
• The basic configuration is equivalent to the luminance noise estimation unit 116 shown in FIG. 9, and the same configurations are assigned the same names and numbers. Only the differences will be described below.
• The representative luminance calculation unit 113, the gain calculation unit 400, and the standard value assignment unit 401 are connected to the noise table unit 406.
• The noise table unit 406 is connected to the luminance noise reduction unit 117.
• The control unit 121 is bi-directionally connected to the noise table unit 406.
• Under the control of the control unit 121, the noise table unit 406 reads the representative luminance value of the area from the representative luminance calculation unit 113, the gain g from the gain calculation unit 400, and the temperature t from the control unit 121. The noise table unit 406 contains a look-up table that records the relationship between the signal level, gain, and temperature and the luminance noise amount; the look-up table is constructed by the same method as the calculation method shown in FIG. 9.
• The noise table unit 406 obtains the luminance noise amount LN by referring to the look-up table based on the representative luminance value, the gain g, and the temperature t.
• The luminance noise amount obtained by the noise table unit 406 is transferred to the luminance noise reduction unit 117.
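The look-up table trades memory for speed: the interpolation and correction of FIG. 9 are precomputed for quantized inputs, and estimation becomes a single lookup. A minimal sketch, where the model function and the quantization grids are placeholders:

```python
def build_noise_lut(model_fn, levels, gains, temps):
    """Precompute noise amounts for quantized (level, gain, temperature)
    triples; model_fn stands in for the interpolation/correction
    computation of FIG. 9."""
    return {(l, g, t): model_fn(l, g, t)
            for l in levels for g in gains for t in temps}

def lookup_noise(lut, level, gain, temp):
    """Estimation at run time is a single table reference."""
    return lut[(level, gain, temp)]
```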
• FIG. 11 shows an example of the configuration of the luminance noise reduction unit 117, which comprises a coring unit 500.
• The separation/extraction unit 111, the representative luminance calculation unit 113, and the luminance noise estimation unit 116 are connected to the coring unit 500.
• The coring unit 500 is connected to the buffer 118.
• The control unit 121 is bi-directionally connected to the coring unit 500.
• Under the control of the control unit 121, the coring unit 500 reads the luminance signal Y from the separation/extraction unit 111, the representative luminance value from the representative luminance calculation unit 113, and the luminance noise amount LN from the luminance noise estimation unit 116. After that, the coring processing shown in equation (16) is performed to obtain the luminance signal Y' which has been subjected to luminance noise reduction.
• The luminance signal Y' which has been subjected to luminance noise reduction is stored in the buffer 118.
• However, the configuration for luminance noise reduction is not limited to coring processing.
• A configuration that performs luminance noise reduction by filtering with a low-pass filter is also possible.
• FIG. 12 shows an example of another configuration of the luminance noise reduction unit 117.
• In the luminance noise reduction unit 117 shown in FIG. 12, although the target pixels for luminance noise reduction form a 2×2 block as shown in FIG. 2A, a 4×4 region including the surrounding pixels is input for the filtering process.
• The luminance noise reduction unit 117 includes a coefficient ROM 501, a filter selection unit 502, and a filtering unit 503.
• The luminance noise estimation unit 116 and the coefficient ROM 501 are connected to the filter selection unit 502.
• The separation/extraction unit 111 and the filter selection unit 502 are connected to the filtering unit 503.
• The filtering unit 503 is connected to the buffer 118.
• The control unit 121 is bi-directionally connected to the filter selection unit 502 and the filtering unit 503. In the configuration of the luminance noise reduction unit 117 shown in FIG. 12, the input from the representative luminance calculation unit 113 shown in FIG. 11 is omitted because it is not necessary.
• Based on the control of the control unit 121, the filter selection unit 502 reads the luminance noise amount LN from the luminance noise estimation unit 116. After that, based on the luminance noise amount LN, a filter coefficient to be used for low-pass filter processing is selected and read from the coefficient ROM 501. As the filter coefficients, coefficients such as those shown in FIG. 8B are used. This selection is performed, for example, in the same manner as the relationship between the color noise amount and the filter shown in FIG. 8C. The selected filter coefficients are sent to the filtering unit 503.
• Based on the control of the control unit 121, the filtering unit 503 reads the luminance signal Y and the surrounding pixels from the separation/extraction unit 111, and performs filtering using the filter coefficients from the filter selection unit 502. Luminance noise is suppressed by the filtering process, and the resulting luminance signal Y' is stored in the buffer 118.
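Applying the selected 3×3 kernel to the 2×2 target pixels inside the 4×4 input region can be sketched as follows; normalization by the kernel sum is an assumption, since the exact coefficient scaling is not given:

```python
def filter_target_pixels(region4x4, kernel3x3):
    """Apply a 3x3 low-pass kernel to the 2x2 target pixels at the centre
    of a 4x4 region (the surrounding pixels exist only to feed the
    filter), as in the FIG. 12 luminance path."""
    ksum = sum(sum(row) for row in kernel3x3)
    out = []
    for y in (1, 2):            # the 2x2 target block occupies rows/cols 1..2
        row = []
        for x in (1, 2):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += kernel3x3[dy + 1][dx + 1] * region4x4[y + dy][x + dx]
            row.append(acc / ksum)
        out.append(row)
    return out
```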
• As described above, the representative luminance value and the representative hue value are determined in predetermined area units, and the color noise amount is estimated based on the representative luminance value and the representative hue value.
• The above color noise amount estimation dynamically adapts to conditions that differ from shot to shot, and since a different color noise model is used for each hue region, the color noise amount can be estimated with high accuracy and stability. When interpolation is used to calculate the color noise amount, the implementation is easy and the cost of the system can be reduced. On the other hand, using a look-up table to calculate the color noise amount makes it possible to estimate the color noise amount quickly.
• Likewise, the luminance noise amount estimation dynamically adapts to conditions that differ from shot to shot, and since a model-based estimation is used, the luminance noise amount can be estimated with high accuracy and stability.
• When interpolation is used to calculate the luminance noise amount, the implementation is easy and the cost of the system can be reduced.
• Using a look-up table to calculate the luminance noise amount enables fast estimation of the luminance noise amount.
• Using coring processing for luminance noise reduction makes it possible to act only on the luminance noise components while maintaining consistency with signals other than luminance noise, such as strong edges, so a high-quality image signal is obtained.
• Using filtering for luminance noise reduction likewise makes it possible to act only on the luminance noise components and to obtain a high-quality image signal.
• Low-pass filter processing is relatively easy to implement and allows speeding up and cost reduction of the entire system.
• Using an image sensor with a Bayer-type primary color filter placed on its front has high affinity with the imaging systems of current products and can be combined with a wide variety of systems.
• In the above description, a Bayer-type primary color filter is used for the image sensor, but the configuration need not be limited to this.
• FIG. 13A shows the configuration of a color-difference line-sequential complementary color filter.
• The color-difference line-sequential arrangement uses 2×2 pixels as a basic unit, in which magenta (Mg), green (G), yellow (Ye), and cyan (Cy) are arranged one pixel each. However, the positions of Mg and G are reversed line by line.
• In the case of the color-difference line-sequential complementary color filter, the separation/extraction unit 111 calculates the luminance signal Y and the color signals Cb and Cr in 2×2 pixel units.
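The patent's own conversion equations are not reproduced in this text; one commonly used conversion from a 2×2 complementary-color block to a luminance signal and two color-difference signals looks as follows, where the exact weights are an assumption rather than the patent's own formula:

```python
def ycbcr_from_complementary(mg, g, ye, cy):
    """Illustrative conversion of one 2x2 complementary-color block
    (Mg, G, Ye, Cy) into luminance Y and color-difference signals
    Cb and Cr; the weights are typical, not taken from the patent."""
    y = (mg + g + ye + cy) / 4.0      # all four filters pass luminance
    cr = (mg + ye) - (g + cy)         # red-difference component
    cb = (mg + cy) - (g + ye)         # blue-difference component
    return y, cb, cr
```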
• In calculating the representative hue value, the representative hue calculation unit 112 uses the average of the color signals shown in equation (2), and in calculating the representative luminance value, the representative luminance calculation unit 113 uses the average of the luminance signal shown in equation (3).
• However, the configuration need not be limited to these.
• A configuration using low-frequency components obtained by low-pass filter processing, or a configuration using adaptive filtering such as a bilateral filter, is also possible.
• In that case, the calculation accuracy and stability of the representative hue value and the representative luminance value are improved, and it becomes possible to estimate the noise amount with higher accuracy.
• Furthermore, although the above description assumes a configuration integrated with an imaging unit including the lens system 100, the aperture 101, the CCD 102, the temperature sensor 103, the amplifier 104, the A/D converter 105, the pre-white balance adjustment unit 107, the photometric evaluation unit 108, the in-focus point detection unit 109, and the AF motor 110, the configuration need not be limited to this.
• It is also possible to process an image signal captured by a separate imaging unit and recorded on a storage medium as unprocessed raw data, with additional information such as the color filter of the CCD 102 and the exposure conditions recorded in a header portion.
• FIG. 14 shows a configuration in which the lens system 100, the aperture 101, the CCD 102, the temperature sensor 103, the amplifier 104, the A/D converter 105, the pre-white balance adjustment unit 107, the photometric evaluation unit 108, the in-focus point detection unit 109, and the AF motor 110 are omitted, and an input unit 600 and a header information analysis unit 601 are added.
• The basic configuration is the same as that shown in FIG. 1, and the same configurations are assigned the same names and numbers. Only the differences will be described below.
• The input unit 600 is connected to the buffer 106 and the header information analysis unit 601.
• The control unit 121 is bi-directionally connected to the input unit 600 and the header information analysis unit 601.
• When playback is started via an external I/F unit 122 such as a mouse or keyboard, the image signal and header information stored on the storage medium are read from the input unit 600.
• The image signal from the input unit 600 is transferred to the buffer 106, and the header information is transferred to the header information analysis unit 601.
• The header information analysis unit 601 extracts the information at the time of shooting from the header information and transfers it to the control unit 121.
• The subsequent processing is equivalent to that in the configuration shown in FIG. 1.
• Furthermore, although processing by hardware is assumed in the above description, the configuration need not be limited to this.
• For example, it is also possible to output the signal from the CCD 102 as unprocessed raw data, output additional information such as the color filter and the exposure conditions at the time of shooting from the control unit 121 as header information, and process them separately by software.
• FIG. 15A shows the flow of the software processing for causing a computer to perform the above signal processing. The processing of each step will be described below.
• In step S1, the image signal and header information such as the color filter and the exposure conditions at the time of shooting are read, and the process proceeds to step S2.
• In step S2, the image signal is separated into a luminance signal and color signals as shown in equation (1), extraction is performed sequentially in regions of a predetermined size, for example 2×2 pixels, and the process proceeds to step S3.
• In step S3, the average of the color signals shown in equation (2) is calculated and classified into the hue regions shown in Table 1 to obtain a representative hue value, and the process proceeds to step S4.
• In step S4, the average of the luminance signal shown in equation (3) is calculated to obtain a representative luminance value, and the process proceeds to step S5.
• In step S5, color noise amount estimation processing is performed. This processing is performed according to the flow of FIG. 15B described later.
• In step S6, color noise reduction processing is performed. This processing is carried out according to the flow of FIG. 15C described later.
• In step S7, luminance noise amount estimation processing is performed. This processing is performed according to the flow of FIG. 15D described later.
• In step S8, luminance noise reduction processing is performed. This processing is performed according to the flow of FIG. 15E described later.
• In step S9, the noise-reduced color signals and luminance signal are output, and the process proceeds to step S10.
• In step S10, it is determined whether the processing for all the regions in one image signal has been completed; if it is determined that the processing has not been completed, the process returns to step S2, and if completed, the process proceeds to step S11.
• In step S11, the signals are converted back into the signal format of the imaging system as shown in equation (4), known emphasis processing, tone processing, compression processing, and the like are performed, and the process proceeds to step S12.
• In step S12, the processed image signal is output, and the processing ends.
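The per-region loop of steps S2 to S10 can be sketched as follows; the four routines stand in for the flows of FIGS. 15B to 15E, and all function names are placeholders:

```python
def process_image(regions, estimate_color_noise, reduce_color_noise,
                  estimate_luminance_noise, reduce_luminance_noise):
    """Skeleton of the FIG. 15A flow: for each extracted region, estimate
    and reduce color noise (S5-S6), then estimate and reduce luminance
    noise (S7-S8), and collect the outputs (S9-S10). Signal conversion
    and enhancement (S11) are omitted from this sketch."""
    out = []
    for region in regions:
        cn = estimate_color_noise(region)            # S5
        region = reduce_color_noise(region, cn)      # S6
        ln = estimate_luminance_noise(region)        # S7
        region = reduce_luminance_noise(region, ln)  # S8
        out.append(region)                           # S9
    return out                                       # after S10 completes
```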
• FIG. 15B is a flow relating to the color noise amount estimation processing performed in step S5 of FIG. 15A. The processing of each step will be described below.
• In step S20, information such as the temperature and gain is set from the read header information; if a necessary parameter is missing from the header information, a predetermined standard value is assigned. The process then proceeds to step S21.
• In step S21, the reference color noise models and correction coefficients for all hue regions are read, and the process proceeds to step S22.
• In step S22, the reference color noise model and correction coefficient are selected based on the representative hue value, and the process proceeds to step S23.
• In step S23, the coordinate data of the section of the reference color noise model to which the representative luminance value belongs and the corresponding correction coefficient are selected, and the process proceeds to step S24.
• In step S24, the reference color noise amount is determined by the interpolation processing shown in equation (8), and the process proceeds to step S25.
• In step S25, the color noise amount is determined by the correction processing shown in equation (9), and the process proceeds to step S26.
• In step S26, the color noise amount is output, and the processing ends.
• FIG. 15C is a flow relating to the color noise reduction processing performed in step S6 of FIG. 15A. The processing of each step will be described below.
• In step S30, the color noise amount estimated in step S5 of FIG. 15A is input, and the process proceeds to step S31.
• In step S31, the averages of the color signals shown in equation (2) are calculated, and the process proceeds to step S32.
• In step S32, the coring processing shown in equations (10) and (11) is performed, and the process proceeds to step S33.
• In step S33, the color signals subjected to the color noise reduction processing are output, and the processing ends.
• FIG. 15D is a flow relating to the luminance noise amount estimation processing performed in step S7 of FIG. 15A. The processing of each step will be described below.
• In step S40, information such as the gain and temperature is set from the read header information; if a necessary parameter is missing from the header information, a predetermined standard value is assigned. The process then proceeds to step S41.
• In step S41, the reference luminance noise model and correction coefficient are read, and the process proceeds to step S42.
• In step S42, the coordinate data of the section of the reference luminance noise model to which the representative luminance value belongs and the corresponding correction coefficient are selected, and the process proceeds to step S43.
• In step S43, the reference luminance noise amount is obtained by the interpolation processing shown in equation (14), and the process proceeds to step S44.
• In step S44, the luminance noise amount is obtained by the correction processing shown in equation (15), and the process proceeds to step S45.
• FIG. 15E is a flow relating to the luminance noise reduction processing performed in step S8 of FIG. 15A. The processing of each step will be described below.
• In step S50, the luminance noise amount estimated in step S7 of FIG. 15A is input, and the process proceeds to step S51.
• In step S51, the representative luminance value is input, and the process proceeds to step S52.
• In step S52, the coring processing shown in equation (16) is performed, and the process proceeds to step S53.
• In step S53, the luminance signal subjected to luminance noise reduction is output, and the processing ends.
• In this way, the signal processing may be performed by software, and the same operations and effects as in the case of hardware processing can be obtained.
• FIG. 16 is a block diagram of the second embodiment.
• In the second embodiment, a difference calculation unit 700 is added to the configuration of the first embodiment shown in FIG. 1, the color noise estimation unit 114 is replaced with a color noise estimation unit 701, the color noise reduction unit 115 with a color noise reduction unit 702, and the luminance noise reduction unit 117 with a luminance noise reduction unit 703.
• The basic configuration is equivalent to that of the first embodiment, and the same configurations are assigned the same names and numbers.
• The separation/extraction unit 111 is connected to the representative hue calculation unit 112, the representative luminance calculation unit 113, the difference calculation unit 700, the color noise reduction unit 702, and the luminance noise reduction unit 703.
• The representative hue calculation unit 112 is connected to the buffer 118 via the color noise estimation unit 701 and the color noise reduction unit 702.
• The representative luminance calculation unit 113 is connected to the color noise estimation unit 701 and the luminance noise estimation unit 116.
• The luminance noise estimation unit 116 is connected to the luminance noise reduction unit 703.
• The luminance noise reduction unit 703 is connected to the buffer 118.
• The buffer 118 is connected to the signal processing unit 119 and the difference calculation unit 700.
• The difference calculation unit 700 is connected to the color noise reduction unit 702 and the luminance noise reduction unit 703.
• The control unit 121 is bi-directionally connected to the difference calculation unit 700, the color noise estimation unit 701, the color noise reduction unit 702, and the luminance noise reduction unit 703.
• FIG. 17A shows the configuration of the color-difference line-sequential complementary color filter.
• The color-difference line-sequential arrangement uses 2×2 pixels as a basic unit, in which cyan (Cy), magenta (Mg), yellow (Ye), and green (G) are arranged one pixel each. However, the positions of Mg and G are reversed line by line.
• As shown in FIG. 17A, the image signal from the CCD 102 is read out as the sum of upper and lower lines, and is composed of two field signals (an even field signal and an odd field signal) separated into even and odd lines.
• Here, 1/60 second (hereinafter referred to as one field time) is assumed as the predetermined time interval, but the interval is not limited to 1/60 second.
• One image signal is obtained by combining the even and odd field signals; this one image signal is referred to as a frame signal.
• The above frame signals are synthesized at intervals of 1/30 second.
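The synthesis of one frame signal from the two field signals amounts to interleaving their lines, which can be sketched as follows (each signal is represented as a list of scan lines; the function name is a placeholder):

```python
def merge_fields(even_field, odd_field):
    """Interleave the lines of the even and odd field signals into one
    frame signal, as in the 1/30-second frame synthesis described above."""
    frame = []
    for even_line, odd_line in zip(even_field, odd_field):
        frame.append(even_line)
        frame.append(odd_line)
    return frame
```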
• The analog signal from the CCD 102 is amplified by a predetermined amount by the amplifier 104, converted into a digital signal by the A/D converter 105, and transferred to the buffer 106.
• The buffer 106 can hold two field signals, that is, one frame signal, and the signals are overwritten in order as shooting proceeds.
• Under the control of the control unit 121, the frame signal in the buffer 106 is intermittently transferred to the pre-white balance adjustment unit 107, the photometric evaluation unit 108, and the in-focus point detection unit 109 at predetermined frame time intervals.
• Based on the control of the control unit 121, the separation/extraction unit 111 converts the even and odd field signals into a luminance signal Y and color signals Cb and Cr as shown in equation (17). After that, a target pixel to be used for subsequent noise reduction processing and a region of pixels located near the target pixel are sequentially extracted. In the present embodiment, 5×5 pixels are assumed as the region.
• In this case, the luminance signal Y is a 5×5 pixel signal, and the color signals Cb and Cr are 5×3 pixel or 5×2 pixel signals.
• FIGS. 17B and 17C show examples of the regions extracted from the even and odd field signals, respectively.
• FIG. 17B shows an example of extracting the luminance signal Y and the color signals Cb and Cr from the even field signal.
• Here, the color signal Cr is 5×3 pixels and the color signal Cb is 5×2 pixels.
• The target pixels for the noise reduction processing exist in the luminance signal Y and the color signal Cr, and there is no target pixel in the color signal Cb. Note that, depending on the position of the target pixel, the converse case also occurs, in which the color signal Cb has a target pixel and the color signal Cr does not.
• FIG. 17C shows an example of extracting the luminance signal Y and the color signals Cb and Cr from the odd field signal.
• Here, the color signal Cb is 5×3 pixels and the color signal Cr is 5×2 pixels.
• In this case, the target pixels for the noise reduction processing exist in the luminance signal Y and the color signal Cb, and there is no target pixel in the color signal Cr. Note that, depending on the position of the target pixel, the converse case also occurs, in which the color signal Cr has a target pixel and the color signal Cb does not.
• Hereinafter, the target pixel value of the field signal is denoted YT24 for the luminance signal and CrT25 or CbT25 for the color signal.
• The following description assumes the even field signal shown in FIG. 17B, with YT24 and CrT25 as the target pixels; the processing for the odd field signal, where the target color pixel lies in Cb, is the same except for this difference in arrangement.
• The luminance signal Y in the region is transferred to the representative luminance calculation unit 113 and the luminance noise reduction unit 703, and the color signals Cb and Cr are transferred to the representative hue calculation unit 112 and the color noise reduction unit 702.
• Under the control of the control unit 121, the representative hue calculation unit 112 obtains the averages AV_Cb and AV_Cr of the color signals. Further, from the averages AV_Cb and AV_Cr of the color signals, the representative hue value of the region is obtained by equation (18).
• As in the first embodiment, the representative luminance calculation unit 113 obtains the average AV_Y of the luminance signal Y and obtains the representative luminance value L.
• The representative luminance value is transferred to the color noise estimation unit 701 and the luminance noise estimation unit 116.
• Based on the control of the control unit 121, the luminance noise estimation unit 116 estimates the luminance noise amount LN based on the representative luminance value L from the representative luminance calculation unit 113 and transfers it to the luminance noise reduction unit 703. Based on the control of the control unit 121, the difference calculation unit 700 reads the target pixel in the region from the separation/extraction unit 111, and the corresponding target pixel of the region subjected to noise reduction processing two fields before from the buffer 118.
• The difference calculation unit 700 then calculates the difference signals between them; the difference signal ΔY of the luminance signal is transferred to the luminance noise reduction unit 703, and the difference signal ΔCb or ΔCr of the color signal is transferred to the color noise reduction unit 702.
• Based on the control of the control unit 121, the color noise reduction unit 702 performs color noise reduction processing on the color signal of the target pixel, based on the color noise amount from the color noise estimation unit 701 and the difference signal of the color signal from the difference calculation unit 700.
• The color signal after color noise reduction processing is stored in the buffer 118.
• Based on the control of the control unit 121, the luminance noise reduction unit 703 performs luminance noise reduction processing on the luminance signal of the target pixel, based on the luminance noise amount from the luminance noise estimation unit 116 and the difference signal of the luminance signal from the difference calculation unit 700.
• The luminance signal after luminance noise reduction is stored in the buffer 118.
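The cooperation between the difference calculation unit 700 and the reduction units is described only at block level; one plausible sketch, in which the temporal difference to the already noise-reduced pixel from two fields earlier is compared against the estimated noise amount, is shown below. The combination of differencing and coring here is an assumption for illustration, not the patent's exact processing:

```python
def temporal_coring(current, previous_denoised, noise_amount):
    """When the difference between the current pixel and its noise-reduced
    counterpart two fields earlier stays within the estimated noise
    amount, treat the change as noise and keep the previous value;
    otherwise follow the current signal offset by the noise amount."""
    diff = current - previous_denoised
    if abs(diff) <= noise_amount:
        return previous_denoised
    return current - noise_amount if diff > 0 else current + noise_amount
```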
• The buffer 118 is assumed to be able to hold two noise-reduced field signals, that is, one noise-reduced frame signal, and the signals are overwritten in order as the processing proceeds.
• The processing in the separation/extraction unit 111, the representative hue calculation unit 112, the representative luminance calculation unit 113, the color noise estimation unit 701, the color noise reduction unit 702, the luminance noise estimation unit 116, the luminance noise reduction unit 703, and the difference calculation unit 700 is performed synchronously in units of regions under the control of the control unit 121.
• When the processing is completed, the buffer 118 holds, for the processed two-field signal, the luminance signal Y' after luminance noise reduction processing and the color signals Cb' and Cr' after color noise reduction processing.
• Based on the control of the control unit 121, the signal processing unit 119 performs known synchronization processing, emphasis processing, tone processing, compression processing, and the like on the noise-reduced Y', Cb', and Cr' signals, and transfers the result to the output unit 120.
• The output unit 120 stores the image signal on a medium such as a magnetic disk or memory card.
• FIG. 18 shows an example of the configuration of the color noise estimation unit 701. In this configuration, the hue determination unit 200 is omitted from the configuration of the color noise estimation unit 114 shown in FIG. 4, and a correction coefficient ROM 800, a correction coefficient selection unit 801, and a correction coefficient multiplication unit 802 are added.
• The basic configuration is equivalent to that of the color noise estimation unit 114 shown in FIG. 4, and the same configurations are assigned the same names and numbers. Only the differences will be explained below.
• The representative luminance calculation unit 113, the parameter ROM 201, the gain calculation unit 202, and the standard value assignment unit 203 are connected to the parameter selection unit 204.
• The representative hue calculation unit 112 and the correction coefficient ROM 800 are connected to the correction coefficient selection unit 801.
• The correction coefficient selection unit 801 and the noise correction unit 206 are connected to the correction coefficient multiplication unit 802.
• The correction coefficient multiplication unit 802 is connected to the color noise reduction unit 702.
• The control unit 121 is bi-directionally connected to the correction coefficient selection unit 801 and the correction coefficient multiplication unit 802.
  • the parameter selection unit 204 receives representative brightness from the substitution degree calculation unit 113. 2872
  • the gain calculation unit 202 obtains the amplification amount in the amplifier 104 based on the ISO sensitivity and the 'f ⁇ related to the exposure condition from the control unit 121, and transmits it to the meter scale unit 204.
  • the control unit 121 obtains the value of the CCD 102 from the sensor 103 and sends it to the parameter distance unit 204.
  • the parameter measurement unit 204 uses the representative luminance #L from the male calculation unit 113 to the signal level 1, the gain ff from the gain calculation unit 202 to the gain g, and the overhead information from the control unit 121? Set t. Next, read the reference color noise model and the positive coefficient from the parameter R0M201.
  • Fig. 19A plots the color difference signal Cr on the horizontal axis and the color difference signal Cb on the vertical axis, and shows the six hue areas of red (R), magenta (Ma), blue (B), cyan (Cy), green (G), and yellow (Ye) on the CrCb plane.
  • A configuration using twelve hue areas that adds the intermediate hue areas of the above six, a configuration that adds memory colors such as skin color, sky blue, and plant green as hue areas, and any other configuration are also possible.
  • Fig. 19B shows the inflection points of the broken-line (polyline) reference color noise model.
  • Each inflection point is represented by coordinate data (L_n, CN_n) consisting of a signal level L and a color noise amount CN.
  • Here, n is the index of the inflection point.
  • Correction coefficients k_sgt are also provided for deriving, within the hue area to which the reference color noise model belongs, other color noise models for different color signals s, gains g, and temperatures t from the reference color noise model.
  • Each correction coefficient k_sgt is calculated by the least squares method between the corresponding color noise model and the reference color noise model. Deriving another color noise model from the reference color noise model is performed by multiplying by the above correction coefficient k_sgt. Furthermore, as shown in Fig. 19B, correction coefficients k_e are also provided for converting to the color noise models of hue areas other than the hue area to which the reference color noise model belongs.
  • The correction coefficients k_e are calculated by the least squares method in the same manner as the correction coefficients k_sgt, and conversion is performed by multiplication.
  • The parameter ROM 201 records the coordinate data (L_n, CN_n) of the reference color noise model and the correction coefficients k_sgt described above.
  • The correction coefficient ROM 800 records the correction coefficients k_e described above.
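  • The least-squares derivation of such a multiplicative correction coefficient between two noise models can be sketched as follows. This is only an illustrative sketch, not the patented implementation, and the sampled model values are hypothetical:

```python
def fit_correction_coefficient(reference, target):
    """Least-squares scalar k minimizing sum((k * r - t)^2) over sample points.

    Closed form: k = sum(r * t) / sum(r * r).
    """
    num = sum(r * t for r, t in zip(reference, target))
    den = sum(r * r for r in reference)
    return num / den

# Hypothetical noise amounts sampled at the same signal levels:
ref_model = [1.0, 2.0, 4.0, 6.0]     # reference color noise model
other_model = [2.0, 4.0, 8.0, 12.0]  # model at another gain/temperature
k_sgt = fit_correction_coefficient(ref_model, other_model)
print(k_sgt)  # -> 2.0 (the second model is exactly twice the reference)
```

Because a single scalar per (s, g, t) combination is stored instead of a full polyline per combination, the ROM size stays small at the cost of assuming the models differ only by scale.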
  • The parameter selection unit 204 searches for the section of the reference color noise model to which the signal level l belongs, and reads the coordinate data of that section from the parameter ROM 201.
  • It also reads the correction coefficient k_sgt corresponding to the color signal s, the gain g, and the temperature t.
  • The coordinate data of the section of the reference color noise model is transferred to the noise interpolation unit 205, and the correction coefficient k_sgt is transferred to the noise correction unit 206.
  • Based on the control of the control unit 121, the noise interpolation unit 205 calculates the reference color noise amount CN in the reference color noise model from the signal level l from the parameter selection unit 204 and the coordinate data (L_n, CN_n) and (L_{n+1}, CN_{n+1}) of the section, based on equation (20), and transfers it to the noise correction unit 206.
  • CN = (CN_{n+1} − CN_n) / (L_{n+1} − L_n) · (l − L_n) + CN_n   (20)
  • Based on the control of the control unit 121, the noise correction unit 206 calculates the color noise amount CN_s shown in equation (21) from the correction coefficient k_sgt from the parameter selection unit 204 and the reference color noise amount CN from the noise interpolation unit 205.
  • CN_s = k_sgt · CN   (21)
  • This color noise amount CN_s is the color noise amount in the hue area to which the reference color noise model belongs.
  • Under the control of the control unit 121, the correction coefficient selection unit 801 reads the representative hue value of the region from the representative hue calculation unit 112, and reads the correction coefficient k_e corresponding to the representative hue value from the correction coefficient ROM 800.
  • The correction coefficient k_e is transferred to the correction coefficient multiplication unit 802.
  • Under the control of the control unit 121, the correction coefficient multiplication unit 802 calculates the corrected color noise amount CN' by multiplying the color noise amount CN_s from the noise correction unit 206 by the correction coefficient k_e from the correction coefficient selection unit 801, as shown in equation (22).
  • CN' = k_e · CN_s   (22)
  • The calculated color noise amount CN' is transferred to the color noise reduction unit 702.
  • In the present embodiment, one reference color noise model is used together with correction coefficients for converting between hue areas; however, a configuration in which a reference color noise model is set for each of a plurality of hue areas and a model is selected using the representative hue value is also possible. Furthermore, as with the color noise estimation unit 114 shown in Fig. 6 in the first embodiment, a configuration using a look-up table is also possible.
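  • Equations (20) to (22) amount to a piecewise-linear table lookup followed by two scalar corrections. A minimal sketch, in which the model coordinates and coefficient values are hypothetical:

```python
import bisect

def estimate_color_noise(l, inflection, k_sgt, k_e):
    """Color noise amount from a polyline reference model (cf. Fig. 19B).

    inflection: [(L_n, CN_n), ...] sorted by L_n
    k_sgt: correction for color signal s, gain g, temperature t (eq. 21)
    k_e:   correction for transferring to the representative hue area (eq. 22)
    """
    levels = [p[0] for p in inflection]
    # find the section [L_n, L_n+1] to which the signal level l belongs
    n = max(0, min(bisect.bisect_right(levels, l) - 1, len(inflection) - 2))
    (l0, cn0), (l1, cn1) = inflection[n], inflection[n + 1]
    cn = (cn1 - cn0) / (l1 - l0) * (l - l0) + cn0  # eq. (20): interpolation
    cn_s = k_sgt * cn                              # eq. (21): s, g, t correction
    return k_e * cn_s                              # eq. (22): hue-area correction

model = [(0, 1.0), (64, 2.0), (128, 2.5), (255, 4.0)]  # hypothetical model
print(estimate_color_noise(96, model, k_sgt=1.5, k_e=0.8))
```

Storing only the inflection points keeps the ROM small, while the two multiplications adapt one stored curve to the shooting conditions and the hue area of the region.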
  • Fig. 20 shows an example of the configuration of the color noise reduction unit 702, which is composed of an upper-limit setting unit 900, a buffer 901, a pixel extraction unit 902, and a subtraction unit 903.
  • The separation unit 111 is connected to the pixel extraction unit 902, and the pixel extraction unit 902 is connected to the subtraction unit 903.
  • The difference calculation unit 700 and the color noise estimation unit 701 are connected to the upper-limit setting unit 900.
  • The upper-limit setting unit 900 is connected to the subtraction unit 903 via the buffer 901.
  • The subtraction unit 903 is connected to the buffer 118.
  • The control unit 121 is bi-directionally connected to the upper-limit setting unit 900, the pixel extraction unit 902, and the subtraction unit 903.
  • The following description is given for the even field signal and the target pixels Y_24 and Cr_T shown in Fig. 17A; however, the same holds for the target pixel Cb_T and for the odd field signal, differing only in the configuration of the region.
  • Based on the control of the control unit 121, the upper-limit setting unit 900 reads the difference signal ΔCr shown in equation (19) from the difference calculation unit 700 and the color noise amount CN_Cr from the color noise estimation unit 701, compares the two, and obtains the second color noise amount CN2_Cr shown in equation (23).
  • CN2_Cr = CN_Cr (ΔCr ≥ CN_Cr)
  • CN2_Cr = ΔCr (CN_Cr > ΔCr > −CN_Cr)
  • CN2_Cr = −CN_Cr (−CN_Cr ≥ ΔCr)   (23)
  • Equation (23) means that when the difference signal exceeds the color noise amount (or falls below its negative), a restriction is imposed with the color noise amount as the upper (lower) limit. As a result, the motion component is removed from the difference signal, and only the color noise component is obtained. The above second color noise amount CN2_Cr is transferred to the buffer 901.
  • Based on the control of the control unit 121, the pixel extraction unit 902 reads the target pixel Cr_T from the separation unit 111 and transfers it to the subtraction unit 903.
  • Based on the control of the control unit 121, the subtraction unit 903 reads the target pixel Cr_T from the pixel extraction unit 902 and the second color noise amount CN2_Cr from the buffer 901, and performs color noise reduction by subtraction between the two as shown in equation (24).
  • Cr'_T = Cr_T − CN2_Cr   (24)
  • The target pixel Cr'_T subjected to color noise reduction is transferred to the buffer 118.
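  • The upper-limit setting of equation (23) followed by the subtraction of equation (24) can be sketched as follows (an illustrative sketch; the pixel and noise values are hypothetical):

```python
def second_color_noise(delta_cr, cn_cr):
    """Eq. (23): clamp the difference signal to +/- the color noise amount.

    A large |delta| indicates motion between fields; clamping removes the
    motion component and keeps only the noise component.
    """
    return max(-cn_cr, min(delta_cr, cn_cr))

def reduce_color_noise(cr_t, delta_cr, cn_cr):
    """Eq. (24): subtract the second color noise amount from the target pixel."""
    return cr_t - second_color_noise(delta_cr, cn_cr)

# Static area: the whole difference (1.5) is treated as noise and removed.
print(reduce_color_noise(120.0, 1.5, 3.0))   # -> 118.5
# Moving area: the difference (40.0) is clamped, so at most 3.0 is removed.
print(reduce_color_noise(120.0, 40.0, 3.0))  # -> 117.0
```

The clamp is what makes the temporal subtraction safe: in a moving area the correction can never exceed the estimated noise amplitude, so motion is not smeared.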
  • In the present embodiment, the second color noise amount is determined by performing upper-limit setting on the difference signal, and color noise reduction is performed by subtraction; however, the present invention does not have to be limited to such a configuration.
  • For example, the second color noise amount may be determined by replacing the difference signal outside the noise range with a zero value, and color noise reduction may be performed by coring.
  • Fig. 21 shows an example of another configuration of the color noise reduction unit 702.
  • This color noise reduction unit 702 comprises a replacement unit 904, an average color calculation unit 905, a coring unit 906, and a buffer 907.
  • The separation unit 111 is connected to the average color calculation unit 905 and the coring unit 906.
  • The average color calculation unit 905 is connected to the coring unit 906.
  • The difference calculation unit 700 and the color noise estimation unit 701 are connected to the replacement unit 904.
  • The replacement unit 904 is connected to the coring unit 906 via the buffer 907.
  • The coring unit 906 is connected to the buffer 118.
  • The control unit 121 is bi-directionally connected to the replacement unit 904, the average color calculation unit 905, and the coring unit 906.
  • Based on the control of the control unit 121, the average color calculation unit 905 reads the color signals Cr of the region from the separation unit 111 and calculates their average AV_Cr.
  • The average AV_Cr is transferred to the coring unit 906.
  • Based on the control of the control unit 121, the replacement unit 904 reads the difference signal ΔCr shown in equation (19) from the difference calculation unit 700 and the color noise amount CN_Cr from the color noise estimation unit 701, and compares the two to determine whether the absolute value of the difference signal ΔCr is included in the color noise amount CN_Cr: ΔCr with CN_Cr ≥ ΔCr ≥ −CN_Cr is judged to be within the noise range, and ΔCr with ΔCr > CN_Cr or −CN_Cr > ΔCr is judged to be outside the noise range.
  • The difference signal ΔCr outside the noise range is replaced with a zero value, and the result is used as the second color noise amount CN2_Cr.
  • Since the coring unit 906 in the latter stage performs coring processing between the target pixel Cr_T and the second color noise amount, this means that nothing is done in a moving area.
  • In general, in a moving area the discrimination ability in the high-frequency region is visually reduced, so noise components are inconspicuous, and even the above configuration can cope.
  • The above configuration is simpler than the upper-limit setting processing shown in Fig. 20, and the cost of the system can be reduced.
  • The second color noise amount CN2_Cr is transferred to the buffer 907.
  • Based on the control of the control unit 121, the coring unit 906 reads the target pixel Cr_T from the separation unit 111, the average AV_Cr of the color signals from the average color calculation unit 905, and the second color noise amount CN2_Cr from the buffer 907, and performs color noise reduction by coring.
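  • The zero-replacement and coring variant of Fig. 21 can be sketched as follows. Note this is an illustrative sketch: the coring form is written by analogy with equation (30), applied with the magnitude of the second noise amount, and the sample values are hypothetical:

```python
def second_noise_by_replacement(delta, noise):
    """Replace a difference signal outside the noise range with zero."""
    return delta if -noise <= delta <= noise else 0.0

def core(pixel, center, noise2):
    """Coring of the target pixel toward the average value by |noise2|."""
    m = abs(noise2)
    if pixel >= center + m:
        return pixel - m
    if pixel <= center - m:
        return pixel + m
    return center

av_cr = 100.0  # average color AV_Cr of the region (hypothetical)
# Static area: the difference is within the noise range, so coring applies.
n2 = second_noise_by_replacement(2.0, 3.0)
print(core(104.0, av_cr, n2))  # -> 102.0
# Moving area: the difference is replaced by zero -> the pixel passes unchanged.
print(core(104.0, av_cr, second_noise_by_replacement(40.0, 3.0)))  # -> 104.0
```

The zero replacement is cheaper than the clamp of Fig. 20 because a moving pixel skips correction entirely, which matches the text's remark that noise is less visible in moving areas anyway.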
  • Fig. 22 shows an example of the configuration of the luminance noise reduction unit 703, which is composed of an upper-limit setting unit 1000, a buffer 1001, a pixel extraction unit 1002, and a subtraction unit 1003.
  • The separation unit 111 is connected to the pixel extraction unit 1002, and the pixel extraction unit 1002 is connected to the subtraction unit 1003.
  • The upper-limit setting unit 1000 is connected to the subtraction unit 1003 via the buffer 1001.
  • The subtraction unit 1003 is connected to the buffer 118.
  • The control unit 121 is bi-directionally connected to the upper-limit setting unit 1000, the pixel extraction unit 1002, and the subtraction unit 1003.
  • Based on the control of the control unit 121, the upper-limit setting unit 1000 reads the difference signal ΔY_24 shown in equation (19) from the difference calculation unit 700 and the luminance noise amount LN from the luminance noise estimation unit 116, and compares the two.
  • The upper-limit setting determines whether the absolute value of the difference signal ΔY_24 is included in the luminance noise amount LN: ΔY_24 with ΔY_24 > LN or −LN > ΔY_24 is judged to be outside the noise range, and ΔY_24 with LN ≥ ΔY_24 ≥ −LN is judged to be within the noise range. The second luminance noise amount LN2 is then obtained as shown in equation (27).
  • LN2 = LN (ΔY_24 ≥ LN)
  • LN2 = ΔY_24 (LN > ΔY_24 > −LN)
  • LN2 = −LN (−LN ≥ ΔY_24)   (27)
  • Equation (27) means that when the difference signal exceeds the luminance noise amount (or falls below its negative), a restriction is imposed with the luminance noise amount as the upper (lower) limit. By this, the motion component is removed from the difference signal, and only the luminance noise component is obtained.
  • The above second luminance noise amount LN2 is transferred to the buffer 1001.
  • Based on the control of the control unit 121, the pixel extraction unit 1002 reads the target pixel Y_24 from the separation unit 111 and transfers it to the subtraction unit 1003. Based on the control of the control unit 121, the subtraction unit 1003 reads the target pixel Y_24 from the pixel extraction unit 1002 and the second luminance noise amount LN2 from the buffer 1001, and performs luminance noise reduction by subtraction between the two as shown in equation (28).
  • Y'_24 = Y_24 − LN2   (28)
  • The target pixel Y'_24 subjected to luminance noise reduction is transferred to the buffer 118.
  • In the present embodiment, the second luminance noise amount is determined by performing upper-limit setting on the difference signal, and luminance noise reduction is performed by subtraction; however, the present invention does not have to be limited to such a configuration. For example, the second luminance noise amount may be determined by replacing the difference signal outside the noise range with a zero value, and luminance noise reduction may be performed by coring.
  • Fig. 23 is a diagram showing an example of another configuration of the luminance noise reduction unit 703.
  • This luminance noise reduction unit 703 comprises a replacement unit 1004, a coring unit 1005, and a buffer 1006.
  • An average luminance calculation unit is omitted because the representative luminance value from the representative luminance calculation unit 113 shown in Fig. 16 is used instead.
  • The separation unit 111 and the representative luminance calculation unit 113 are connected to the coring unit 1005.
  • The difference calculation unit 700 and the luminance noise estimation unit 116 are connected to the replacement unit 1004.
  • The replacement unit 1004 is connected to the coring unit 1005 via the buffer 1006.
  • The coring unit 1005 is connected to the buffer 118.
  • The control unit 121 is bi-directionally connected to the replacement unit 1004 and the coring unit 1005.
  • Based on the control of the control unit 121, the replacement unit 1004 reads the difference signal ΔY_24 shown in equation (19) from the difference calculation unit 700 and the luminance noise amount LN from the luminance noise estimation unit 116, replaces the difference signal outside the noise range with a zero value, and obtains the second luminance noise amount LN2 shown in equation (29).
  • LN2 = ΔY_24 (LN > ΔY_24 > −LN)
  • LN2 = 0 (ΔY_24 ≥ LN, −LN ≥ ΔY_24)   (29)
  • The coring unit 1005 in the latter stage performs coring processing between the target pixel Y_24 and the second luminance noise amount LN2.
  • This means that nothing is done in a moving area. In general, in a moving area the discrimination ability in the high-frequency region is visually reduced, so noise components are less noticeable, and even the above configuration can cope. The above configuration is simpler than the upper-limit setting processing shown in Fig. 22, and the cost of the system can be reduced.
  • The obtained second luminance noise amount LN2 is transferred to the buffer 1006.
  • Based on the control of the control unit 121, the coring unit 1005 reads the target pixel Y_24 from the separation unit 111, the representative luminance value L from the representative luminance calculation unit 113, and the second luminance noise amount LN2 from the buffer 1006, and performs luminance noise reduction by coring as shown in equation (30).
  • Y'_24 = Y_24 − LN2 (Y_24 ≥ L + LN2)
  • Y'_24 = L (L + LN2 > Y_24 > L − LN2)
  • Y'_24 = Y_24 + LN2 (L − LN2 ≥ Y_24)   (30)
  • The target pixel Y'_24 subjected to luminance noise reduction is transferred to the buffer 118.
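  • The luminance-side combination of zero replacement (equation (29)) and coring about the representative luminance value (equation (30)) can be sketched for a single pixel as follows. This is illustrative only; coring is applied here with the magnitude of LN2, and the sample values are hypothetical:

```python
def reduce_luminance_noise(y24, rep_l, delta_y, ln):
    """Eqs. (29)-(30) for one target pixel Y_24.

    delta_y: inter-field difference of the pixel (eq. 19)
    ln:      luminance noise amount from the luminance noise model
    rep_l:   representative luminance value L of the region
    """
    # eq. (29): zero replacement -- a difference outside the range is motion
    ln2 = delta_y if -ln <= delta_y <= ln else 0.0
    # eq. (30): coring about the representative luminance value
    m = abs(ln2)
    if y24 >= rep_l + m:
        return y24 - m
    if y24 <= rep_l - m:
        return y24 + m
    return rep_l

print(reduce_luminance_noise(104.0, 100.0, 2.0, 3.0))   # static area  -> 102.0
print(reduce_luminance_noise(104.0, 100.0, 40.0, 3.0))  # moving area -> 104.0
```

As the text notes, coring pulls only the noise-sized deviation toward the representative luminance, so edges and other structures larger than the noise amplitude keep their continuity.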
  • As described above, in the present embodiment, a representative luminance value and a representative hue value are determined in predetermined region units for video signals captured in time series, and the color noise amount is adaptively estimated based on the representative luminance value and the representative hue value.
  • The motion component is removed, on the basis of the estimated color noise amount, from the difference signal obtained with the color signal of the corresponding past region subjected to noise reduction, and a second color noise amount is calculated.
  • By using the second color noise amount, highly accurate color noise reduction becomes possible, and high-quality video signals can be obtained.
  • The above estimation of the color noise amount dynamically adapts to conditions that differ from shooting to shooting and applies an appropriate correction for each hue area, so a highly accurate and stable estimation of the color noise amount becomes possible. If interpolation computation on a model is used for calculating the color noise amount, implementation is easy and a low-cost system becomes possible. On the other hand, if a look-up table is used for calculating the color noise amount, high-speed estimation of the color noise amount becomes possible.
  • Similarly, the luminance noise amount is adaptively estimated based on the representative luminance value, and the motion component is removed, on the basis of the estimated luminance noise amount, from the difference signal obtained with the luminance signal of the past region subjected to noise reduction, to calculate a second luminance noise amount.
  • By using the second luminance noise amount, highly accurate luminance noise reduction becomes possible, and high-quality video signals can be obtained.
  • The estimation of the luminance noise amount is also performed dynamically under conditions that differ from shooting to shooting and uses a luminance noise model, so a highly accurate and stable estimation of the luminance noise amount is possible.
  • The upper-limit setting used for removing the motion component is relatively easy to implement and enables speeding up and cost reduction of the entire system.
  • The upper-limit setting processing also allows easy control and improves operability.
  • Replacement with a zero value is likewise relatively easy to implement and makes the entire system faster and lower in cost.
  • Subtraction processing for luminance noise reduction also makes it possible to increase the speed and reduce the cost of the entire system.
  • With coring processing for luminance noise reduction, only the luminance noise component can be targeted, and continuity with components other than luminance noise, such as edges, can be ensured, so a high-quality video signal is obtained.
  • Since an imaging element with a color-difference line-sequential complementary color filter arranged on its front surface is used, affinity with current imaging systems is high, and combination with a variety of systems becomes possible.
  • In the present embodiment, the color-difference line-sequential complementary color filter is used for the imaging element, but there is no need to be limited to such a configuration.
  • Further, the difference calculation unit 700 is configured to obtain the difference signal from the signal one frame before, but the configuration is not limited to this.
  • As in the first embodiment, it is also possible to process a plurality of time-sequentially continuous video signals captured by a separate imaging unit in unprocessed raw data form, reading from the header the accompanying information such as the color filter of the CCD 102 and the exposure conditions at the time of shooting.
  • In the above, processing by hardware is premised, but it is not necessary to be limited to such a configuration.
  • For example, it is also possible to output a plurality of time-series video signals from the CCD 102 as unprocessed raw data, output accompanying information such as the color filter of the CCD 102 and the exposure conditions at the time of shooting from the control unit 121 as header information, and process them separately by software.
  • Fig. 24A shows the flow of software processing in the case where the above signal processing is executed by a computer. The processing of each step is described below. The same step numbers are assigned to steps that perform the same processing as the flow of signal processing in the first embodiment shown in Fig. 15A.
  • In step S1, a plurality of video signals are input, together with header information such as the exposure conditions at the time of shooting and the color filter.
  • In step S60, the even field signal and the odd field signal are sequentially extracted from one video signal, that is, one frame signal, and the process proceeds to step S2.
  • In step S2, as shown in equation (17), the video signal is separated into a luminance signal and color signals, regions of a predetermined size, for example 5×5 pixels, are sequentially extracted, and the process proceeds to step S3.
  • In step S3, the hue areas shown in equation (18) are determined and classified into the six hue areas shown in Fig. 5.
  • In step S4, the representative luminance value is determined by calculating the average of the luminance signals of the region, and the process proceeds to step S61. In step S61, the past field signal subjected to noise reduction, in the present embodiment the field signal two fields past, is input, and the process proceeds to step S62.
  • In step S62, the difference signals of the luminance signal and the color signals expressed by equation (19) are calculated between the current field signal and the past field signal subjected to noise reduction, and the process proceeds to step S63.
  • In step S63, the color noise amount is estimated. This processing is performed in accordance with the flow of Fig. 24B.
  • In step S64, color noise reduction is performed. This processing is performed in accordance with the flow of Fig. 24C.
  • In step S7, the luminance noise amount is estimated in the same manner as the luminance noise estimation of the first embodiment shown in Fig. 15D, and the process proceeds to step S65.
  • In step S65, luminance noise reduction is performed. This processing is performed in accordance with the flow of Fig. 24D.
  • In step S9, the color signals and the luminance signal subjected to noise reduction are output, and the process proceeds to step S10.
  • In step S10, it is determined whether or not the processing of all the regions in one field signal is completed; if it is determined not to be completed, the process proceeds to step S2, and if it is determined to be completed, the process proceeds to step S11.
  • In step S11, known signal processing such as emphasis processing and gradation processing is performed, and the process proceeds to step S66.
  • In step S66, one video signal, that is, a frame signal synthesized from the noise-reduced even field signal and odd field signal, is output, and the process proceeds to step S67.
  • In step S67, it is determined whether or not the processing of all the field signals is completed; if it is determined not to be completed, the process proceeds to step S60, and if it is determined to be completed, the processing ends.
  • Fig. 24B is a flow relating to the color noise amount estimation performed in step S63 of Fig. 24A.
  • The same step numbers are assigned to steps performing the same processing as the flow of color noise amount estimation in the first embodiment shown in Fig. 15B. The processing of each step is described below.
  • In step S20, information such as the gain is set from the read header information. If a required parameter does not exist in the header information, a predetermined standard value is assigned.
  • In step S21, the reference color noise model and the correction coefficients are input, and the process proceeds to step S23.
  • In step S23, the coordinate data of the section of the reference color noise model to which the representative luminance value belongs and the corresponding correction coefficient are selected, and the process proceeds to step S24.
  • In step S24, the reference color noise amount is determined by the interpolation processing expressed by equation (20), and the process proceeds to step S25.
  • In step S25, the color noise amount is determined by the correction processing shown in equation (21), and the process proceeds to step S70.
  • In step S70, the correction coefficients for converting between hue areas are input, and the process proceeds to step S71.
  • In step S71, the correction coefficient for converting the hue area is selected based on the representative hue value, and the process proceeds to step S72.
  • In step S72, the color noise amount is corrected by the correction processing shown in equation (22) using the selected correction coefficient, and the process proceeds to step S26.
  • step S26 the corrected color noise amount is output and the process ends.
  • Fig. 24C is a flow relating to the color noise reduction performed in step S64 of Fig. 24A. The same step numbers are assigned to steps performing the same processing as the flow of color noise reduction in the first embodiment shown in Fig. 15C. The processing of each step is described below.
  • In step S30, the color noise amount estimated in step S63 of Fig. 24A is input, and the process proceeds to step S80.
  • In step S80, the difference signal shown in equation (19) is input, and the process proceeds to step S81.
  • In step S81, the upper-limit setting shown in equation (23) is performed on the difference signal based on the color noise amount to obtain the second color noise amount.
  • In step S82, color noise reduction is performed by the subtraction processing of the second color noise amount shown in equation (24) for the target pixel of the region, and the process proceeds to step S33.
  • In step S33, the color signal subjected to color noise reduction is output, and the processing ends.
  • Fig. 24D is a flow relating to the luminance noise reduction performed in step S65 of Fig. 24A. The same step numbers are assigned to steps performing the same processing as the flow of luminance noise reduction in the first embodiment shown in Fig. 15E. The processing of each step is described below.
  • In step S50, the luminance noise amount estimated in step S7 of Fig. 24A is input, and the process proceeds to step S90.
  • In step S90, the difference signal shown in equation (19) is input, and the process proceeds to step S91.
  • In step S91, the upper-limit setting shown in equation (27) is performed on the difference signal based on the luminance noise amount to obtain the second luminance noise amount.
  • In step S92, luminance noise reduction is performed by the subtraction processing of the second luminance noise amount shown in equation (28) for the target pixel of the region, and the process proceeds to step S53.
  • In step S53, the luminance signal subjected to luminance noise reduction is output, and the processing ends.
  • In this way, the signal processing may be performed by software, and the same functions and effects as in the case of processing by hardware can be obtained.
  • The embodiments described above are merely examples of application of the present invention, and the technical scope of the present invention is not limited to the specific configurations of these embodiments.
  • A part of the configurations described in the first embodiment and the second embodiment may be combined to form other configurations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Processing Of Color Television Signals (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Color Television Image Signal Generators (AREA)
PCT/JP2008/062872 2007-07-23 2008-07-10 映像処理装置および映像処理プログラム Ceased WO2009014051A1 (ja)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/690,332 US8223226B2 (en) 2007-07-23 2010-01-20 Image processing apparatus and storage medium storing image processing program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2007190990A JP5165300B2 (ja) 2007-07-23 2007-07-23 映像処理装置および映像処理プログラム
JP2007-190990 2007-07-23

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/690,332 Continuation US8223226B2 (en) 2007-07-23 2010-01-20 Image processing apparatus and storage medium storing image processing program

Publications (1)

Publication Number Publication Date
WO2009014051A1 true WO2009014051A1 (ja) 2009-01-29

Family

ID=40281307

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2008/062872 Ceased WO2009014051A1 (ja) 2007-07-23 2008-07-10 映像処理装置および映像処理プログラム

Country Status (3)

Country Link
US (1) US8223226B2 (en)
JP (1) JP5165300B2 (en)
WO (1) WO2009014051A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5067852B2 (ja) * 2007-08-10 2012-11-07 キヤノン株式会社 画像処理方法及び画像処理装置
JP5529424B2 (ja) * 2009-03-11 2014-06-25 ソニー株式会社 画像処理装置、画像処理方法及びコンピュータプログラム
JP5868046B2 (ja) * 2010-07-13 2016-02-24 キヤノン株式会社 輝度信号作成装置、撮像装置、輝度信号作成方法、及びプログラム、並びに記録媒体
JP2013162248A (ja) * 2012-02-02 2013-08-19 Canon Inc 撮像装置及びその制御方法
JP6083897B2 (ja) * 2013-02-28 2017-02-22 株式会社 日立産業制御ソリューションズ 撮像装置及び画像信号処理装置
JP2015073452A (ja) * 2013-10-07 2015-04-20 株式会社エルメックス コロニーのカウント方法およびコロニー計数装置
JP2015127680A (ja) * 2013-12-27 2015-07-09 スリーエム イノベイティブ プロパティズ カンパニー 計測装置、システムおよびプログラム
JP6342360B2 (ja) * 2014-04-17 2018-06-13 株式会社モルフォ 画像処理装置、画像処理方法、画像処理プログラム及び記録媒体
KR102415312B1 (ko) * 2017-10-30 2022-07-01 삼성디스플레이 주식회사 색 변환 장치, 이를 포함하는 표시 장치, 및 색 변환 방법
JP7311994B2 (ja) * 2019-03-27 2023-07-20 キヤノン株式会社 画像処理装置、撮像装置、画像処理方法、及びプログラム

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005130297A (ja) * 2003-10-24 2005-05-19 Olympus Corp 信号処理システム、信号処理方法、信号処理プログラム
JP2005347821A (ja) * 2004-05-31 2005-12-15 Toshiba Corp ノイズ除去装置及び画像表示装置
JP2006023959A (ja) * 2004-07-07 2006-01-26 Olympus Corp 信号処理システム及び信号処理プログラム

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6320970A (ja) * 1986-07-15 1988-01-28 Matsushita Electric Ind Co Ltd テレビジヨン信号の雑音抑圧装置
JPH0369276A (ja) * 1989-08-08 1991-03-25 Sharp Corp ノイズ低減回路
JP2946585B2 (ja) * 1990-01-09 1999-09-06 ソニー株式会社 ノイズ低減回路
JPH1013734A (ja) 1996-06-18 1998-01-16 Canon Inc 撮像装置
JP2000209507A (ja) 1999-01-12 2000-07-28 Toshiba Corp 固体撮像装置の低ノイズ化回路
JP3689607B2 (ja) 1999-12-15 2005-08-31 キヤノン株式会社 画像処理方法、装置および記憶媒体
US6738510B2 (en) * 2000-02-22 2004-05-18 Olympus Optical Co., Ltd. Image processing apparatus
JP3934506B2 (ja) * 2002-08-06 2007-06-20 オリンパス株式会社 撮像システムおよび画像処理プログラム
JP3762725B2 (ja) * 2002-08-22 2006-04-05 オリンパス株式会社 撮像システムおよび画像処理プログラム
JP3893099B2 (ja) * 2002-10-03 2007-03-14 オリンパス株式会社 撮像システムおよび撮像プログラム
JP3934597B2 (ja) * 2003-12-09 2007-06-20 オリンパス株式会社 撮像システムおよび画像処理プログラム
JP4547223B2 (ja) * 2004-09-28 2010-09-22 オリンパス株式会社 撮像システム、ノイズ低減処理装置及び撮像処理プログラム
EP1976268A1 (en) * 2005-12-28 2008-10-01 Olympus Corporation Imaging system and image processing program


Also Published As

Publication number Publication date
US8223226B2 (en) 2012-07-17
US20100188529A1 (en) 2010-07-29
JP5165300B2 (ja) 2013-03-21
JP2009027619A (ja) 2009-02-05

Similar Documents

Publication Publication Date Title
WO2009014051A1 (ja) 映像処理装置および映像処理プログラム
JP4718952B2 (ja) 画像補正方法および画像補正システム
CN102204258B (zh) 图像输入装置
JP6006543B2 (ja) 画像処理装置および画像処理方法
US20040075744A1 (en) Automatic tone mapping for images
KR20180096816A (ko) 이미지의 화질 개선을 위한 장치 및 방법
CN104869380A (zh) 图像处理设备和图像处理方法
JP4427001B2 (ja) 画像処理装置、画像処理プログラム
WO2009001955A1 (ja) 画像処理装置、画像処理方法及び画像処理プログラム
WO2007007788A1 (ja) 色補正方法および色補正装置
JP5917048B2 (ja) 画像処理装置、画像処理方法およびプログラム
JP2009124580A (ja) 画像処理装置及び画像処理方法
JP4328956B2 (ja) デジタルカメラの制御方法及び装置
JP5372586B2 (ja) 画像処理装置
JP4916341B2 (ja) 画像処理装置及び画像処理プログラム
JP2009284009A (ja) 画像処理装置、撮像装置及び画像処理方法
JP4959237B2 (ja) 撮像システム及び撮像プログラム
JP2012165271A (ja) 画像処理装置、画像処理方法及びプログラム
JP2006148326A (ja) 撮像装置及び撮像装置の制御方法
JP2013165482A (ja) 画像処理装置、画像処理プログラム、および撮像装置
JP5693647B2 (ja) 画像処理方法、画像処理装置、及び撮像装置
JP3899144B2 (ja) 画像処理装置
JP2023090494A (ja) 撮像装置及びその制御方法、プログラム、記憶媒体
JP2008294969A (ja) 映像変換装置、映像変換方法、映像変換プログラム
JP2006114006A (ja) 階調変換装置、プログラム、電子カメラ、およびその方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08778223

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08778223

Country of ref document: EP

Kind code of ref document: A1