US20080143881A1 - Image processor and image processing program - Google Patents

Image processor and image processing program

Info

Publication number
US20080143881A1
US20080143881A1
Authority
US
United States
Prior art keywords
block
feature quantity
processor according
image processor
image
Prior art date
Legal status
Granted
Application number
US12/032,098
Other versions
US20110211126A9 (en
US8184174B2 (en
Inventor
Taketo Tsukioka
Current Assignee
Olympus Corp
Original Assignee
Olympus Corp
Priority date
Filing date
Publication date
Application filed by Olympus Corp filed Critical Olympus Corp
Assigned to OLYMPUS CORPORATION reassignment OLYMPUS CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TSUKIOKA, TAKETO
Publication of US20080143881A1 publication Critical patent/US20080143881A1/en
Publication of US20110211126A9 publication Critical patent/US20110211126A9/en
Application granted granted Critical
Publication of US8184174B2 publication Critical patent/US8184174B2/en
Expired - Fee Related

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/409Edge or detail enhancement; Noise or error suppression
    • H04N1/4092Edge or detail enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/6027Correction or control of colour gradation or colour contrast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843Demosaicing, e.g. interpolating colour pixel values
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/646Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation

Definitions

  • the present invention relates to an image processor and image processing program for applying the optimum edge enhancement to a subject.
  • frequency-band enhancement processing is applied so as to give the final image sharpness.
  • the simplest approach to the band enhancement processing is to rely on only one band enhancement filter having fixed characteristics. Still, many contrivances have been proposed in the art, because a single fixed filter has difficulty producing the best results across various subjects.
  • JP(A)2002-344743 discloses an example of analyzing a half tone structure in an image to control an enhancement filter depending on a screen angle.
  • Japanese Patent No. 2858530 sets forth a method for generating an edge component on the basis of the results of analysis of an edge direction, such that there are far fewer fluctuations along the edge direction, and adding it to the original signal, thereby obtaining an image of better quality even at a decreasing S/N ratio.
  • an object of the invention is to provide an image processor that is capable of applying the optimum band enhancement to a subject in much smaller circuit size, and an image processing program.
  • an image processor adapted to correct the spatial frequency band of an input image, characterized by comprising a plurality of band correction means having mutually distinct band correction characteristics, a feature quantity computation means adapted to figure out a feature quantity in the neighborhood of each pixel of the input image, and a synthesizing means adapted to synthesize the outputs of said plurality of band correction means on the basis of said feature quantity.
  • the invention (1) is further characterized in that said synthesis means is operable to figure out a weight for each of said band correction means on the basis of said feature quantity, and produce the result of weighted addition by applying said weight to the result of band correction by each of said band correction means. According to this arrangement, it is possible to give the optimum weight to the result of band correction depending on the structure of the subject.
  • the invention (1) is further characterized in that said feature quantity computation means is operable to figure out the direction of an edge in said neighborhood as said given feature quantity. According to this arrangement, it is possible to figure out to which direction of horizontal, vertical and oblique directions the direction of the edge is close.
  • the invention (1) is further characterized in that said feature quantity computation means is operable to figure out the probability of said neighborhood belonging to a given image class as said given feature quantity. According to this arrangement, it is possible to judge the feature of the structure in the neighborhood of the pixel of interest in terms of a numerical value.
  • the invention (3) is further characterized in that said feature quantity computation means is operable to further figure out the reliability of the result of computation of the direction of said edge as said given feature quantity. According to this arrangement, it is possible to improve the reliability of the result of computation of the direction of the edge.
  • the invention (4) is further characterized in that said given image class includes any one of an edge portion, a stripe portion, and a texture portion. According to this arrangement, it is possible to provide specific judgment of the feature of the structure in the neighborhood of the pixel of interest.
  • the invention (1) is further characterized in that said feature quantity computation means is operable to figure out said feature quantity on the basis of the characteristics of the imaging system when said input image is taken. According to this arrangement, it is possible to figure out the feature quantity optimum for the subject.
  • the invention (1) is further characterized in that said synthesis means is operable to implement synthesis on the basis of the characteristics of an imaging system when said input image is taken. According to this arrangement, it is possible to implement the tone correction optimum for the subject.
  • the invention (7) is further characterized in that said characteristics of the imaging system are noise characteristics that provide a relation of the noise quantity vs. pixel value. According to this arrangement, it is possible to make noise correction on the basis of ISO sensitivity.
  • the invention (7) or (8) is further characterized in that said characteristics of the imaging system are information about the type and position of a pixel deficiency. According to this arrangement, it is possible to figure out the feature quantity on the basis of information about the type and position of the pixel deficiency.
  • the invention (7) or (8) is further characterized in that said characteristics of the imaging system are a sensitivity difference between pixels at which the same type of color information is obtained.
  • the invention is further characterized in that said characteristics of the imaging system are the spatial frequency characteristics of the optical system. According to this arrangement, it is possible to figure out the feature quantity on the basis of the spatial frequency characteristics of the optical system.
  • the invention (12) is further characterized in that said characteristics of the imaging system are the spatial frequency characteristics of an optical LPF. According to this arrangement, it is possible to figure out the feature quantity on the basis of the characteristics of the optical LPF.
  • the invention (9) is further characterized in that said feature quantity computation means is operable to lower the precision with which said direction of the edge is figured out as said noise quantity grows large. According to this arrangement, it is possible to avoid mistaking for an edge a structure that should not be taken as one.
  • the invention (9) is further characterized in that said feature quantity computation means is operable to lower the reliability of said direction of the edge as said noise quantity grows large. According to this arrangement, it is possible to prevent a failure in band correction processing when the noise level is high.
  • the invention (9) is further characterized in that said synthesis means is operable to determine said weight such that the greater said noise quantity, the more isotropic the band correction characteristics of said weighted addition become. According to this arrangement, it is possible to stave off a failure in band correction processing when the noise level is high.
  • the invention (9) is further characterized in that said synthesis means is operable to determine said weight such that the band correction characteristics of said result of weighted addition become small in a direction orthogonal to a direction along which there are successive pixel deficiencies. According to this arrangement, it is possible to stave off a failure in band correction processing when there is a pixel deficiency.
  • the invention (1) is further characterized in that there are two band correction means, each of which is a two-dimensional linear filter having a coefficient of point symmetry. According to this arrangement wherein the filter coefficient is of high symmetry, it is possible to reduce the number of computations at the band enhancement block and, hence, diminish the circuit size involved.
  • the invention (1) is further characterized in that one of said filters is such that the band correction characteristics in a particular direction have a negative value. According to this arrangement, wherein the coefficient value often takes a value of 0, it is possible to ensure a further reduction in the number of computations at the band enhancement block and, hence, a much diminished circuit size.
  • the invention (1) is further characterized in that said band correction means is operable to apply given tone transform to said input image, and then implement band correction so that said feature quantity computation means can figure out said feature quantity with none of said given tone transform. According to this arrangement, it is possible to figure out said feature quantity with much higher precision.
  • an image processing program provided to correct image data for a spatial frequency band, which lets a computer implement steps of reading image data, implementing a plurality of band corrections having mutually distinct band correction characteristics, figuring out a given feature quantity in the neighborhood of each pixel of said image data, and synthesizing the outputs of said plurality of band corrections on the basis of said feature quantity. According to this arrangement, it is possible to run the optimum band enhancement processing for the read image on software.
  • an image processor and image processing program capable of applying the optimum band enhancement to a subject according to ISO sensitivity and the state of an optical system, with much smaller circuit size.
  • FIG. 1 is illustrative of the architecture of the first embodiment.
  • FIG. 2 is illustrative of the setup of the edge enhancement block in FIG. 1 .
  • FIG. 3 is illustrative of the operation of the direction judgment block in FIG. 1 .
  • FIG. 4 is illustrative of the operation of the direction judgment block in FIG. 1 .
  • FIG. 5 is illustrative of the noise table in FIG. 1 .
  • FIG. 6 is illustrative of the enhancement characteristics of the band enhancement block in FIG. 1 .
  • FIG. 7 is illustrative of exemplary weight characteristics.
  • FIG. 8 is a flowchart representative of the processing steps in the first embodiment.
  • FIG. 9 is a flowchart of how to compute an edge component in FIG. 8 .
  • FIG. 10 is illustrative of the architecture of the second embodiment.
  • FIG. 11 is illustrative of the setup of the edge enhancement block in FIG. 10 .
  • FIG. 12 is illustrative of the aberrations of the optical system.
  • FIG. 13 is illustrative of the coefficient for compensating for MTF deterioration due to aberrations.
  • FIG. 14 is illustrative of the coefficient for compensating for MTF deterioration due to aberrations.
  • FIG. 15 is a flowchart of how to figure out an edge component in the second embodiment.
  • FIG. 16 is illustrative of the architecture of the third embodiment.
  • FIG. 17 is illustrative of the setup of the edge enhancement block in FIG. 16 .
  • FIG. 18 is illustrative of a correction coefficient table corresponding to the type of optical LPF.
  • FIG. 19 is a flowchart of how to figure out an edge component in the third embodiment.
  • FIGS. 1 to 9 are illustrative of the first embodiment of the invention.
  • FIG. 1 is illustrative of the architecture of the first embodiment
  • FIG. 2 is illustrative of the setup of the edge enhancement block in FIG. 1
  • FIGS. 3 and 4 are illustrative of the operation of the direction judgment block in FIG. 1
  • FIG. 5 is illustrative of the noise table in FIG. 1
  • FIG. 6 is illustrative of the enhancement characteristics of the band enhancement block in FIG. 1
  • FIG. 7 is illustrative of one exemplary weight
  • FIG. 8 is a flowchart of the RAW development software in the first embodiment
  • FIG. 9 is a flowchart of how to figure out the edge component in FIG. 8 .
  • the architecture of the first embodiment according to the invention is shown in FIG. 1 .
  • the embodiment here is a digital camera shown generally by 100 that is built up of an optical system 101, a primary color Bayer CCD 102, an A/D converter block 103, an image buffer 104, a noise table 105, an edge enhancement block 106, a color interpolation block 107, a YC transform block 108, a color saturation correction block 109, a YC synthesis block 110, a recording block 111 and a control block 112.
  • the primary color Bayer CCD 102 is connected to the image buffer 104 by way of the A/D converter block 103, and the image buffer 104 is connected to the recording block 111 by way of the color interpolation block 107, YC transform block 108, color saturation correction block 109 and YC synthesis block 110 in order.
  • the image buffer 104 is also connected to the YC synthesis block 110 by way of the edge enhancement block 106 .
  • the YC transform block 108 is also connected directly to the YC synthesis block 110 .
  • the control block 112 is bi-directionally connected to the respective blocks.
  • FIG. 2 is illustrative of the details of the setup of the edge enhancement block 106 in FIG. 1 .
  • the edge enhancement block 106 is built up of a direction judgment block 121 , a tone control block 122 , a band enhancement block A 123 , a band enhancement block B 124 , a weighting coefficient determination block 125 and a weighting block 126 .
  • the image buffer 104 is connected to the direction judgment block 121 , and to the band enhancement blocks A 123 and B 124 as well by way of the tone control block 122 .
  • the band enhancement blocks A 123 and B 124 are each connected to the weighting block 126 .
  • the direction judgment block 121 is connected to the weighting block 126 by way of the weighting coefficient determination block 125 .
  • the operation of the digital camera 100 is explained.
  • when the shutter (not shown) is pressed down, it causes an optical image formed through the optical system 101 to be photoelectrically converted at the primary color Bayer CCD 102 and recorded as an image signal in the image buffer 104 by way of the A/D converter block 103.
  • the edge component for band correction is first computed at the edge enhancement block 106 , and at the color interpolation block 107 , color components missing at each pixel are compensated for by interpolation of the recorded image.
  • the signal interpolated at the color interpolation block 107 is then transformed at the YC transform block 108 into a luminance signal and a color difference signal after tone correction, the luminance signal being sent out to the YC synthesis block 110 and the color difference signal to the color saturation correction block 109. Thereafter, the color saturation of the color difference signal is controlled at the color saturation correction block 109, and sent out to the YC synthesis block 110.
  • the edge component computed at the edge enhancement block 106 is sent out to the YC synthesis block 110 .
  • when the YC synthesis block 110 receives the color difference signal with its color saturation controlled, the luminance signal and the edge component, it first adds the edge component to the luminance signal to create a luminance signal with its band corrected. Known processing is then implemented to combine that luminance signal with the color difference signal for conversion into an RGB signal, and the result is sent out to the recording block 111.
  • at the recording block 111, the entered signal is compressed and recorded in a recording medium. Thus, the operation of the digital camera 100 is over.
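  • To make the synthesis step concrete, the minimal sketch below adds the edge component to the luminance signal and converts the result back to RGB. The BT.601 conversion matrix is an assumption for illustration; the text does not specify which YC transform the YC transform block 108 actually uses.

```python
import numpy as np

# Assumed BT.601 YCbCr -> RGB inverse transform; the actual matrix of
# the YC transform block 108 is not given in the text.
YC_TO_RGB = np.array([[1.0,  0.0,       1.402],
                      [1.0, -0.344136, -0.714136],
                      [1.0,  1.772,     0.0]])

def yc_synthesize(y, cb, cr, edge):
    """Add the edge component to the luminance, then convert to RGB.
    y, cb, cr, edge: 2-D arrays of identical shape."""
    y_corrected = y + edge                    # band-corrected luminance
    ycc = np.stack([y_corrected, cb, cr], axis=-1)
    return ycc @ YC_TO_RGB.T                  # per-pixel matrix product
```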
  • FIG. 3 is illustrative of an exemplary pixel arrangement and filter used for the estimation at the direction judgment block of the edge direction and structure in the neighborhood of each pixel.
  • FIG. 3( a ) is illustrative of a pixel arrangement when the neighborhood center of the pixel of interest is not G
  • FIG. 3( b ) is illustrative of a pixel arrangement when the neighborhood center of the pixel of interest is G.
  • in FIG. 3(a), a filter Dh for measuring the magnitude of a horizontal fluctuation is also shown.
  • in FIG. 3(b), a filter Dv for measuring the magnitude of a vertical fluctuation is also shown.
  • the direction judgment block 121 reads only the G component out of the 5×5 neighborhood of each pixel, as shown in FIG. 3(a) (the position of each blank takes a value of 0). The two filters Dh and Dv are then applied to the G component to examine the edge structure in the neighborhood.
  • the filter Dh is primarily applied to the position of P 1 , P 2 , and P 3 shown in FIG. 3( a )
  • the filter Dv is primarily applied to the position of P 4 , P 2 , and P 5 shown in FIG. 3( a ).
  • three measurements dh1, dh2, dh3 and dv1, dv2, dv3 are thus obtained for each direction. For instance, when the G pixel is in the neighborhood center, a set of specific computation formulae is given by (1).
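  • A minimal sketch of these fluctuation measurements follows. The exact taps of Dh and Dv and the set of formulae (1) are not reproduced in the text, so simple absolute-difference filters and an assumed mapping of rows and columns to the positions P1 to P5 are used purely for illustration.

```python
import numpy as np

def fluctuation(line):
    """Magnitude of fluctuation along one line of the 5x5 G-component
    neighborhood; zeros stand for non-G (blank) positions."""
    g = line[line != 0]                  # keep only the G samples
    return np.abs(np.diff(g)).sum()      # hypothetical difference filter

def direction_measurements(g5x5):
    """Return (dh1..dh3, dv1..dv3) for a 5x5 G neighborhood.  Rows
    0, 2, 4 are assumed to correspond to positions P1, P2, P3 and
    columns 0, 2, 4 to P4, P2, P5."""
    dh = [fluctuation(g5x5[r, :]) for r in (0, 2, 4)]
    dv = [fluctuation(g5x5[:, c]) for c in (0, 2, 4)]
    return dh, dv
```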
  • at the same time, the average value Ag of the G components in the neighborhood of the pixel of interest is found, in order to estimate the average quantity of noise in the neighborhood on the basis of the noise characteristic information loaded in the noise table 105.
  • in the noise table 105, the characteristics of noise occurring at the primary color Bayer CCD 102 are stored for each ISO sensitivity at the taking time, as shown in FIG. 5. Such characteristics, indicative of the relation of noise quantity N vs. pixel value, are obtained by measurement beforehand.
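  • A sketch of such a table lookup is shown below. The sample numbers are hypothetical placeholders; the real table holds values measured beforehand for the primary color Bayer CCD 102.

```python
import numpy as np

# Hypothetical noise table: per ISO sensitivity, sampled pairs of
# (pixel value, noise quantity), standing in for the curves of FIG. 5.
NOISE_TABLE = {
    100:  ([0, 64, 128, 192, 255], [1.0, 1.5, 2.0, 2.6, 3.1]),
    400:  ([0, 64, 128, 192, 255], [2.0, 3.0, 4.1, 5.3, 6.4]),
    1600: ([0, 64, 128, 192, 255], [4.2, 6.1, 8.3, 10.6, 12.9]),
}

def noise_quantity(ag, iso):
    """Estimate the noise quantity N for the neighborhood average Ag
    by linear interpolation of the pre-measured characteristics."""
    values, noise = NOISE_TABLE[iso]
    return float(np.interp(ag, values, noise))
```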
  • the following set of formulae (2) is used to compute, from the measurements dh1 to dv3 and N, an index r indicative of the direction of the structure in the neighborhood of the pixel of interest, an index q indicative of the type of the structure of the edge or the like, and a reliability p about the estimated direction of the structure.
  • min(x, y, z) is the minimum of x, y and z; max(x, y, z) is the maximum of x, y and z; and clip(x, a) is the function limiting x to less than a.
  • FIG. 4( a ) is illustrative of the aforesaid index r
  • FIG. 4( b ) is illustrative of the aforesaid index q.
  • as the direction of the structure gets close to horizontal, the index r takes the value of 0; as it gets close to vertical as shown in FIG. 4(a)-(1), the index r takes the value of 1; and as it gets close to an oblique 45° as shown in FIG. 4(a)-(2), the index r takes the value of 0.5.
  • if the structure in the neighborhood is close to an edge, the index q takes the value of 0, and if it gets close to a flat portion or stripe as shown in FIGS. 4(b)-(4) and 4(b)-(6), the index q takes the value of 1.
  • when the noise quantity N is large and the contrast of the structure in the neighborhood is weak, the index r approaches 0, the index q approaches 1, and the index p approaches 0.
  • the index q has the function of figuring out the probability of the neighborhood of the pixel of interest belonging to a given image class such as edges or stripes, and with the index q, the feature of the structure in the neighborhood of the pixel of interest can be judged numerically.
  • the image class may include, in addition to the edges or stripes, a texture portion. In the embodiment here, whether or not the feature of the structure in the neighborhood of the pixel of interest is an edge, a stripe or a texture portion could thus be specifically judged.
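  • Since the set of formulae (2) is not reproduced in the text, the sketch below gives one plausible realization consistent with the behavior described above: r runs from 0 (horizontal) through 0.5 (oblique) to 1 (vertical), q runs from 0 (edge) to 1 (flat or stripe), and p falls toward 0 as noise dominates. The constants alpha and beta are hypothetical tuning parameters.

```python
def clip(x, a):
    """Limit x to at most a, as in the patent's formulae."""
    return min(x, a)

def structure_indices(dh, dv, N, alpha=2.0, beta=1e-3):
    """One plausible form of the set of formulae (2), for illustration
    only.  dh, dv: the three horizontal/vertical fluctuation
    measurements; N: the noise quantity from the noise table."""
    H, V = sum(dh), sum(dv)
    r = H / (H + V + 1e-9)                        # 0: horizontal, 1: vertical
    dominant = dh if H >= V else dv
    q = min(dominant) / (max(dominant) + 1e-9)    # uniform lines -> stripe/flat
    p = clip((H + V) / (alpha * N + beta), 1.0)   # reliability vs. noise
    return r, q, p
```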
  • F 1 and F 2 have the following features: (1) each coefficient has point symmetry with respect to the center of the filter, (2) the frequency characteristics of F 1 are symmetric with respect to both directions, horizontal and vertical, and they may have the same response to any arbitrary direction, and (3) the frequency characteristics of F 2 have opposite signs in the horizontal and vertical directions.
  • in addition, the filter coefficient of F2 has all diagonal components equal to 0.
  • F 1 and F 2 are each configured in the form of a two-dimensional linear filter having a coefficient of point symmetry, and F 2 is configured such that band correction characteristics in a particular direction take a negative value.
  • F1 ensures very high symmetry of the filter coefficient, and F2 often takes the coefficient value of 0.
  • the number of multiplications and additions at the band enhancement blocks A 123 and B 124 is much less than that for a general edge enhancement filter, making sure diminished circuit size.
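  • The kernels below are illustrative examples satisfying the three properties above; the actual coefficients shown in FIG. 6 are not reproduced in the text. The helper shows how point symmetry halves the multiplications, which is what shrinks the circuit.

```python
import numpy as np

# Illustrative 3x3 kernels, not the actual FIG. 6 coefficients.
F1 = np.array([[ 0, -1,  0],
               [-1,  4, -1],
               [ 0, -1,  0]], dtype=float) / 4.0   # near-isotropic response

F2 = np.array([[ 0, -1,  0],
               [ 1,  0,  1],                        # opposite signs in the
               [ 0, -1,  0]], dtype=float) / 2.0   # horizontal and vertical

def apply_f2(y, i, j):
    """Evaluate F2 at pixel (i, j), exploiting point symmetry: each
    symmetric pixel pair is summed before multiplying, so only one
    multiplication per distinct coefficient is needed."""
    horiz = y[i, j - 1] + y[i, j + 1]   # pair with coefficient +0.5
    vert  = y[i - 1, j] + y[i + 1, j]   # pair with coefficient -0.5
    return 0.5 * horiz - 0.5 * vert
```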
  • V1 and V2 are sent out to the weighting block 126, and p, q and r to the weighting coefficient determination block 125.
  • weights W 1 and W 2 for the weighted sum of V 1 and V 2 at the weighting block 126 are computed.
  • a set of specific calculation formulae is given by the following (3).
  • s1 and s2 are the weights optimum for a subject that is an edge or line, and t1 and t2 are the weights optimum for a subject in stripe form.
  • Each weight is experimentally determined beforehand in such a way as to become optimum for the subject.
  • FIG. 7 is illustrative of an example of s1 and s2, and t1 and t2. More specifically, FIG. 7(a) is indicative of the weight characteristics for an edge, and FIG. 7(b) is indicative of the weight characteristics for a stripe. A solid line and a broken line in FIG. 7(a) are indicative of s1(r) and s2(r), respectively.
  • here F1 and F2 are the filters shown in FIG. 6. The more likely the subject is to be an edge, the closer W1 and W2 get to s1 and s2, respectively; and the more likely the subject is to be a stripe, the closer W1 and W2 get to t1 and t2, respectively. The surer the judgment of whether the subject is a stripe or an edge, the closer the calculated weight gets to the weight for each subject.
  • the weighting coefficient determination block 125 of FIG. 2 sends the calculated weight out to the weighting block 126 .
  • the weighting block 126 receives V1 and V2 from the band enhancement blocks A123 and B124 and the weights W1 and W2 from the weighting coefficient determination block 125, and computes the final edge component E from formula (4), which is sent out to the YC synthesis block 110.
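  • The sketch below shows one plausible form of the set of formulae (3) together with the weighted sum of formula (4); the exact expression of (3) is not reproduced in the text, so the blend of the edge weights s1(r), s2(r) and stripe weights t1(r), t2(r) by q, scaled by the reliability p, is an assumption.

```python
def weights(p, q, r, s1, s2, t1, t2):
    """Plausible form of formulae (3): blend edge-optimal and
    stripe-optimal weights by the structure index q and scale by the
    reliability p.  s1, s2, t1, t2 are the pre-determined weight
    curves, passed as functions of the direction index r."""
    w1 = p * ((1.0 - q) * s1(r) + q * t1(r))
    w2 = p * ((1.0 - q) * s2(r) + q * t2(r))
    return w1, w2

def edge_component(v1, v2, w1, w2):
    """Formula (4): weighted sum of the two band-corrected outputs."""
    return w1 * v1 + w2 * v2
```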
  • the direction judgment block 121 in the edge enhancement block 106 acquires the noise characteristics of the imaging system from the noise table 105 .
  • it is acceptable to acquire characteristics of the imaging system other than the noise characteristic information in the noise table 105: for instance, information about the type and position of a pixel deficiency, or information about sensitivity differences between pixels at which the same kind of color information is obtained.
  • the weighting block 126 determines the weight so that the more the quantity of noise, the more isotropic the band correction characteristics of weighted addition become, and sends the result out to the YC synthesis block 110.
  • the weight is determined at the weighting block 126 such that the band correction characteristics of the result of weighted addition become smaller in the direction orthogonal to the direction along which there are successive pixel deficiencies.
  • the first embodiment corresponds to claims 1 to 11 , and claims 14 to 21 as well.
  • the digital camera 100 of FIG. 1 is tantamount to the image processor for implementing correction of the spatial frequency band of an entered image;
  • the edge enhancement block 106 is tantamount to a plurality of band correction means having mutually distinct band correction characteristics;
  • the direction judgment block 121 of FIG. 2 is tantamount to the feature quantity computation means for computing a given feature quantity in the neighborhood of each pixel of the entered image;
  • the YC synthesis block 110 is tantamount to the synthesis means for synthesizing the outputs of said plurality of band correction means on the basis of said feature quantity.
  • that "said characteristics of the imaging system are noise characteristics that provide a relation of the noise quantity vs. pixel value" is tantamount to the characteristics of the noise table 105 of FIG. 2 being entered in the edge enhancement block 106.
  • that "said synthesis means implements synthesis on the basis of the characteristics of the imaging system when said input image is taken" is tantamount to the edge enhancement block 106, in which the characteristics of said noise table 105 are entered, being connected to the YC synthesis block 110.
  • RAW development software runs with RAW image data as input, said data corresponding to an image recorded in the image buffer 104 in FIG. 1, producing a color image by implementing in software the processing that is usually implemented within a digital camera.
  • at step S1, the RAW image data is read.
  • at step S2, an edge image is formed that includes an edge component of each pixel extracted from the RAW image data, and stored in the memory.
  • at step S3, one pixel of interest is chosen from the RAW image data, and color interpolation processing is applied to that pixel of interest to compensate for a component missing from it.
  • at step S5, the RGB components of the pixel of interest are color transformed on a color transform matrix into a luminance component and a color difference component.
  • at step S6, the gain of the color difference component is controlled to enhance color saturation.
  • at step S7, the value corresponding to the pixel of interest in the edge image, figured out at step S2, is first added to the luminance component. It is then synthesized with the color difference component whose color saturation was enhanced at step S6, and converted back to an RGB value. Thus, the processing for the pixel of interest is over, and the result is stored in the memory. Finally, at step S8, whether or not unprocessed pixels remain is judged. If not, the final result of each pixel held in the memory is stored; if so, steps S3 to S8 are repeated. For step S2, the generation of the edge image, a further detailed subroutine is shown in the form of a flowchart in FIG. 9.
  • one pixel of interest in the RAW image data is chosen at step S10, and then at step S11 the neighborhood of the pixel of interest is read out.
  • at step S12, the noise quantity N is figured out based on the ISO sensitivity information at the taking time and information about the type of the digital camera from which the RAW data have been produced, as is the case with the direction judgment block 121.
  • at step S13, the values of p, q and r are figured out from the neighboring pixel values, as is the case with the direction judgment block 121.
  • at step S14, the edge components V1 and V2 are figured out, as is the case with the band enhancement blocks A123 and B124.
  • at step S15, W1 and W2 are figured out from p, q and r, as is the case with the weighting coefficient determination block 125, and at step S16, the final edge component of the pixel of interest is figured out, as is the case with the weighting block 126, and held in the memory as the pixel value of the edge image at the pixel of interest. At step S17, whether or not the processing for all the pixels is over is judged, and if there are unprocessed pixels, steps S10 to S17 are repeated.
  • the step of reading the image data is tantamount to step S1 for reading the RAW data; the step of implementing a plurality of band corrections having mutually distinct band correction characteristics is tantamount to the step S14 of figuring out the edge component; the step of figuring out a given feature quantity in the neighborhood of each pixel of said image data is tantamount to the steps S12 and S13 in FIG. 9; and the step of synthesizing the outputs from said multiple band corrections on the basis of said feature quantity is tantamount to the steps S15 and S16.
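  • Putting the steps together, the skeleton below mirrors the flow of FIG. 8; every helper function is a hypothetical stand-in for processing described above, not an API defined by the text.

```python
def develop(raw_path):
    """Skeleton of the RAW development flow (steps S1-S8 of FIG. 8).
    All helpers are hypothetical stand-ins."""
    raw = read_raw(raw_path)                     # S1: read RAW data
    edge_image = compute_edge_image(raw)         # S2: per-pixel edge component
    result = {}
    for pixel in pixels_of(raw):                 # S3: choose pixel of interest
        rgb = color_interpolate(raw, pixel)      #     fill missing components
        y, cb, cr = yc_transform(rgb)            # S5: matrix color transform
        cb, cr = saturation_gain(cb, cr)         # S6: enhance color saturation
        y = y + edge_image[pixel]                # S7: add edge component and
        result[pixel] = yc_to_rgb(y, cb, cr)     #     synthesize back to RGB
    return result                                # S8: repeat until done
```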
  • FIGS. 10 to 15 are illustrative of the second embodiment.
  • FIG. 10 is illustrative of the architecture of the second embodiment
  • FIG. 11 is illustrative of the setup of an edge enhancement block 206
  • FIG. 12 is indicative of aberrations of an optical system 201
  • FIGS. 13 and 14 are illustrative of coefficients for compensating for MTF deterioration caused by aberrations
  • FIG. 15 is a flowchart of the computation of an edge component in the second embodiment.
  • the digital camera 200 has no noise table 105 of the first embodiment, and instead includes an aberration correction table 202.
  • the optical system 201 is of a special type with aberration characteristics about its MTF deterioration being a superposition of horizontal and vertical deteriorations, not axially symmetric, as shown in FIG. 12 . Ovals in FIG. 12 are indicative of the degree and shape of optical blurs of point sources. Such characteristics are experienced in an optical system comprising, for instance, cylindrical lenses and special prisms.
  • the edge enhancement block 206 acts differently from the edge enhancement block 106.
  • the aberration correction table 202 is connected to the optical system 201 and edge enhancement block 206 .
  • FIG. 11 is illustrative of details of the setup of the edge enhancement block 206 shown in FIG. 10 .
  • This block 206 is different from the edge enhancement block 106 in the first embodiment shown in FIG. 2 in that a direction judgment block 221 and a weighting coefficient determination block 225 operate differently.
  • the aberration correction table 202 is connected to the direction judgment block 221 and weighting coefficient determination block 225 . Connections between the direction judgment block 221 , tone control block 122 , band enhancement block A 123 , band enhancement block B 124 , weighting coefficient determination block 225 and weighting block 126 are the same as in the first embodiment of FIG. 2 .
  • the second embodiment operates the same way as does the first embodiment, except the edge enhancement block 206 that, too, operates the same way as in the first embodiment, except the direction judgment block 221 and weighting coefficient determination block 225 . Therefore, only the operation of the direction judgment block 221 and weighting coefficient determination block 225 is now explained.
  • at the direction judgment block 221, the direction and structure of an edge in the neighborhood of the pixel of interest are estimated using information in the aberration correction table 202, unlike the first embodiment.
  • specifically, correction coefficients Ch and Cv stored in the aberration correction table are used, which are indicative of to what degree horizontal and vertical band corrections must be implemented so as to make compensation for MTF deterioration depending on the state of the optical system 201 at the taking time.
  • each coefficient is a function of the coordinates (x, y) of the pixel of interest, one example of which is shown in FIGS. 13(a) and 13(b).
  • Ch, the coefficient for making compensation for horizontal deterioration, is a function of x alone, and Cv, the coefficient for making compensation for vertical deterioration, is a function of y alone.
  • the coefficient Ch takes a minimum value of 1 at the bottom of FIG. 13( a ) and an increasing value at the top.
  • the correction coefficients Ch and Cv being functions of x alone and y alone, respectively, has the advantage of reducing the quantity of data in the aberration correction table.
  • these correction coefficients Ch and Cv are read out of the aberration correction table to implement computation using the following set of formulae (5) in place of formulae (2) in the first embodiment.
  • min(x, y, z) is the minimum of x, y and z; max(x, y, z) is the maximum of x, y and z; and clip(x, a) is the function limiting x to less than a.
  • α and β in the set of formulae (5) are again the constants as in the first embodiment, and Nc is a constant, which corresponds to the noise quantity N found using the noise table 105 in the first embodiment.
  • constants M1(x, y) and M2(x, y), indicative of to what degree the weight used at the weighting block 126 must be corrected so as to make compensation for MTF deterioration depending on the state of the optical system 201 at the taking time, are read out of the aberration correction table 202 for computation, where (x, y) are the coordinates of the pixel of interest.
  • FIGS. 14(a) and 14(b) are illustrative of one example of the constants M1(x, y) and M2(x, y), respectively. As can be seen from FIG. 14(a), the constant M1(x, y) has symmetry about the center of the optical axis in the horizontal and vertical directions, and as can be seen from FIG. 14(b), the constant M2(x, y) has opposite signs about the center of the optical axis in the horizontal and vertical directions.
  • the weighting coefficient figured out from the set of formulae (3) is further corrected according to the following set of formulae (6) to send W 1 ′ and W 2 ′ out to the weighting block 126 .
  • W1′ = W1*M1 + W2*M2
  • W2′ = W1*M2 + W2*M1 (6)
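  • In code, this per-pixel weight correction might look as follows; m1_table and m2_table are hypothetical 2-D lookup arrays standing in for the aberration correction table 202.

```python
def corrected_weights(w1, w2, x, y, m1_table, m2_table):
    """Formulae (6): correct the weights W1, W2 with the aberration
    correction constants M1(x, y) and M2(x, y) for the pixel at (x, y)."""
    m1, m2 = m1_table[y, x], m2_table[y, x]
    w1p = w1 * m1 + w2 * m2
    w2p = w1 * m2 + w2 * m1
    return w1p, w2p
```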
  • the “band characteristics of the optical system” in claim 12 is tantamount to those of the optical system 201 stored in the aberration correction table 202 .
  • FIG. 15 is a flowchart of the second embodiment.
  • the subroutine flowchart for the edge component computation in the first embodiment shown in FIG. 9 is partly modified.
  • Like numerals in FIG. 9 are intended to indicate like steps, and different numerals are intended to indicate corrected steps. Only the corrected steps are now explained.
  • at step S23, the step S13 of FIG. 9 is implemented with the set of formulae (2) changed to the set of formulae (5), and the correction coefficients Ch and Cv needed here are supposed to be recorded in the header of the RAW data.
  • the step S15 of FIG. 9 is implemented with the set of formulae (3) changed to the set of formulae (6), and the correction coefficients M1 and M2 needed here are again supposed to be recorded in the header of the RAW data.
  • the flowchart relating to the second embodiment shown in FIG. 15 corresponds to the image processing program recited in claim 21 .
  • FIGS. 16 to 19 are illustrative of the third embodiment of the invention.
  • FIG. 16 is illustrative of the architecture of the third embodiment
  • FIG. 17 is illustrative of the setup of the edge enhancement block in FIG. 16
  • FIG. 18 is illustrative of a correction coefficient table adapting to the type of the optical LPF
  • FIG. 19 is a flowchart for how to figure out the edge component in the third embodiment.
  • the instant embodiment shown in FIG. 16 is applied to a digital camera, too.
  • much of the third embodiment overlaps the first embodiment; components having the same action are given the same numerals as in the first embodiment, and are not explained again.
  • the digital camera 300 is provided with an optical LPF 301 in front of the primary color Bayer CCD 102, and an optical LPF information ROM 302 is used in place of the noise table 105 in the first embodiment.
  • the edge enhancement block is indicated by reference numeral 306 because it acts differently from the edge enhancement block 106 in the first embodiment.
  • the optical LPF information ROM 302 is connected to the edge enhancement block 306 .
  • FIG. 17 is illustrative of details of the setup of that edge enhancement block 306, comprising a direction judgment block 321 and a weighting coefficient determination block 325, the operation of which is distinct from that in the edge enhancement block 106 in the first embodiment.
  • the optical LPF information ROM 302 is connected to both.
  • the optical LPF information ROM 302 holds a coefficient table such as that depicted in FIG. 18(d), depending on the type of the optical LPF mounted in the digital camera.
  • FIG. 18( d ) is indicative of the relations of optical LPF types 1 , 2 and 3 vs. coefficients. There are six coefficients C 1 to C 6 involved, with C 1 and C 2 used at a direction judgment block 321 and C 3 to C 6 used at a weighting coefficient determination block 325 .
  • the third embodiment operates the same way as does the first embodiment, except the edge enhancement block 306 that, too, operates the same way as in the first embodiment, except the direction judgment block 321 and weighting coefficient determination block 325 . Therefore, only the operation of the direction judgment block 321 and weighting coefficient determination block 325 is now explained.
  • at the direction judgment block 321, the direction and structure of an edge in the neighborhood of the pixel of interest are estimated using information in the optical LPF information ROM 302, unlike the first embodiment.
  • the correction coefficients C1 and C2 for the type corresponding to the optical LPF 301 are read out of the optical LPF information ROM 302 to implement calculation using the following set of formulae (7) instead of the set of formulae (2).
  • min(x, y, z) is the minimum of x, y and z; max(x, y, z) is the maximum of x, y and z; and clip(x, a) is the function limiting x to less than a.
  • α and β in the set of formulae (7) are again the constants as in the first embodiment, and Nc is a constant, which corresponds to the quantity of noise N found using the noise table 105 in the first embodiment.
  • the correction coefficients C3 to C6 for the type corresponding to the optical LPF 301 are again read out of the optical LPF information ROM 302, and constants C1′ and C2′, indicative of to what degree the weight used at the weighting block 126 must be corrected so as to make compensation for MTF deterioration depending on the optical LPF 301, are figured out according to the following set of formulae (8).
  • C 3 and C 4 are the appropriate coefficients for horizontal and vertical band deteriorations when there is an edge in the neighborhood of the pixel of interest
  • C 5 and C 6 are the appropriate correction coefficients for horizontal and vertical band deteriorations when there is a stripe in the neighborhood of the pixel of interest.
  • the weighting coefficient figured out according to the set of formulae (3) is further corrected according to the following set of formulae (9) to send W 1 ′ and W 2 ′ out to the weighting block 126 .
  • W1′ = W1*C1′ + W2*C2′
  • W2′ = W1*C2′ + W2*C1′ (9)
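  • A sketch of this correction follows. The set of formulae (8) is not reproduced in the text, so the blend of the edge coefficients (C3, C4) and stripe coefficients (C5, C6) by the structure index q, and the change to the F1/F2 basis, are assumptions for illustration only.

```python
def lpf_corrected_weights(w1, w2, q, c3, c4, c5, c6):
    """Illustrative stand-in for formulae (8) and (9).  Blend the
    edge coefficients (c3, c4) and stripe coefficients (c5, c6) by the
    structure index q into horizontal/vertical corrections, express
    them in the F1 (isotropic) / F2 (sign-opposed) basis, then correct
    the weights as in formulae (9)."""
    ch = (1.0 - q) * c3 + q * c5        # horizontal correction
    cv = (1.0 - q) * c4 + q * c6        # vertical correction
    c1p = 0.5 * (ch + cv)               # hypothetical C1'
    c2p = 0.5 * (ch - cv)               # hypothetical C2'
    return w1 * c1p + w2 * c2p, w1 * c2p + w2 * c1p
```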
  • FIG. 19 is a flowchart of the embodiment here.
  • the flowchart for the edge component computation in the first embodiment shown in FIG. 9 is partly removed and corrected.
  • Like numerals in FIG. 9 are intended to indicate like steps, and different numerals are intended to indicate corrected steps. Only the corrected steps are now explained.
  • the step S13 of FIG. 9 is implemented with the set of formulae (2) changed to the set of formulae (7), and the correction coefficients C1 and C2 needed here (for the type of the optical LPF used for taking the RAW data) are recorded in the header of the RAW data.
  • the step S15 of FIG. 9 is implemented with the set of formulae (3) changed to the sets of formulae (8) and (9), and the correction coefficients C3 to C6 needed here are again supposed to be recorded in the header of the RAW data.
  • the flowcharts relating to the first embodiment of FIG. 9, the second embodiment of FIG. 15 and the third embodiment of FIG. 19 correspond to the image processing program recited in claim 21.

Abstract

The digital camera (100) has an image processor that corrects an input image for a spatial frequency band. The edge enhancement block (106) computes an edge component for band correction. A signal interpolated at color interpolation block (107) to make compensation for a color component missing from each pixel of a single-chip image is then converted at YC transform block (108) into a luminance signal and a color difference signal after tone correction, the luminance and color difference signals being sent out to YC synthesis block (110) and color saturation correction block (109), respectively. Then, the color saturation correction block (109) controls the color saturation of the color difference signal to send it out to YC synthesis block (110). At the same time, the edge component computed at edge enhancement block (106) is sent out to YC synthesis block (110), too.

Description

    TECHNICAL ART
  • The present invention relates to an image processor and image processing program for applying the optimum edge enhancement to a subject.
  • BACKGROUND ART
  • Commonly for image processing systems mounted in digital cameras or the like, frequency-band enhancement processing is applied so as to give the final image sharpness. The simplest approach to the band enhancement processing is to rely on only one band enhancement filter having fixed characteristics. Still, many contrivances have been proposed in the art, because a single fixed filter has difficulty producing the best results across various subjects.
  • Further, JP(A)2002-344743 discloses an example of analyzing a half tone structure in an image to control an enhancement filter depending on a screen angle. Furthermore, Japanese Patent No. 2858530 sets forth a method for generating an edge component on the basis of the results of analysis of an edge direction, such that there are far fewer fluctuations along the edge direction, and adding it to the original signal, thereby obtaining an image of better quality even at a decreasing S/N ratio.
  • However, such prior art says nothing about making the optimum band enhancement for a subject adapt well to various image capturing conditions, or about implementing it in much smaller circuit size.
  • In view of such problems with the prior art as described above, an object of the invention is to provide an image processor that is capable of applying the optimum band enhancement to a subject in much smaller circuit size, and an image processing program.
  • DISCLOSURE OF THE INVENTION
  • (1) According to the invention, that object is accomplishable by the provision of an image processor adapted to correct the spatial frequency band of an input image, characterized by comprising a plurality of band correction means having mutually distinct band correction characteristics, a feature quantity computation means adapted to figure out a feature quantity in the neighborhood of each pixel of the input image, and a synthesizing means adapted to synthesize the outputs of said plurality of band correction means on the basis of said feature quantity. According to this arrangement, it is possible to apply the optimum band enhancement to a subject with the state of an optical system, etc. reflected on it, and implement band enhancement processing in much smaller circuit size.
  • (2) According to the second invention, the invention (1) is further characterized in that said synthesis means is operable to figure out a weight for each of said band correction means on the basis of said feature quantity, and produce the result of weighted addition by applying said weight to the result of band correction by each of said band correction means. According to this arrangement, it is possible to give the optimum weight to the result of band correction depending on the structure of the subject.
  • (3) According to the invention (3), the invention (1) is further characterized in that said feature quantity computation means is operable to figure out the direction of an edge in said neighborhood as said given feature quantity. According to this arrangement, it is possible to figure out to which direction of horizontal, vertical and oblique directions the direction of the edge is close.
  • (4) According to the invention (4), the invention (1) is further characterized in that said feature quantity computation means is operable to figure out the probability of said neighborhood belonging to a given image class as said given feature quantity. According to this arrangement, it is possible to judge the feature of the structure in the neighborhood of the pixel of interest in terms of a numerical value.
  • (5) According to the invention (5), the invention (3) is further characterized in that said feature quantity computation means is operable to further figure out the reliability of the result of computation of the direction of said edge as said given feature quantity. According to this arrangement, it is possible to improve the reliability of the result of computation of the direction of the edge.
  • (6) According to the invention (6), the invention (4) is further characterized in that said given image class includes any one of an edge portion, a stripe portion, and a texture portion. According to this arrangement, it is possible to provide specific judgment of the feature of the structure in the neighborhood of the pixel of interest.
  • (7) According to the invention (7), the invention (1) is further characterized in that said feature quantity computation means is operable to figure out said feature quantity on the basis of the characteristics of the imaging system when said input image is taken. According to this arrangement, it is possible to figure out the feature quantity optimum for the subject.
  • (8) According to the invention (8), the invention (1) is further characterized in that said synthesis means is operable to implement synthesis on the basis of the characteristics of an imaging system when said input image is taken. According to this arrangement, it is possible to implement the tone correction optimum for the subject.
  • (9) According to the invention (9), the invention (7) is further characterized in that said characteristics of the imaging system are noise characteristics that provide a relation of the noise quantity vs. pixel value. According to this arrangement, it is possible to make noise correction on the basis of ISO sensitivity.
  • (10) According to the invention (10), the invention (7) or (8) is further characterized in that said characteristics of the imaging system are information about the type and position of a pixel deficiency. According to this arrangement, it is possible to figure out the feature quantity on the basis of information about the type and position of the pixel deficiency.
  • (11) According to the invention (11), the invention (7) or (8) is further characterized in that said characteristics of the imaging system are a sensitivity difference between pixels at which the same type of color information is obtained.
  • (12) According to the invention (12), the invention is further characterized in that said characteristics of the imaging system are the spatial frequency characteristics of the optical system. According to this arrangement, it is possible to figure out the feature quantity on the basis of the spatial frequency characteristics of the optical system.
  • (13) According to the invention (13), the invention (12) is further characterized in that said characteristics of the imaging system are the spatial frequency characteristics of an optical LPF. According to this arrangement, it is possible to figure out the feature quantity on the basis of the characteristics of the optical LPF.
  • (14) According to the invention (14), the invention (9) is further characterized in that said feature quantity computation means is operable to lower the precision with which said direction of the edge is figured out as said noise quantity grows large. According to this arrangement, it is possible to avoid mistaking for an edge a structure that should not be taken as one.
  • (15) According to the invention (15), the invention (9) is further characterized in that said feature quantity computation means is operable to lower the reliability of said direction of the edge as said noise quantity grows large. According to this arrangement, it is possible to prevent a failure in band correction processing when the noise level is high.
  • (16) According to the invention (16), the invention (9) is further characterized in that said synthesis means is operable to determine said weight such that the greater said noise quantity, the more isotropic the band correction characteristics of said weighted addition become. According to this arrangement, it is possible to stave off a failure in band correction processing when the noise level is high.
  • (17) According to the invention (17), the invention (9) is further characterized in that said synthesis means is operable to determine said weight such that the band correction characteristics of said result of weighted addition become small in a direction orthogonal to a direction along which there are successive pixel deficiencies. According to this arrangement, it is possible to stave off a failure in band correction processing when there is a pixel deficiency.
  • (18) According to the invention (18), the invention (1) is further characterized in that there are two band correction means, each of which is a two-dimensional linear filter having a coefficient of point symmetry. According to this arrangement wherein the filter coefficient is of high symmetry, it is possible to reduce the number of computations at the band enhancement block and, hence, diminish the circuit size involved.
  • (19) According to the invention (19), the invention (1) is further characterized in that one of said filters is such that the band correction characteristics in a particular direction have a negative value. According to this arrangement, wherein the coefficient value often takes a value of 0, it is possible to ensure a further reduction in the number of computations at the band enhancement block and, hence, a much diminished circuit size.
  • (20) According to the invention (20), the invention (1) is further characterized in that said band correction means is operable to apply given tone transform to said input image, and then implement band correction so that said feature quantity computation means can figure out said feature quantity with none of said given tone transform. According to this arrangement, it is possible to figure out said feature quantity with much higher precision.
  • (21) According to the invention (21), there is an image processing program provided to correct image data for a spatial frequency band, which lets a computer implement steps of reading image data, implementing a plurality of band corrections having mutually distinct band correction characteristics, figuring out a given feature quantity in the neighborhood of each pixel of said image data, and synthesizing the outputs of said plurality of band corrections on the basis of said feature quantity. According to this arrangement, it is possible to run the optimum band enhancement processing for the read image on software.
  • According to the invention, it is possible to provide an image processor and image processing program capable of applying the optimum band enhancement to a subject according to ISO sensitivity and the state of an optical system, with much smaller circuit size.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is illustrative of the architecture of the first embodiment.
  • FIG. 2 is illustrative of the setup of the edge enhancement block in FIG. 1.
  • FIG. 3 is illustrative of the operation of the direction judgment block in FIG. 1.
  • FIG. 4 is illustrative of the operation of the direction judgment block in FIG. 1.
  • FIG. 5 is illustrative of the noise table in FIG. 1.
  • FIG. 6 is illustrative of the enhancement characteristics of the band enhancement block in FIG. 1.
  • FIG. 7 is illustrative of exemplary weight characteristics.
  • FIG. 8 is a flowchart representative of the processing steps in the first embodiment.
  • FIG. 9 is a flowchart of how to compute an edge component in FIG. 8.
  • FIG. 10 is illustrative of the architecture of the second embodiment.
  • FIG. 11 is illustrative of the setup of the edge enhancement block in FIG. 10.
  • FIG. 12 is illustrative of the aberrations of the optical system.
  • FIG. 13 is illustrative of the coefficient for compensating for MTF deterioration due to aberrations.
  • FIG. 14 is illustrative of the coefficient for compensating for MTF deterioration due to aberrations.
  • FIG. 15 is a flowchart of how to figure out an edge component in the second embodiment.
  • FIG. 16 is illustrative of the architecture of the third embodiment.
  • FIG. 17 is illustrative of the setup of the edge enhancement block in FIG. 16.
  • FIG. 18 is illustrative of a correction coefficient table corresponding to the type of optical LPF.
  • FIG. 19 is a flowchart of how to figure out an edge component in the third embodiment.
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • Some embodiments of the invention are now explained with reference to the accompanying drawings. FIGS. 1 to 9 are illustrative of the first embodiment of the invention. FIG. 1 is illustrative of the architecture of the first embodiment; FIG. 2 is illustrative of the setup of the edge enhancement block in FIG. 1; FIGS. 3 and 4 are illustrative of the operation of the direction judgment block in FIG. 1; FIG. 5 is illustrative of the noise table in FIG. 1; FIG. 6 is illustrative of the enhancement characteristics of the band enhancement block in FIG. 1; FIG. 7 is illustrative of one exemplary weight; FIG. 8 is a flowchart of the RAW development software in the first embodiment; and FIG. 9 is a flowchart of how to figure out the edge component in FIG. 8.
  • The architecture of the first embodiment according to the invention is shown in FIG. 1. The embodiment here is a digital camera shown generally by 100 that is built up of an optical system 101, a primary color Bayer CCD 102, an A/D converter block 103, an image buffer 104, a noise table 105, an edge enhancement block 106, a color interpolation block 107, a YC transform block 108, a color saturation correction block 109, a YC synthesis block 110, a recording block 111 and a control block 112. The primary color Bayer CCD 102 is connected to the image buffer 104 by way of the A/D converter block 103, and the image buffer 104 is connected to the recording block 111 by way of the color interpolation block 107, YC transform block 108, color saturation correction block 109 and YC synthesis block 110 in order. The image buffer 104 is also connected to the YC synthesis block 110 by way of the edge enhancement block 106. The YC transform block 108 is also connected directly to the YC synthesis block 110. Although not illustrated, the control block 112 is bi-directionally connected to the respective blocks.
  • FIG. 2 is illustrative of the details of the setup of the edge enhancement block 106 in FIG. 1. The edge enhancement block 106 is built up of a direction judgment block 121, a tone control block 122, a band enhancement block A123, a band enhancement block B124, a weighting coefficient determination block 125 and a weighting block 126. The image buffer 104 is connected to the direction judgment block 121, and to the band enhancement blocks A123 and B124 as well by way of the tone control block 122. The band enhancement blocks A123 and B124 are each connected to the weighting block 126. The direction judgment block 121 is connected to the weighting block 126 by way of the weighting coefficient determination block 125.
  • Then, the operation of the digital camera 100 is explained. As the shutter (not shown) is pressed down, it causes an optical image formed through the optical system 101 to be photoelectrically converted at the primary color Bayer CCD 102 and recorded as an image signal in the image buffer 104 by way of the A/D converter block 103. For the digital image signal recorded in the image buffer 104, the edge component for band correction is first computed at the edge enhancement block 106, and at the color interpolation block 107, color components missing at each pixel are compensated for by interpolation of the recorded image. The signal interpolated at the color interpolation block 107 is then transformed at the YC transform block 108 into a luminance signal and a color difference signal after tone correction, with the luminance signal sent out to the YC synthesis block 110 and the color difference signal to the color saturation correction block 109. Thereafter, the color saturation of the color difference signal is controlled at the color saturation correction block 109, and the result is sent out to the YC synthesis block 110. At the same time, the edge component computed at the edge enhancement block 106, too, is sent out to the YC synthesis block 110.
  • As the YC synthesis block 110 receives the color difference signal with its color saturation controlled, the luminance signal and the edge component, it first adds the edge component to the luminance signal to create a luminance signal with its band corrected. Known processing is then implemented to combine that luminance signal with the color difference signal for conversion into an RGB signal, and the result is sent out to the recording block 111. At the recording block 111, the entered signal is compressed and recorded in a recording medium. Thus, the operation of the digital camera 100 is over.
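  • As a concrete illustration of this synthesis step, the sketch below (Python with NumPy is used for all sketches in this description; none of this code appears in the patent itself) adds the edge component to the luminance plane and converts back to RGB. The BT.601 conversion constants are an assumption made for illustration; the patent does not specify the conversion matrix.

```python
import numpy as np

def yc_synthesize(y, cb, cr, edge):
    """Add the edge component to the luminance, then convert YCbCr -> RGB.

    The BT.601 constants are an assumed, commonly used choice; cb and cr
    are taken to be zero-centered color difference planes.
    """
    y_corr = y + edge  # band-corrected luminance
    r = y_corr + 1.402 * cr
    g = y_corr - 0.344136 * cb - 0.714136 * cr
    b = y_corr + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```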
  • The operation of the edge enhancement block 106—the feature of the first embodiment according to the invention—is now explained in detail. At the edge enhancement block 106, digital image signals within the image buffer 104 are processed; that is, the direction and structure of an edge near each pixel are first estimated at the direction judgment block 121. FIG. 3 is illustrative of an exemplary pixel arrangement and filter used at the direction judgment block for the estimation of the edge direction and structure in the neighborhood of each pixel. FIG. 3(a) is illustrative of a pixel arrangement when the neighborhood center of the pixel of interest is not G, and FIG. 3(b) is illustrative of a pixel arrangement when the neighborhood center of the pixel of interest is G. On FIG. 3(a), a filter Dh for measuring the magnitude of a horizontal fluctuation is also shown. On FIG. 3(b), a filter Dv for measuring the magnitude of a vertical fluctuation is also shown.
  • First, the direction judgment block 121 reads only the G component out of the 5×5 neighborhood of each pixel, as shown in FIG. 3(a) (the position of each blank takes a value of 0). The two filters Dh and Dv are then applied to the G component to examine the edge structure of the neighborhood. In this case, the filter Dh is primarily applied to the positions P1, P2 and P3 shown in FIG. 3(a), while the filter Dv is primarily applied to the positions P4, P2 and P5 shown in FIG. 3(a). As a result, three measurements dh1, dh2, dh3 and dv1, dv2, dv3 are obtained for each direction. For instance, when the neighborhood center is not a G pixel as in FIG. 3(a), a set of specific computation formulae is given by (1).

  • dh1=|2*(G3−G4)+2*(G8−G9)| (Result at P1)

  • dh2=|G1−G2+2*(G6−G7)+G11−G12| (Result at P2)

  • dh3=|2*(G4−G5)+2*(G9−G10)| (Result at P3)

  • dv1=|2*(G1−G6)+2*(G2−G7)| (Result at P4)

  • dv2=|G3−G8+2*(G4−G9)+G5−G10| (Result at P2)

  • dv3=|2*(G6−G11)+2*(G7−G12)| (Result at P5)  (1)

  • where |x| stands for the absolute value of x.
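  • A minimal sketch of these measurements follows; since FIG. 3(a) is not reproduced here, the labeling of the twelve G pixels (rows G1–G2, G3–G5, G6–G7, G8–G10 and G11–G12 from top to bottom) is an assumption reconstructed from the structure of formulae (1).

```python
def fluctuation_measurements(G):
    """Directional fluctuation measurements of formulae (1).

    G maps the labels 1..12 of FIG. 3(a) to G pixel values; the labeling
    is an assumed reconstruction, as the figure is not reproduced here.
    """
    g = lambda i: float(G[i])
    dh1 = abs(2 * (g(3) - g(4)) + 2 * (g(8) - g(9)))            # at P1
    dh2 = abs(g(1) - g(2) + 2 * (g(6) - g(7)) + g(11) - g(12))  # at P2
    dh3 = abs(2 * (g(4) - g(5)) + 2 * (g(9) - g(10)))           # at P3
    dv1 = abs(2 * (g(1) - g(6)) + 2 * (g(2) - g(7)))            # at P4
    dv2 = abs(g(3) - g(8) + 2 * (g(4) - g(9)) + g(5) - g(10))   # at P2
    dv3 = abs(2 * (g(6) - g(11)) + 2 * (g(7) - g(12)))          # at P5
    return (dh1, dh2, dh3), (dv1, dv2, dv3)
```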
  • Further, at the direction judgment block 121, the average value Ag of the G components in the neighborhood of the pixel of interest is at the same time found to estimate the average quantity of noise in the neighborhood on the basis of the noise characteristic information loaded in the noise table 105. In the noise table 105, the characteristics of noise occurring at the primary color Bayer CCD 102 are stored for each ISO sensitivity at the taking time, as shown in FIG. 5. Such characteristics, indicative of the relation of noise quantity vs. pixel value, are obtained by measurement beforehand. At the direction judgment block 121, a table Nc indicative of the relation of pixel value vs. noise quantity at the present ISO sensitivity is read out of these characteristics to find the noise quantity N in the neighborhood from N=Nc(Ag) with Ag as the index. The following set of formulae (2) is then used to compute, from the measurements dh1 to dv3 and N, an index r indicative of the direction of the structure in the neighborhood of the pixel of interest, an index q indicative of the type of the structure of the edge or the like, and a reliability p about the estimated direction of the structure of the edge or the like.

  • dh=(dh1+dh2+dh3)/3

  • dv=(dv1+dv2+dv3)/3

  • qh={min(dh1,dh2,dh3)+N}/{max(dh1,dh2,dh3)+N}

  • qv={min(dv1,dv2,dv3)+N}/{max(dv1,dv2,dv3)+N}

  • r=(dv−dh)/{2*(dh+dv)+α*N}+0.5

  • p=clip{max(dh,dv)/(N*β),1}

  • q=qh if dh>dv; otherwise q=qv.  (2)

  • where min(x, y, z) is the minimum of x, y and z; max(x, y, z) is the maximum of x, y and z; and clip(x, a) is the function limiting x to at most a.
  • In formulae (2), α and β are each a given constant, and dh and dv are indicative of the average quantities of horizontal and vertical pixel value fluctuations in the neighborhood. FIG. 4(a) is illustrative of the aforesaid index r, and FIG. 4(b) is illustrative of the aforesaid index q. As the structure of the edge or the like in the neighborhood gets close to horizontal as shown in FIG. 4(a)-(3), the index r takes the value of 0; as it gets close to vertical as shown in FIG. 4(a)-(1), the index r takes the value of 1; and as it gets close to an oblique 45° as shown in FIG. 4(a)-(2), the index r takes the value of 0.5.
  • If the structure within the neighborhood of the pixel of interest gets close to an edge or line as shown in FIGS. 4(b)-(5) and 4(b)-(7), the index q takes the value of 0, and if it gets close to a flat portion or stripe as shown in FIGS. 4(b)-(4) and 4(b)-(6), the index q takes the value of 1. Here, when the noise quantity N is large and the contrast of the structure in the neighborhood is weak, the index r approaches 0.5, the index q approaches 1, and the index p approaches 0, as follows from formulae (2). Thus, the index q has the function of figuring out the probability of the neighborhood of the pixel of interest belonging to a given image class such as edges or stripes, and with the index q, the feature of the structure in the neighborhood of the pixel of interest can be judged numerically. Note here that the image class may include, in addition to edges or stripes, a texture portion. In the embodiment here, whether the feature of the structure in the neighborhood of the pixel of interest is an edge, a stripe or a texture portion can thus be specifically judged.
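  • The computation of N, r, q and p can be sketched as follows; linear interpolation stands in for the stored noise curve of FIG. 5, and the default values of α and β are illustrative assumptions.

```python
import numpy as np

def noise_quantity(avg_g, pixel_values, noise_values):
    """N = Nc(Ag): look up the noise table by the neighborhood G average.

    pixel_values/noise_values hold the measured curve for the present ISO
    sensitivity; linear interpolation stands in for the stored table.
    """
    return float(np.interp(avg_g, pixel_values, noise_values))

def direction_indices(dh123, dv123, N, alpha=1.0, beta=4.0):
    """Indices r, q, p of formulae (2); alpha and beta are illustrative."""
    dh, dv = sum(dh123) / 3.0, sum(dv123) / 3.0
    qh = (min(dh123) + N) / (max(dh123) + N)
    qv = (min(dv123) + N) / (max(dv123) + N)
    r = (dv - dh) / (2.0 * (dh + dv) + alpha * N) + 0.5
    p = min(max(dh, dv) / (N * beta), 1.0)  # clip{., 1}
    q = qh if dh > dv else qv
    return r, q, p
```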
  • While keeping in parallel with such processing at the direction judgment block 121, two kinds of edge components for band enhancement are computed at the tone control block 122, band enhancement block A123 and band enhancement block B124. First, the tone control block 122 reads the G component pixel values in the 5×5 neighborhood shown in FIG. 3(a), as is the case with the direction judgment block 121, and applies tone transform by the tone transform table T to each pixel value Gi (i = 1 to 12 or 13), that is, Gi′=T(Gi); the result of computation is sent out to the band enhancement blocks A123 and B124. The band enhancement blocks A123 and B124, in which the 5×5 linear filter coefficients F1 and F2 are loaded, apply them to the input from the tone control block 122 to compute edge components V1 and V2. FIG. 6 is illustrative of an example of the filter coefficients F1 and F2 and their band characteristics. More specifically, FIG. 6(a) is indicative of the contour plot of the frequency characteristics of the linear filter coefficient F1, and FIG. 6(b) is indicative of the contour plot of the frequency characteristics of the linear filter coefficient F2, with horizontal frequencies (Nyquist=1) in the transverse direction and vertical frequencies in the longitudinal direction. Gain is plotted in the direction coming out of the paper.
  • As can be seen from the frequency characteristics and coefficients of FIGS. 6(a) and 6(b), F1 and F2 have the following features: (1) each coefficient has point symmetry with respect to the center of the filter, (2) the frequency characteristics of F1 are symmetric with respect to both the horizontal and vertical directions, with much the same response in any direction, and (3) the frequency characteristics of F2 have opposite signs in the horizontal and vertical directions, and all diagonal components of its filter coefficient are 0. Thus, F1 and F2 are each configured as a two-dimensional linear filter having a coefficient of point symmetry, and F2 is configured such that the band correction characteristics in a particular direction take a negative value. With these features, F1 ensures very high symmetry of the filter coefficient, and the coefficient of F2 often takes the value of 0. The number of multiplications and additions at the band enhancement blocks A123 and B124 is therefore much less than that for a general edge enhancement filter, ensuring diminished circuit size.
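  • The patent's actual 5×5 coefficients are those of FIG. 6 and are not reproduced here; the 3×3 kernels below are merely stand-ins chosen to satisfy the stated properties (point symmetry and zero diagonals for both; identical horizontal and vertical response for F1; sign-opposed horizontal and vertical response for F2).

```python
import numpy as np
from scipy.ndimage import convolve

# Illustrative stand-ins only; the real F1/F2 are the 5x5 kernels of FIG. 6.
F1 = np.array([[ 0., -1.,  0.],      # point-symmetric, same response
               [-1.,  4., -1.],      # horizontally and vertically,
               [ 0., -1.,  0.]])     # zero diagonal components

F2 = np.array([[ 0.,  1.,  0.],      # frequency response 2cos(wy)-2cos(wx):
               [-1.,  0., -1.],      # positive for horizontal frequencies,
               [ 0.,  1.,  0.]])     # negative for vertical ones

def band_enhance(g_plane):
    """Edge components V1, V2 from the two band enhancement filters."""
    v1 = convolve(g_plane, F1, mode='nearest')
    v2 = convolve(g_plane, F2, mode='nearest')
    return v1, v2
```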
  • As the processing so far is over and the results V1, V2, p, q and r found from formulae (1) and (2) are available, V1 and V2 are sent out to the weighting block 126, and p, q and r to the weighting coefficient determination block 125. At the weighting coefficient determination block 125, weights W1 and W2 for the weighted sum of V1 and V2 at the weighting block 126 are computed. A set of specific calculation formulae is given by the following (3).

  • W1=1−p+p*{q*s1(r)+(1−q)*t1(r)}

  • W2=p*{q*s2(r)+(1−q)*t2(r)}  (3)
  • In formulae (3), s1 and s2 are the weights optimum for a subject that is an edge or line, and t1 and t2 are the weights optimum for a subject that is a stripe form. Each weight is experimentally determined beforehand in such a way as to become optimum for the subject. FIG. 7 is illustrative of an example of s1 and s2, and t1 and t2. More specifically, FIG. 7(a) is indicative of the weight characteristics for an edge, and FIG. 7(b) is indicative of the weight characteristics for a stripe. The solid and broken lines are indicative of the first and second weights, respectively (s1(r) and s2(r) in FIG. 7(a); t1(r) and t2(r) in FIG. 7(b)). Both the characteristics of FIGS. 7(a) and 7(b) have a weight value varying with the estimated value r of the direction of the structure in the neighborhood of the pixel of interest; s2 and t2 in particular take larger negative values as r approaches 0, i.e., as the edge in the neighborhood gets closer to horizontal, and larger positive values as r approaches 1, i.e., as the edge in the neighborhood gets closer to vertical. As can be seen from the frequency characteristics of F1 and F2 shown in FIG. 6, consequently, the optimum edge enhancement is feasible: the closer the edge in the neighborhood of the pixel of interest gets to horizontal, the more the vertical frequency is enhanced, and the closer it gets to vertical, the more the horizontal frequency is enhanced.
  • According to formulae (3), the more likely the subject is to be an edge, the closer W1 and W2 get to s1 and s2, respectively, and the more likely the subject is to be a stripe, the closer W1 and W2 get to t1 and t2, respectively. And the surer the judgment of whether the subject is a stripe or an edge, the closer the calculated weight gets to the weight for that subject.
  • The weighting coefficient determination block 125 of FIG. 2 sends the calculated weights out to the weighting block 126. The weighting block 126 receives V1 and V2 from the band enhancement blocks A123 and B124 and the weights W1 and W2 from the weighting coefficient determination block 125, and computes the final edge component E from formula (4), which is sent out to the YC synthesis block 110.

  • E=W1*V1+W2*V2  (4)
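  • Formulae (3) and (4) combine into the following sketch; the weight curves s1, s2, t1 and t2 are simple linear stand-ins for the experimentally determined curves of FIG. 7, so their values are assumptions.

```python
def final_edge_component(v1, v2, p, q, r):
    """Weights of formulae (3), then the weighted sum of formula (4)."""
    # Linear stand-ins for the measured weight curves of FIG. 7.
    s1 = lambda r: 1.0
    s2 = lambda r: 2.0 * r - 1.0          # negative near r=0, positive near r=1
    t1 = lambda r: 0.5
    t2 = lambda r: 1.5 * (2.0 * r - 1.0)
    w1 = 1.0 - p + p * (q * s1(r) + (1.0 - q) * t1(r))  # formulae (3)
    w2 = p * (q * s2(r) + (1.0 - q) * t2(r))
    return w1 * v1 + w2 * v2                            # formula (4)
```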
  • In the first embodiment shown in FIGS. 1 and 2, the direction judgment block 121 in the edge enhancement block 106 acquires the noise characteristics of the imaging system from the noise table 105. For a modification to the first embodiment of the invention, it is acceptable to acquire characteristics of the imaging system other than the noise characteristic information in the noise table 105, for instance, information about the type and position of a pixel deficiency or information about sensitivity differences between pixels at which the same kind of color information is obtained. The weighting block 126 determines the weight such that the larger the quantity of noise, the more isotropic the band correction characteristics of the weighted addition become, and sends the result out to the YC synthesis block 110. For a modification to the first embodiment according to the invention, it is contemplated that the weight is determined at the weighting block 126 such that the band correction characteristics of the result of weighted addition become smaller in the direction orthogonal to the direction along which there are successive pixel deficiencies.
  • The first embodiment corresponds to claims 1 to 11, and claims 14 to 21 as well. Referring here to how the first embodiment corresponds to the means of claim 1, the digital camera 100 of FIG. 1 is tantamount to the image processor for implementing correction of the spatial frequency band of an entered image; the edge enhancement means 106 is tantamount to a plurality of band correction means having mutually distinct band correction characteristics; the direction judgment block 121 of FIG. 2 is tantamount to the feature quantity computation means for computing a given feature quantity in the neighborhood of each pixel of the entered image; and the YC synthesis block 110 is tantamount to the synthesis means for synthesizing the outputs of said plurality of band correction means on the basis of said feature quantity. The limitation in claim 2 that “said synthesis means computes a weight for said each band correction means on the basis of said feature quantity, and produces the result of weighted addition by said weight from the result of band correction of said each band correction means” is tantamount to that the YC synthesis block 110 comprises the weighting block 126 of FIG. 2.
  • The limitation in claim 3 that “said feature quantity computation means computes the direction of an edge in said neighborhood as said given feature quantity” is tantamount to that the direction judgment block 121 of FIG. 2 computes the index r indicative of the directionality of the pixel of interest according to formulae (2). The limitation in claim 4 that “said feature quantity computation means computes the probability of said neighborhood belonging to a given image class as said given feature quantity” is tantamount to that the direction judgment block 121 of FIG. 2 computes the index q indicative of the type of the structure according to formulae (2). Further, the limitation in claim 5 that “said feature quantity computation means further computes the reliability of the result of computation in the direction of said edge as said given feature quantity” is tantamount to that the direction judgment block 121 of FIG. 2 computes the reliability p about the directionality of the structure according to formulae (2).
  • The limitation in claim 6 that “said given image class includes any of an edge portion, a stripe portion, and a texture portion” is tantamount to that the direction judgment block 121 of FIG. 2 corresponds to the patterns of FIGS. 4(b)-(4), 4(b)-(5), 4(b)-(6) and 4(b)-(7) depending on the value of the index q indicative of the type of the structure according to formulae (2). The limitation in claim 7 that “said feature quantity computation means computes said feature quantity on the basis of the characteristics of the imaging system when said entered image is taken” and the limitation in claim 9 that “said characteristics of the imaging system are noise characteristics giving the relations of noise quantity vs. pixel value” are tantamount to that the characteristics of the noise table 105 of FIG. 1 are entered in the edge enhancement block 106. Further, the limitation in claim 8 that “said synthesis means implements synthesis on the basis of the characteristics of the imaging system when said input image is taken” is tantamount to that the edge enhancement block 106, in which the characteristics of said noise table 105 are entered, is connected to the YC synthesis block 110.
  • The limitation in claim 14 that “said feature quantity computation means lowers precision with which the direction of said edge is computed as said noise quantity grows large” is tantamount to that the index r in formulae (2) is divided by the noise quantity N, and the limitation in claim 15 that “said feature quantity computation means lowers the reliability of the direction of said edge as said noise quantity grows large” is tantamount to that the index p in formulae (2) is divided by the noise quantity N. Further, the limitation in claim 16 that “said synthesis means determines said weight such that the larger said noise quantity, the more isotropic the band correction characteristics of said result of weighted addition grow” is tantamount to that the weights W1 and W2 are computed from formulae (3), viz., with the parameter p.
  • The limitation in claim 18 that “there are two such band correction means used, each being a two-dimensional linear filter having a coefficient of point symmetry” is tantamount to the linear filters F1 and F2 shown in FIGS. 6(a) and 6(b). The limitation in claim 19 that “one of said filters is such that the band correction characteristics in a particular direction have a negative value” is tantamount to the linear filter F2 shown in FIG. 6(b). The limitation in claim 20 that “said band correction means implements band correction after given tone transform is applied to said input image, and said feature quantity computation means computes said feature quantity without implementing said given tone transform” is tantamount to that one of the outputs of the image buffer 104 of FIG. 2 is entered in the band enhancement blocks A and B through the tone control block 122, and the other is entered in the direction judgment block 121.
  • While the embodiments of the invention here have been described as hardware, namely a digital camera, it is contemplated that similar processing may be implemented on software as well. As an example, a flowchart of RAW development software is shown in FIG. 8. This software takes RAW image data as input, said data corresponding to an image recorded in the image buffer 104 in FIG. 1, and produces a color image by implementing on software the processing that is usually implemented within a digital camera.
  • Referring more specifically to the flowchart of FIG. 8, at step S1 the RAW image data is read. At step S2, an edge image is formed that includes the edge component of each pixel extracted from the RAW image data, and stored in the memory. Then at step S3, one pixel of interest is chosen from the RAW image data, and at step S4, color interpolation processing is applied to that pixel of interest to compensate for the components missing from it. At step S5, the RGB components of the pixel of interest are transformed by a color transform matrix into a luminance component and a color difference component. At step S6, the gain of the color difference component is controlled to enhance color saturation. Then at step S7, the value corresponding to the pixel of interest in the edge image figured out at step S2 is added to the luminance component, which is then synthesized with the color difference component whose color saturation was enhanced at step S6 and converted back to an RGB value. Thus, the processing for the pixel of interest is over, and the result is stored in the memory. Finally, at step S8 it is judged whether or not unprocessed pixels remain; if not, the final result of each pixel held in the memory is stored, and if so, steps S3 to S8 are repeated. For step S2, the generation of the edge image, a further detailed subroutine is shown in the form of a flowchart in FIG. 9.
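  • As a skeleton of the FIG. 8 flow, the following may help; every helper below is an illustrative stub standing in for the processing described above, not a function the patent defines.

```python
import numpy as np

# Illustrative stubs so the skeleton runs; the real steps are those of FIG. 8.
compute_edge_image = lambda raw: np.zeros_like(raw, dtype=float)   # step S2
color_interpolate = lambda raw, yx: np.full(3, float(raw[yx]))     # step S4
yc_transform = lambda rgb: (rgb.mean(), 0.0, 0.0)                  # step S5
saturate = lambda cb, cr, gain=1.2: (gain * cb, gain * cr)         # step S6

def develop_raw(raw):
    """Skeleton of the RAW development flow, steps S1 to S8 of FIG. 8."""
    edge = compute_edge_image(raw)              # S2: edge image held in memory
    out = np.zeros(raw.shape + (3,))
    for yx in np.ndindex(*raw.shape):           # S3: choose pixel of interest
        rgb = color_interpolate(raw, yx)        # S4: fill missing components
        y, cb, cr = yc_transform(rgb)           # S5: to luminance/color diff.
        cb, cr = saturate(cb, cr)               # S6: color saturation gain
        y = y + edge[yx]                        # S7: add the edge component
        out[yx] = (y + 1.402 * cr,              # back to RGB (assumed BT.601)
                   y - 0.344 * cb - 0.714 * cr,
                   y + 1.772 * cb)
    return out                                  # S8: after all pixels are done
```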
  • In that flowchart, one pixel of interest in the RAW image data is read out at step S10, and then at step S11 the neighborhood of the pixel of interest is read out. At step S12, the noise quantity N is figured out based on the ISO sensitivity information at the taking time and information about the type of the digital camera from which the RAW data have been produced, as is the case with the direction judgment block 121. At step S13, the values of p, q and r are figured out from the neighboring pixel values as is the case with the direction judgment block 121, and at step S14, the edge components V1 and V2 are figured out as is the case with the band enhancement blocks A123 and B124. At step S15, W1 and W2 are figured out from p, q and r as is the case with the weighting coefficient determination block 125, and at step S16, the final edge component of the pixel of interest is figured out as is the case with the weighting block 126 and held in the memory as the pixel value of the edge image at the pixel of interest. At step S17, whether or not the processing for all the pixels is over is judged; if there are unprocessed pixels, steps S10 to S17 are repeated.
  • In the flowcharts of FIGS. 8 and 9, about the image processing program of claim 21 for correcting the image data for spatial frequency bands, the step of reading the image data is tantamount to step S1 for reading the RAW data; the step of implementing a plurality of band corrections having mutually distinct band correction characteristics is tantamount to step S14 of figuring out the edge components; the step of figuring out a given feature quantity in the neighborhood of each pixel of said image data is tantamount to steps S12 and S13 in FIG. 9; and the step of synthesizing the outputs from said multiple band corrections on the basis of said feature quantity is tantamount to steps S15 and S16.
  • FIGS. 10 to 15 are illustrative of the second embodiment. FIG. 10 is illustrative of the architecture of the second embodiment; FIG. 11 is illustrative of the setup of an edge enhancement block 206; FIG. 12 is indicative of aberrations of an optical system 201; FIGS. 13 and 14 are illustrative of coefficients for compensating for MTF deterioration caused by aberrations; and FIG. 15 is a flowchart of the computation of an edge component in the second embodiment.
  • The second embodiment shown in FIG. 10, again applied to a digital camera, overlaps the first embodiment in much of its setup. In what follows, components having the same action as in the first embodiment are indicated by like numerals and will not be explained again.
  • In the second embodiment shown in FIG. 10, a digital camera 200 does not have the noise table 105 of the first embodiment, and instead includes an aberration correction table 202. The optical system 201 is of a special type whose aberration characteristics make its MTF deterioration a superposition of horizontal and vertical deteriorations, not axially symmetric, as shown in FIG. 12. Ovals in FIG. 12 are indicative of the degree and shape of optical blurs of point sources. Such characteristics are experienced in an optical system comprising, for instance, cylindrical lenses and special prisms. Further, the edge enhancement block 206 acts differently from the edge enhancement block 106. The aberration correction table 202 is connected to the optical system 201 and edge enhancement block 206.
  • FIG. 11 is illustrative of details of the setup of the edge enhancement block 206 shown in FIG. 10. This block 206 is different from the edge enhancement block 106 in the first embodiment shown in FIG. 2 in that a direction judgment block 221 and a weighting coefficient determination block 225 operate differently. And the aberration correction table 202 is connected to the direction judgment block 221 and weighting coefficient determination block 225. Connections between the direction judgment block 221, tone control block 122, band enhancement block A123, band enhancement block B124, weighting coefficient determination block 225 and weighting block 126 are the same as in the first embodiment of FIG. 2.
  • How the second embodiment operates is now explained. As the shutter (not shown) is pressed down, the second embodiment operates the same way as does the first embodiment, except for the edge enhancement block 206, which, too, operates the same way as in the first embodiment except for the direction judgment block 221 and weighting coefficient determination block 225. Therefore, only the operation of the direction judgment block 221 and weighting coefficient determination block 225 is now explained. At the direction judgment block 221, the direction and structure of an edge in the neighborhood of the pixel of interest are estimated using information in the aberration correction table 202, unlike the first embodiment. There are correction coefficients Ch and Cv stored in the aberration correction table, indicative of to what degree horizontal and vertical band corrections must be implemented so as to compensate for MTF deterioration depending on the state of the optical system 201 at the taking time. The data are functions of the coordinates (x, y) of the pixel of interest, one example of which is shown in FIGS. 13(a) and 13(b).
  • In the embodiment here where the setup of the optical system 201 is unique, Ch, the coefficient compensating for horizontal deterioration, is a function of x alone, and Cv, the coefficient compensating for vertical deterioration, is a function of y alone. As shown in FIG. 13(a), the coefficient Ch takes a minimum value of 1 at the bottom of FIG. 13(a) and increasing values toward the top. Since the correction coefficients Ch and Cv are functions of x and y alone, respectively, they have the advantage of reducing the quantity of data in the aberration correction table. At the direction judgment block 221, these correction coefficients Ch and Cv are read out of the aberration correction table to implement computation using the following set of formulae (5) in place of formulae (2) in the first embodiment.

  • qh={min(dh1,dh2,dh3)+Nc}/{max(dh1,dh2,dh3)+Nc}

  • qv={min(dv1,dv2,dv3)+Nc}/{max(dv1,dv2,dv3)+Nc}

  • dh′=Ch(x)*dh, dv′=Cv(y)*dv

  • r=(dv′−dh′)/{2*(dh′+dv′)+α*Nc}+0.5

  • p=clip{max(dh′,dv′)/(β*Nc),1}

  • q=qh if dh′>dv′; otherwise q=qv.  (5)

  • where min(x, y, z) is the minimum of x, y and z; max(x, y, z) is the maximum of x, y and z; and clip(x, a) is the function limiting x to at most a.
  • α and β in the set of formulae (5) are again the same constants as in the first embodiment, and Nc is a constant corresponding to the noise quantity N found using the noise table 105 in the first embodiment. With the instant embodiment wherein the feature quantities p, q and r are figured out in this way, it is possible to estimate direction and structure while correcting for the influence of MTF deterioration caused by aberrations. At the weighting coefficient determination block 225, constants M1(x, y) and M2(x, y), indicative of to what degree the weight used at the weighting block 126 must be corrected so as to compensate for MTF deterioration depending on the state of the optical system 201 at the taking time, are read out of the aberration correction table 202, where (x, y) is the coordinates of the pixel of interest. FIGS. 14(a) and 14(b) are illustrative of one example of the constants M1(x, y) and M2(x, y), respectively. As can be seen from FIG. 14(a), the constant M1(x, y) is symmetric about the center of the optical axis in the horizontal and vertical directions, and as can be seen from FIG. 14(b), the constant M2(x, y) has opposite signs about the center of the optical axis in the horizontal and vertical directions. At the weighting coefficient determination block 225, the weighting coefficients figured out from the set of formulae (3) are further corrected according to the following set of formulae (6) to send W1′ and W2′ out to the weighting block 126.

  • W1′=W1*M1+W2*M2

  • W2′=W1*M2+W2*M1  (6)
  • By computation according to the set of formulae (6), it is possible to determine the weight in such a way as to allow the final edge enhancement characteristics to make compensation for the MTF deterioration caused by aberrations, and be the best suited for the direction and structure of the edge in the neighborhood of the pixel of interest as well.
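  • A sketch of this aberration-corrected path follows; the Ch and Cv curves and all numeric values are assumptions standing in for the measured aberration correction table of FIGS. 13 and 14.

```python
import numpy as np

# Assumed separable correction curves: Ch depends on x only, Cv on y only.
_xs = np.linspace(0.0, 1.0, 5)  # normalized image coordinates
Ch = lambda x: float(np.interp(x, _xs, [1.0, 1.2, 1.4, 1.7, 2.0]))
Cv = lambda y: float(np.interp(y, _xs, [1.0, 1.1, 1.3, 1.5, 1.8]))

def corrected_indices(dh123, dv123, x, y, Nc=1.0, alpha=1.0, beta=4.0):
    """Formulae (5): direction indices with MTF (aberration) compensation."""
    dh, dv = sum(dh123) / 3.0, sum(dv123) / 3.0
    qh = (min(dh123) + Nc) / (max(dh123) + Nc)
    qv = (min(dv123) + Nc) / (max(dv123) + Nc)
    dh_, dv_ = Ch(x) * dh, Cv(y) * dv
    r = (dv_ - dh_) / (2.0 * (dh_ + dv_) + alpha * Nc) + 0.5
    p = min(max(dh_, dv_) / (beta * Nc), 1.0)
    q = qh if dh_ > dv_ else qv
    return r, q, p

def corrected_weights(w1, w2, m1, m2):
    """Formulae (6): mix W1, W2 with the constants M1(x, y), M2(x, y)."""
    return w1 * m1 + w2 * m2, w1 * m2 + w2 * m1
```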
  • As is the case with the first embodiment, it is claims 1 to 8, 14 to 16 and 18 to 20 that correspond to the second embodiment.
  • The “band characteristics of the optical system” in claim 12 is tantamount to those of the optical system 201 stored in the aberration correction table 202.
  • While the embodiment here is described as applied to a digital camera, it is contemplated that similar processing may run on software, too. More specifically, in the flowchart of the RAW development software in the first embodiment shown in FIG. 8, the edge computation step S2 is changed. FIG. 15 is a flowchart of the second embodiment; in it, the subroutine flowchart for the edge component computation in the first embodiment shown in FIG. 9 is partly removed and corrected. Like numerals in FIG. 9 indicate like steps, and different numerals indicate corrected steps. Only the corrected steps are now explained. At step S23, step S13 of FIG. 9 is implemented with the set of formulae (2) changed to the set of formulae (5), and the correction coefficients Ch and Cv needed here are supposed to be recorded in the header of the RAW data. At step S25, step S15 of FIG. 9 is implemented with the set of formulae (3) changed to the set of formulae (6), and the correction coefficients M1 and M2 needed here are again supposed to be recorded in the header of the RAW data. The flowchart relating to the second embodiment shown in FIG. 15 corresponds to the image processing program recited in claim 21.
  • FIGS. 16 to 19 are illustrative of the third embodiment of the invention; FIG. 16 is illustrative of the architecture of the third embodiment, FIG. 17 is illustrative of the setup of the edge enhancement block in FIG. 16, FIG. 18 is illustrative of a correction coefficient table adapted to the type of the optical LPF, and FIG. 19 is a flowchart of how to figure out the edge component in the third embodiment. The embodiment shown in FIG. 16 is applied to a digital camera, too. The third embodiment overlaps the first embodiment; components having the same action are given the same numerals as in the first embodiment and will not be explained again.
  • In the third embodiment shown in FIG. 16, a digital camera 300 is provided with an optical LPF 301 in front of the primary color Bayer CCD 102, and an optical LPF information ROM 302 is used in place of the noise table 105 in the first embodiment. Further, the edge enhancement block is indicated by reference numeral 306 because it acts differently from the edge enhancement block 106 in the first embodiment. The optical LPF information ROM 302 is connected to the edge enhancement block 306. FIG. 17 is illustrative of details of the setup of that edge enhancement block 306, comprising a direction judgment block 321 and a weighting coefficient determination block 325 whose operation is distinct from that in the edge enhancement block 106 in the first embodiment. The optical LPF information ROM 302 is connected to both.
  • There are correction coefficients stored in the optical LPF information ROM 302 to compensate for horizontal and vertical band deteriorations caused by the optical LPF 301. The optical LPF has distinct band deterioration characteristics in the horizontal and vertical directions, varying with its setup, as can be seen from the frequency characteristic types 1, 2 and 3 of FIGS. 18(a), 18(b) and 18(c). FIGS. 18(a), 18(b) and 18(c) are contour plots of the frequency response of the optical LPF in which horizontal frequencies (Nyquist=1) are shown in the transverse direction and vertical frequencies (Nyquist=1) in the longitudinal direction; gain is plotted in the direction coming out of the paper. Because the optical LPF has distinct band deterioration characteristics in the horizontal and vertical directions, it is necessary to control, depending on the type of the optical LPF, the judgment of the direction of edges and the processing for telling edges from stripes at the edge enhancement block, as well as the edge enhancement characteristics. To this end, the optical LPF information ROM 302 holds such a coefficient table as depicted in FIG. 18(d), depending on the type of the optical LPF mounted in the digital camera. FIG. 18(d) is indicative of the relations of optical LPF types 1, 2 and 3 vs. coefficients. There are six coefficients C1 to C6 involved, with C1 and C2 used at the direction judgment block 321 and C3 to C6 used at the weighting coefficient determination block 325.
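  • Such a table can be pictured as the following stand-in; the numeric values are assumptions, since the real entries of FIG. 18(d) are determined by the measured frequency response of each LPF type.

```python
# Hypothetical type -> coefficient table in the spirit of FIG. 18(d).
# C1, C2 serve the direction judgment block 321; C3..C6 serve the
# weighting coefficient determination block 325. Values are illustrative.
LPF_TABLE = {
    1: dict(C1=1.0, C2=1.3, C3=1.0, C4=1.2, C5=0.9, C6=1.3),
    2: dict(C1=1.3, C2=1.0, C3=1.2, C4=1.0, C5=1.3, C6=0.9),
    3: dict(C1=1.1, C2=1.1, C3=1.1, C4=1.1, C5=1.0, C6=1.0),
}
```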
  • How the third embodiment operates is now explained. As the shutter (not shown) is pressed down, the third embodiment operates the same way as does the first embodiment, except for the edge enhancement block 306, which, too, operates the same way as in the first embodiment except for the direction judgment block 321 and weighting coefficient determination block 325. Therefore, only the operation of the direction judgment block 321 and weighting coefficient determination block 325 is now explained. At the direction judgment block 321, the direction and structure of an edge in the neighborhood of the pixel of interest are estimated using information in the optical LPF information ROM 302, unlike the first embodiment. The correction coefficients C1 and C2 for the type corresponding to the optical LPF 301 are read out of the optical LPF information ROM to implement calculation using the following set of formulae (7) instead of the set of formulae (2).

  • qh={min(dh1,dh2,dh3)+Nc}/{max(dh1,dh2,dh3)+Nc}

  • qv={min(dv1,dv2,dv3)+Nc}/{max(dv1,dv2,dv3)+Nc}

  • dh′=C1*dh, dv′=C2*dv

  • r=(dv′−dh′)/{2*(dh′+dv′)+α*Nc}+0.5

  • p=clip{max(dh′,dv′)/(β*Nc),1}

  • q=qh if dh′>dv′; otherwise q=qv.  (7)

  • where min(x, y, z) is the minimum of x, y and z; max(x, y, z) is the maximum of x, y and z; and clip(x, a) is the function limiting x to at most a.
  • α and β in the set of formulae (7) are again the same constants as in the first embodiment, and Nc is a constant corresponding to the noise quantity N found using the noise table 105 in the first embodiment. With the instant embodiment wherein the feature quantities p, q and r are figured out in this way, it is possible to estimate direction and structure while correcting for the influence of MTF deterioration caused by the optical LPF 301. At the weighting coefficient determination block 325, the correction coefficients C3 to C6 for the type corresponding to the optical LPF 301 are likewise read out of the optical LPF information ROM 302, and constants C1′ and C2′, indicative of to what degree the weight used at the weighting block 126 must be corrected so as to compensate for MTF deterioration caused by the optical LPF 301, are figured out according to the following set of formulae (8).

  • C1′=q*C3+(1−q)*C5

  • C2′=q*C4+(1−q)*C6  (8)
  • In the set of formulae (8), C3 and C4 are the appropriate correction coefficients for horizontal and vertical band deteriorations when there is an edge in the neighborhood of the pixel of interest, and C5 and C6 are the appropriate correction coefficients for horizontal and vertical band deteriorations when there is a stripe in the neighborhood of the pixel of interest. At the weighting coefficient determination block 325, the weighting coefficients figured out according to the set of formulae (3) are further corrected according to the following set of formulae (9) to send W1′ and W2′ out to the weighting block 126.

  • W1′=W1*C1′+W2*C2′

  • W2′=W1*C2′+W2*C1′  (9)
  • By computation according to the set of formulae (9), it is possible to determine the weight in such a way as to allow the final edge enhancement characteristics to make compensation for the MTF deterioration caused by the optical LPF 301, and be the best suited for the direction and structure of the edge in the neighborhood of the pixel of interest as well.
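  • In code form, again as a hedged sketch, the LPF correction of formulae (8) and (9) reads as follows; `coef` would be one row of a table like the hypothetical LPF_TABLE sketched earlier.

```python
def lpf_corrected_weights(w1, w2, q, coef):
    """Formulae (8) and (9): correct weights W1, W2 for the mounted LPF.

    coef is one row of a type -> C1..C6 table (cf. FIG. 18(d)); q is the
    structure-type index of formulae (7).
    """
    c1p = q * coef['C3'] + (1.0 - q) * coef['C5']     # formulae (8)
    c2p = q * coef['C4'] + (1.0 - q) * coef['C6']
    return w1 * c1p + w2 * c2p, w1 * c2p + w2 * c1p   # formulae (9)

# Example with an assumed table row (type-1 LPF, edge-like structure, q=1):
coef = dict(C1=1.0, C2=1.3, C3=1.0, C4=1.2, C5=0.9, C6=1.3)
print(lpf_corrected_weights(1.0, 0.5, 1.0, coef))
```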
  • While the embodiment here is described as applied to a digital camera, it is contemplated that similar processing may run on software, too. More specifically, in the flowchart of the RAW development software in the first embodiment shown in FIG. 8, the edge computation step S2 is changed. FIG. 19 is a flowchart of the embodiment here; in it, the flowchart for the edge component computation in the first embodiment shown in FIG. 9 is partly removed and corrected. Like numerals in FIG. 9 indicate like steps, and different numerals indicate corrected steps. Only the corrected steps are now explained. At step S33, step S13 of FIG. 9 is implemented with the set of formulae (2) changed to the set of formulae (7), and the correction coefficients C1 and C2 needed here, which depend on the type of the optical LPF used for taking the RAW data, are supposed to be recorded in the header of the RAW data. At step S35, step S15 of FIG. 9 is implemented with the set of formulae (3) changed to the sets of formulae (8) and (9), and the correction coefficients C3 to C6 needed here are again supposed to be recorded in the header of the RAW data. The flowcharts relating to the first embodiment (FIGS. 8 and 9), the second embodiment (FIG. 15) and the third embodiment (FIG. 19) correspond to the image processing program recited in claim 21.
  • INDUSTRIAL APPLICABILITY
  • According to the invention as described above, it is possible to provide an image processor and image processing program which, with ISO sensitivity and the state of the optical system in mind, applies the optimum band correction to a subject.

Claims (23)

1. An image processor adapted to correct the spatial frequency band of an input image, characterized by comprising a plurality of band correction means having mutually distinct band correction characteristics, a feature quantity computation means adapted to figure out a feature quantity in the neighborhood of each pixel of the input image, and a synthesis means adapted to synthesize the outputs of said plurality of band correction means on the basis of said feature quantity.
2. The image processor according to claim 1, characterized in that said synthesis means is operable to figure out a weight for each of said band correction means on the basis of said feature quantity, and produce the result of weighted addition by said weight from the results of band correction by each of said band correction means.
3. The image processor according to claim 1, characterized in that said feature quantity computation means is operable to figure out the direction of an edge in said neighborhood as said given feature quantity.
4. The image processor according to claim 1, characterized in that said feature quantity computation means is operable to figure out the probability of said neighborhood belonging to a given image class as said given feature quantity.
5. The image processor according to claim 3, characterized in that said feature quantity computation means is operable to further figure out the reliability of the result of computation of the direction of said edge as said given feature quantity.
6. The image processor according to claim 4, characterized in that said given image class includes any one of an edge portion, a stripe portion, and a texture portion.
7. The image processor according to claim 1, characterized in that said feature quantity computation means is operable to figure out said feature quantity on the basis of the characteristics of the imaging system when said input image is taken.
8. The image processor according to claim 1, characterized in that said synthesis means is operable to implement synthesis on the basis of the characteristics of an imaging system when said input image is taken.
9. The image processor according to claim 7, characterized in that said characteristics of the imaging system are noise characteristics that provide a relation of noise quantity vs. pixel value.
10. The image processor according to claim 7, characterized in that said characteristics of the imaging system are information about the type and position of a pixel deficiency.
11. The image processor according to claim 7, characterized in that said characteristics of the imaging system are a sensitivity difference between pixels at which the same type color information is obtained.
12. The image processor according to claim 7, characterized in that said characteristics of the imaging system are the spatial frequency characteristics of the optical system.
13. The image processor according to claim 12, characterized in that said characteristics of the imaging system are the spatial frequency characteristics of an optical LPF.
14. The image processor according to claim 9, characterized in that said feature quantity computation means is operable to lower the precision with which said direction of the edge is figured out as said noise quantity grows large.
15. The image processor according to claim 9, characterized in that said feature quantity computation means is operable to lower the reliability of said direction of the edge as said noise quantity grows large.
16. The image processor according to claim 9, characterized in that said synthesis means is operable to determine said weight such that the larger said noise quantity, the more isotropic the band correction characteristics of said weighted addition become.
17. The image processor according to claim 9, characterized in that said synthesis means is operable to determine said weight such that the band correction characteristics of said result of weighted addition become small in a direction orthogonal to a direction along which there are successive pixel deficiencies.
18. The image processor according to claim 1, characterized in that there are two band correction means, each of which is a two-dimensional linear filter having a coefficient of point symmetry.
19. The image processor according to claim 1, characterized in that one of said filters is such that the band correction characteristics in a particular direction have a negative value.
20. The image processor according to claim 1, characterized in that said band correction means is operable to apply given tone transform to said input image, and then implement band correction, and said feature quantity computation means figures out said feature quantity with none of said given tone transform.
21. An image processing program to correct image data for a spatial frequency band, which lets a computer implement steps of reading image data, implementing a plurality of band corrections having mutually distinct band correction characteristics, figuring out a given feature quantity in the neighborhood of each pixel of said image data, and synthesizing the outputs of said plurality of band corrections on the basis of said feature quantity.
22. The image processor according to claim 8, characterized in that said characteristics of the imaging system are information about the type and position of a pixel deficiency.
23. The image processor according to claim 8, characterized in that said characteristics of the imaging system are sensitivity difference between pixels at which the same type color information is obtained.
US12/032,098 2005-09-01 2008-02-15 Image processor and image processing program to correct a spatial frequency band of an image Expired - Fee Related US8184174B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2005-253114 2005-09-01
JP2005253114A JP4700445B2 (en) 2005-09-01 2005-09-01 Image processing apparatus and image processing program
PCT/JP2006/317386 WO2007026899A1 (en) 2005-09-01 2006-08-28 Image processing device and image processing program

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/317386 Continuation WO2007026899A1 (en) 2005-09-01 2006-08-28 Image processing device and image processing program

Publications (3)

Publication Number Publication Date
US20080143881A1 true US20080143881A1 (en) 2008-06-19
US20110211126A9 US20110211126A9 (en) 2011-09-01
US8184174B2 US8184174B2 (en) 2012-05-22

Family

ID=37808983

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/032,098 Expired - Fee Related US8184174B2 (en) 2005-09-01 2008-02-15 Image processor and image processing program to correct a spatial frequency band of an image

Country Status (4)

Country Link
US (1) US8184174B2 (en)
JP (1) JP4700445B2 (en)
CN (1) CN101223551B (en)
WO (1) WO2007026899A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070153097A1 (en) * 2006-01-05 2007-07-05 Olympus Corporation Image acquisition apparatus
US20110134292A1 (en) * 2009-12-04 2011-06-09 Canon Kabushiki Kaisha Image processing apparatus
US20110188574A1 (en) * 2008-10-22 2011-08-04 Nippon Telegraph And Telephone Corporation Deblocking method, deblocking apparatus, deblocking program and computer-readable recording medium recorded with the program
US20110206293A1 (en) * 2010-02-19 2011-08-25 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method, and computer readable medium storing program thereof
US20110222935A1 (en) * 2010-03-12 2011-09-15 Fuji Xerox Co., Ltd. Fixing device and image forming apparatus using the same
US20110261236A1 (en) * 2010-04-21 2011-10-27 Nobuhiko Tamura Image processing apparatus, method, and recording medium
US20140139706A1 (en) * 2012-11-22 2014-05-22 Samsung Electronics Co., Ltd. Image signal processor and mobile device including image signal processor
US9053552B2 (en) 2012-10-02 2015-06-09 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method and non-transitory computer readable medium
CN105308960A (en) * 2013-05-30 2016-02-03 苹果公司 Adaptive color space transform coding
US20160093067A1 (en) * 2014-09-30 2016-03-31 Fujifilm Corporation Medical image processing device and method for operating the same
US10580121B2 (en) * 2017-11-16 2020-03-03 Axis Ab Image noise reduction based on a modulation transfer function of a camera dome
US10602143B2 (en) 2013-05-30 2020-03-24 Apple Inc. Adaptive color space transform coding
US20220070381A1 (en) * 2017-09-08 2022-03-03 Sony Group Corporation Image processing apparatus, image processing method, and image processing program
US11792534B2 (en) 2018-08-30 2023-10-17 Sony Corporation Signal processing device, signal processing method, and image capture device

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4700445B2 (en) * 2005-09-01 2011-06-15 オリンパス株式会社 Image processing apparatus and image processing program
JP4600424B2 (en) 2007-05-08 2010-12-15 セイコーエプソン株式会社 Development processing apparatus for undeveloped image data, development processing method, and computer program for development processing
US8837849B2 (en) * 2007-06-26 2014-09-16 Google Inc. Method for noise-robust color changes in digital images
JP5022923B2 (en) * 2008-01-23 2012-09-12 富士フイルム株式会社 Imaging device and method for correcting captured image signal thereof
JP5129685B2 (en) * 2008-08-06 2013-01-30 キヤノン株式会社 Luminance signal generation apparatus, luminance signal generation method, and imaging apparatus
JP5438579B2 (en) * 2010-03-29 2014-03-12 キヤノン株式会社 Image processing apparatus and control method thereof
JP5510105B2 (en) * 2010-06-21 2014-06-04 株式会社ニコン Digital camera
JP5868046B2 (en) * 2010-07-13 2016-02-24 キヤノン株式会社 Luminance signal creation device, imaging device, brightness signal creation method, program, and recording medium
WO2013047284A1 (en) * 2011-09-26 2013-04-04 シャープ株式会社 Image processing device, image processing method, image processing program, and recording medium storing image processing program
JP2013218281A (en) * 2012-03-16 2013-10-24 Seiko Epson Corp Display system, display program, and display method
JP6423625B2 (en) * 2014-06-18 2018-11-14 キヤノン株式会社 Image processing apparatus and image processing method
JP6632288B2 (en) * 2014-12-12 2020-01-22 キヤノン株式会社 Information processing apparatus, information processing method, and program
CN107730547B (en) * 2017-11-17 2023-05-23 宁波舜宇光电信息有限公司 Control device based on defocusing curve state detection and system comprising same

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5760922A (en) * 1993-10-08 1998-06-02 Matsushita Electric Industrial Co., Ltd. Area recognizing device and gradation level converting device employing area recognizing device
US20020006230A1 (en) * 2000-04-17 2002-01-17 Jun Enomoto Image processing method and image processing apparatus
US20040169747A1 (en) * 2003-01-14 2004-09-02 Sony Corporation Image processing apparatus and method, recording medium, and program
US7554583B2 (en) * 2004-11-04 2009-06-30 Mitsubishi Denki Kabushiki Kaisha Pixel signal processor and pixel signal processing method
US7583303B2 (en) * 2005-01-31 2009-09-01 Sony Corporation Imaging device element
US7719575B2 (en) * 2003-12-22 2010-05-18 Mitsubishi Denki Kabushiki Kaisha Pixel signal processing apparatus and pixel signal processing method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3769004B2 (en) * 1993-10-08 2006-04-19 松下電器産業株式会社 Gradation conversion processing device
JP2858530B2 (en) 1993-12-27 1999-02-17 日本電気株式会社 Edge enhancement device
CN1271567C (en) * 2000-11-30 2006-08-23 佳能株式会社 Image processing device, image processing method, storage medium and program
JP2002344743A (en) 2001-05-11 2002-11-29 Ricoh Co Ltd Image processing unit and image processing method
JP2004112728A (en) * 2002-09-20 2004-04-08 Ricoh Co Ltd Image processing apparatus
JP4104475B2 (en) * 2003-03-18 2008-06-18 シャープ株式会社 Contour correction device
JP4700445B2 (en) * 2005-09-01 2011-06-15 オリンパス株式会社 Image processing apparatus and image processing program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5760922A (en) * 1993-10-08 1998-06-02 Matsushita Electric Industrial Co., Ltd. Area recognizing device and gradation level converting device employing area recognizing device
US20020006230A1 (en) * 2000-04-17 2002-01-17 Jun Enomoto Image processing method and image processing apparatus
US20040169747A1 (en) * 2003-01-14 2004-09-02 Sony Corporation Image processing apparatus and method, recording medium, and program
US7719575B2 (en) * 2003-12-22 2010-05-18 Mitsubishi Denki Kabushiki Kaisha Pixel signal processing apparatus and pixel signal processing method
US7554583B2 (en) * 2004-11-04 2009-06-30 Mitsubishi Denki Kabushiki Kaisha Pixel signal processor and pixel signal processing method
US7583303B2 (en) * 2005-01-31 2009-09-01 Sony Corporation Imaging device element

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7710469B2 (en) * 2006-01-05 2010-05-04 Olympus Corporation Image acquisition apparatus
US20070153097A1 (en) * 2006-01-05 2007-07-05 Olympus Corporation Image acquisition apparatus
US20110188574A1 (en) * 2008-10-22 2011-08-04 Nippon Telegraph And Telephone Corporation Deblocking method, deblocking apparatus, deblocking program and computer-readable recording medium recorded with the program
US20110134292A1 (en) * 2009-12-04 2011-06-09 Canon Kabushiki Kaisha Image processing apparatus
US8508625B2 (en) * 2009-12-04 2013-08-13 Canon Kabushiki Kaisha Image processing apparatus
KR101391161B1 (en) 2009-12-04 2014-05-07 Canon Kabushiki Kaisha Image processing device
KR101368744B1 (en) * 2010-02-19 2014-02-28 Fuji Xerox Co., Ltd. Image processing apparatus and computer readable recording medium recording image processing program
US20110206293A1 (en) * 2010-02-19 2011-08-25 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method, and computer readable medium storing program thereof
US20110222935A1 (en) * 2010-03-12 2011-09-15 Fuji Xerox Co., Ltd. Fixing device and image forming apparatus using the same
US8483604B2 (en) 2010-03-12 2013-07-09 Fuji Xerox Co., Ltd. Fixing device and image forming apparatus using the same
US20110261236A1 (en) * 2010-04-21 2011-10-27 Nobuhiko Tamura Image processing apparatus, method, and recording medium
US8629917B2 (en) * 2010-04-21 2014-01-14 Canon Kabushiki Kaisha Image processing apparatus, method, and recording medium
US9053552B2 (en) 2012-10-02 2015-06-09 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method and non-transitory computer readable medium
US20140139706A1 (en) * 2012-11-22 2014-05-22 Samsung Electronics Co., Ltd. Image signal processor and mobile device including image signal processor
US9479744B2 (en) * 2012-11-22 2016-10-25 Samsung Electronics Co., Ltd. Image signal processor and mobile device including image signal processor
US11368688B2 (en) 2013-05-30 2022-06-21 Apple Inc. Adaptive color space transform coding
US10602143B2 (en) 2013-05-30 2020-03-24 Apple Inc. Adaptive color space transform coding
US11184613B2 (en) 2013-05-30 2021-11-23 Apple Inc. Adaptive color space transform coding
US11368689B2 (en) 2013-05-30 2022-06-21 Apple Inc. Adaptive color space transform coding
CN105308960A (en) * 2013-05-30 2016-02-03 苹果公司 Adaptive color space transform coding
US11758135B2 (en) 2013-05-30 2023-09-12 Apple Inc. Adaptive color space transform coding
US20160093067A1 (en) * 2014-09-30 2016-03-31 Fujifilm Corporation Medical image processing device and method for operating the same
US9600903B2 (en) * 2014-09-30 2017-03-21 Fujifilm Corporation Medical image processing device and method for operating the same
US20220070381A1 (en) * 2017-09-08 2022-03-03 Sony Group Corporation Image processing apparatus, image processing method, and image processing program
US11778325B2 (en) * 2017-09-08 2023-10-03 Sony Group Corporation Image processing apparatus, image processing method, and image processing program
US10580121B2 (en) * 2017-11-16 2020-03-03 Axis Ab Image noise reduction based on a modulation transfer function of a camera dome
US11792534B2 (en) 2018-08-30 2023-10-17 Sony Corporation Signal processing device, signal processing method, and image capture device

Also Published As

Publication number Publication date
JP4700445B2 (en) 2011-06-15
CN101223551B (en) 2012-11-21
CN101223551A (en) 2008-07-16
US20110211126A9 (en) 2011-09-01
WO2007026899A1 (en) 2007-03-08
US8184174B2 (en) 2012-05-22
JP2007066138A (en) 2007-03-15

Similar Documents

Publication Publication Date Title
US8184174B2 (en) Image processor and image processing program to correct a spatial frequency band of an image
US7760252B2 (en) Shading compensation device, shading compensation value calculation device and imaging device
US7454057B2 (en) Image processing apparatus and image processing program
US8300120B2 (en) Image processing apparatus and method of processing image for reducing noise of the image
US8605164B2 (en) Image processing apparatus, control method therefor, and storage medium
US7945091B2 (en) Image processor correcting color misregistration, image processing program, image processing method, and electronic camera
EP1528793B1 (en) Image processing apparatus, image-taking system and image processing method
US8259202B2 (en) Image processing device and image processing program for acquiring specific color component based on sensitivity difference between pixels of an imaging device
WO2004110071A1 (en) Image processor and image processing program
US7346210B2 (en) Image processing device and image processing program for determining similarity factors of pixels
JP5917048B2 (en) Image processing apparatus, image processing method, and program
JP6282123B2 (en) Image processing apparatus, image processing method, and program
US8098308B2 (en) Image processing device and computer-readable storage medium
US8184183B2 (en) Image processing apparatus, image processing method and program with direction-dependent smoothing based on determined edge directions
JP4945943B2 (en) Image processing device
US8559711B2 (en) Method for correcting chromatic aberration
JP2007028040A (en) Image processing apparatus
US8559762B2 (en) Image processing method and apparatus for interpolating defective pixels
US8817137B2 (en) Image processing device, storage medium storing image processing program, and electronic camera
US20100215267A1 (en) Method and Apparatus for Spatial Noise Adaptive Filtering for Digital Image and Video Capture Systems
US6747698B2 (en) Image interpolating device
JP4797478B2 (en) Image processing device
JP2004007165A (en) Image processing method, image processing program, and image processor

Legal Events

Date Code Title Description
AS Assignment

Owner name: OLYMPUS CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TSUKIOKA, TAKETO;REEL/FRAME:020524/0908

Effective date: 20071127

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: OLYMPUS CORPORATION, JAPAN

Free format text: CHANGE OF ADDRESS;ASSIGNOR:OLYMPUS CORPORATION;REEL/FRAME:039344/0502

Effective date: 20160401

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY