WO2009081709A1 - Image processing apparatus, image processing method, and image processing program - Google Patents



Publication number
WO2009081709A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
component
noise
component image
edge
Prior art date
Application number
PCT/JP2008/071996
Other languages
English (en)
Japanese (ja)
Inventor
Hideya Aragaki
Original Assignee
Olympus Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corporation filed Critical Olympus Corporation
Publication of WO2009081709A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/409Edge or detail enhancement; Noise or error suppression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing

Definitions

  • The present invention relates to an image processing apparatus, and in particular to an apparatus that removes image noise. Background Art
  • As a noise removal device that removes the noise signal contained in an image signal, one using a low-pass filter is widely used.
  • However, when noise is reduced by smoothing with a low-pass filter, the edge component contained in the image signal is also smoothed, and the sharpness of the image is reduced.
  • JP55-133179A addresses this problem: it shows removing noise without reducing the sharpness of the image by detecting the direction of the edge component contained in the image and smoothing along that direction.
  • JP2001-57677A discloses a noise removal method using multi-resolution conversion.
  • In this noise elimination method, the image signal is first divided into signals in a plurality of frequency bands using a Laplacian pyramid or wavelet transform technique. Then, after the directional filtering described in JP55-133179A is applied to the divided signals in each band, the bands are recombined.
  • This has the advantage that noise of a strength suited to each of the divided bands can be removed while suppressing degradation of the edge component. JP2001-57677A also shows using the low-frequency-side image in the multi-resolution transform for direction determination, or combining the result of direction determination using the low-frequency-side image with the result of direction determination using the image of the band of interest. Disclosure of Invention
  • In JP55-133179A, noise removal is performed based on the result of direction determination using the image signal itself, so the determination is strongly influenced by the noise contained in the image signal. For this reason, it is difficult to obtain an accurate direction discrimination result, and there is a problem that weak edges are crushed as a result of the noise removal.
  • JP2001-57677A is designed to improve the direction discrimination accuracy by combining the direction discrimination result using the low-frequency-side image with the direction discrimination result using the image of the band of interest;
  • however, since the low-frequency-side images in many cases do not contain edges or fine texture structures, it is difficult to obtain accurate direction determination results.
  • Moreover, since the direction discrimination for the image in the band of interest is strongly affected by noise, it is difficult to obtain an accurate discrimination result, and the direction discrimination processing must be computed twice, which is undesirable from the viewpoint of computational cost.
  • The present invention has been made in view of the above problems, and its object is to remove noise while maintaining edge and texture structure by performing highly accurate edge direction discrimination based on the skeleton component, which corresponds to the global edge structure extracted from the image signal.
  • An image processing apparatus according to one aspect of the present invention includes: an image decomposition unit that decomposes an image signal into a plurality of component images, including a first component image, which is a skeleton component indicating the global structure of the image made up of the edges of the image and the flat regions divided by those edges, and a second component image calculated based on the image signal and the first component image;
  • a direction discriminating unit that discriminates the direction of the edge component with respect to the first component image; and a noise reduction unit that reduces noise in the second component image in accordance with the direction of the edge component.
  • An image processing apparatus according to another aspect of the present invention includes: an image decomposition unit that decomposes an image signal into a plurality of component images, including a first component image, which is a skeleton component made up of the edges of the image and a flat component extracted from the parts other than the edges, and a second component image calculated based on the image signal and the first component image; a direction discriminating unit that discriminates the direction of the edge component with respect to the first component image; and a noise reduction unit that reduces noise in the second component image in accordance with the direction of the edge component.
  • An image processing method according to the present invention decomposes the image signal into a plurality of component images, including a first component image, which is a skeleton component indicating the global structure of the image made up of the image edges and the flat regions divided by those edges, and a second component image calculated based on the image signal and the first component image; discriminates the direction of the edge component with respect to the first component image;
  • and reduces noise in the second component image in accordance with the direction of the edge component.
  • An image processing program according to the present invention causes a computer to execute: a step of decomposing an image signal into a plurality of component images, including a first component image, which is a skeleton component indicating the global structure of the image made up of the image edges and the flat regions divided by those edges, and a second component image calculated based on the image signal and the first component image; a step of discriminating the direction of the edge component with respect to the first component image; and a step of reducing noise in the second component image in accordance with the direction of the edge component.
  • FIG. 1 is a configuration diagram of an image processing apparatus according to the first embodiment.
  • FIGS. 2A, 2B, and 2C are diagrams showing examples of the original image signal I, the first component image U, and the second component image V, respectively, and FIGS. 2D, 2E, and 2F are diagrams showing examples of the DC component, the low-frequency component, and the high-frequency component obtained by separating the signal of FIG. 2A based on frequency components.
  • FIG. 3 is a diagram for explaining the direction discriminating method.
  • FIG. 4 is a block diagram of the noise reduction unit in the first embodiment.
  • FIG. 5A, FIG. 5B, and FIG. 5C are diagrams for explaining the coring process.
  • FIG. 6 is a flowchart corresponding to the processing from the image decomposition unit to the image composition unit in the first embodiment.
  • FIG. 7 is a configuration diagram of the image processing apparatus according to the second embodiment.
  • FIGS. 8A, 8B, and 8C are diagrams for explaining the direction discriminating method.
  • FIGS. 9A, 9B, and 9C are diagrams for explaining the noise model.
  • FIG. 10 is a block diagram of the noise reduction unit in the second embodiment.
  • FIG. 11 is a flowchart corresponding to the processing from the image decomposition unit to the image composition unit in the second embodiment.
  • FIG. 12 is a configuration diagram of an image processing apparatus according to the third embodiment.
  • FIG. 13A is a diagram showing an example in which there is no correlation between color components.
  • FIG. 13B is a diagram showing an example in which there is a correlation between color components.
  • FIG. 14 is a configuration diagram of the noise reduction unit in the third embodiment.
  • FIG. 15 is a flowchart corresponding to the processing from the image decomposition unit to the image composition unit in the third embodiment. Best Mode for Carrying Out the Invention
  • FIG. 1 is a system configuration diagram of an image processing apparatus according to the first embodiment of the present invention.
  • This apparatus is composed of an optical system 101, a solid-state imaging device 102, an A/D conversion unit 103 (hereinafter A/D 103), a signal processing unit 104, an image decomposition unit 105, an extraction unit 400, a direction discriminating unit 106, a noise reduction unit 107, and an image composition unit 108.
  • The solid-state imaging device 102 is connected to the signal processing unit 104 via the A/D 103.
  • The signal processing unit 104 is connected to the image decomposition unit 105.
  • The image decomposition unit 105 is connected to the extraction unit 400 and the image composition unit 108.
  • The extraction unit 400 is connected to the direction discriminating unit 106.
  • The direction discriminating unit 106 is connected to the noise reduction unit 107.
  • The noise reduction unit 107 is connected to the image composition unit 108.
  • Each unit is bi-directionally connected to the system controller 100, and its operation is controlled by the system controller 100.
  • In the figure, solid lines represent the flow of data, and dashed lines represent control signal lines.
  • The flow of processing in FIG. 1 will now be described.
  • Based on the control of the system controller 100, the solid-state imaging device 102 outputs the optical image formed on its surface through the optical system 101 as an analog image signal. This analog image signal is sent to the A/D 103.
  • The solid-state imaging device 102 is a color image sensor having a color filter array arranged on its front surface.
  • The solid-state imaging device 102 may also be a multi-chip type.
  • The analog image signal is converted into a digital signal by the A/D 103 and is then converted into a predetermined color image signal (hereinafter referred to as the original image signal I) by the signal processing unit 104.
  • The original image signal I consists of R, G, and B color signals. The subsequent processing is performed independently for each color signal.
  • The converted original image signal I is sent to the image decomposition unit 105.
  • The image decomposition unit 105 decomposes the image signal I into the first component image U and the second component image V for each color signal.
  • The first component image U is a skeleton component representing the global structure of the original image signal I, including the flat components (components that change gently) and the edge components.
  • This first component image U is defined as “a component indicating the global structure of the image, including the edges contained in the image signal I and the smoothly changing flat regions divided by those edges”, or as “a component obtained by removing fine structural components such as texture (hereinafter referred to as texture components) from the image signal I”.
  • The first component image U is also expressed as a skeleton component made up of the edges of the image and the flat component extracted from the parts other than the edges.
  • The second component image V is a component calculated based on the original image signal I and the first component image U.
  • More specifically, the second component image V is the residual of the first component image U with respect to the image signal I, and includes the texture components and the noise.
  • FIGS. 2B and 2C show the first component image U and the second component image V, respectively, obtained by component decomposition of the original image signal I shown in FIG. 2A.
  • For simplicity, the signal is displayed as a one-dimensional signal.
  • The edges representing the global structure of the image signal I are included in the first component image U, while the finely oscillating components (texture and noise) are included in the second component image V.
  • In this embodiment, the image signal I is described as being decomposed into two components, but it can also be decomposed into three or more components.
  • The extraction unit 400 extracts, from the first component image U generated by the image decomposition unit 105, a rectangular area of a predetermined size centered on the pixel of interest, in this embodiment a block of 5×5 pixels (hereinafter referred to as the pixel block of the first component image U). From the second component image V, a pixel block corresponding to the pixel block region of the first component image U (hereinafter referred to as the pixel block of the second component image V) is likewise extracted with the target pixel at the center. The size of this area can be changed to other sizes as required.
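The block extraction step can be sketched in Python as follows. `extract_block` is a hypothetical helper, not part of the patent, and clamping out-of-range coordinates to the image border is an assumption, since the text does not specify border handling.

```python
def extract_block(img, cx, cy, size=5):
    """Extract a size x size block centered on column cx, row cy.

    img is a list of rows; coordinates outside the image are
    clamped to the border (border handling is an assumption).
    """
    h, w = len(img), len(img[0])
    r = size // 2
    block = []
    for dy in range(-r, r + 1):
        y = min(max(cy + dy, 0), h - 1)
        row = []
        for dx in range(-r, r + 1):
            x = min(max(cx + dx, 0), w - 1)
            row.append(img[y][x])
        block.append(row)
    return block
```

The same helper would be called once on U and once on V so that the two blocks cover the same region around the target pixel.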
  • The pixel block of the first component image U extracted by the extraction unit 400 is sent to the direction determination unit 106, and the pixel block of the second component image V is sent to the noise reduction unit 107.
  • The direction discriminating unit 106 determines the edge direction at the target pixel position based on the pixel block of the first component image U.
  • The edge direction discrimination result is sent to the noise reduction unit 107. Details of the direction determination processing will be described later.
  • The noise reduction unit 107 first determines the filter coefficients, or the filter process itself, based on the edge direction indicated by the edge direction discrimination result, and applies the filter to the pixel block of the second component image V. This is performed for all pixels.
  • The second component image after filtering is hereinafter referred to as V'.
  • The noise reduction unit 107 then performs coring processing on the filtered result V' to set minute signals to zero, which completes the noise reduction.
  • The second component image after coring is called V''.
  • In the coring process, if the absolute value of the input signal exceeds the threshold, the output is the input reduced by the threshold; if it is below the threshold, the output is set to zero. Details of the noise reduction will be described later.
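The direction-specific filtering can be sketched as follows. The patent does not give the actual filter coefficients or the direction numbering of FIG. 3, so a simple three-tap average along an assumed set of four directions is used here purely for illustration.

```python
# Offsets along the four candidate directions in a 5x5 block.
# The direction numbering (1 = horizontal, 2 = diagonal /,
# 3 = vertical, 4 = diagonal \) is an assumption; FIG. 3 is not
# reproduced in this text.
DIR_OFFSETS = {
    1: [(0, -1), (0, 0), (0, 1)],
    2: [(-1, 1), (0, 0), (1, -1)],
    3: [(-1, 0), (0, 0), (1, 0)],
    4: [(-1, -1), (0, 0), (1, 1)],
}

def directional_filter(block, direction):
    """Average the center pixel with its neighbors along `direction`.

    `block` is a 5x5 pixel block of the second component image V;
    the 3-tap average is an illustrative choice, not the patent's
    actual coefficients.
    """
    c = len(block) // 2
    taps = [block[c + dy][c + dx] for dy, dx in DIR_OFFSETS[direction]]
    return sum(taps) / len(taps)
```

Because the taps lie along the discriminated edge direction, smoothing happens along the edge rather than across it, which is the behavior the surrounding text describes.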
  • The image composition unit 108 obtains the first component image U from the image decomposition unit 105 and combines it with the noise-reduced second component image V'' at a predetermined ratio, for example 1:1. As a result, a composite image I' with reduced noise is obtained from the original image signal I.
  • The composite image is recorded on a memory medium such as a flash memory via the units 109 and 110.
  • In Equation (2), x is the pixel position in the horizontal direction of the first component image U, and y is the pixel position in the vertical direction.
  • The second component image V in Equation (1) is modeled as belonging to the function space G.
  • G is the space of functions expressed by Equation (3) using functions g1 and g2, v = ∂x g1 + ∂y g2, and its energy is defined by the G norm ||v||_G = inf ||(g1^2 + g2^2)^(1/2)||_L∞.
  • The second component image V separated from the original image signal I is affected by noise, but the first component image U is hardly affected by noise, and the skeleton component is extracted without dulling the edges.
  • The decomposition may also be performed by smoothing with a median filter, a morphological filter, or the like.
  • The following are examples of methods other than the one based on the bounded variation function.
  • Example 1: A method in which the first component image U is the result of applying a median filter to the image signal I, and the second component image V is the residual of the first component image U with respect to the image signal I.
  • Example 2: A method in which the first component image U is the result of applying a morphological filter to the image signal I, and the second component image V is the residual of the first component image U with respect to the image signal I.
  • Example 3: A method in which the first component image U is the result of applying erosion and then dilation to the image signal I, and the second component image V is the residual of the first component image U with respect to the image signal I.
  • Example 4: A method in which the first component image U is the result of applying a bilateral filter to the image signal I, and the second component image V is the residual of the first component image U with respect to the image signal I.
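Example 1 can be sketched as follows; the 3×3 median window and the border clamping are assumptions, since the text fixes neither.

```python
import statistics

def median_decompose(img, radius=1):
    """Example 1 sketch: U = median-filtered image, V = I - U.

    A (2*radius+1)^2 median window is assumed; border pixels are
    clamped to the image edge.
    """
    h, w = len(img), len(img[0])
    U = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    vals.append(img[yy][xx])
            U[y][x] = statistics.median(vals)
    # Residual: V carries what the skeleton U does not
    V = [[img[y][x] - U[y][x] for x in range(w)] for y in range(h)]
    return U, V
```

An isolated spike (noise) is rejected by the median and therefore lands entirely in V, while I = U + V holds at every pixel, matching the additive decomposition described above.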
  • When the image signal is represented by the product of the first component image U and the second component image V, letting f be the image signal obtained by logarithmically converting the image signal I, Equation (6) converts the multiplicative problem into an additive one.
  • The second component image V separated from the image signal I is affected by noise,
  • but the first component image U is hardly affected by noise. Therefore, the skeleton component (the global image structure) can be extracted without dulling the edges.
  • Alternatively, the image signal obtained by subtracting the corresponding pixel value of the first component image U, obtained in advance, from each pixel value of the image signal I may be used as the second component image V.
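The multiplicative decomposition I = U × V of Equation (6) can be sketched as follows: the image is taken into the log domain, decomposed additively there, and mapped back. The median smoother and its window size are assumptions; the patent only states that the log transform turns the multiplicative problem into an additive one.

```python
import math
import statistics

def log_decompose(img, radius=1):
    """Sketch of the multiplicative decomposition I = U * V.

    f = log(I) is decomposed additively (here with an assumed
    median smoother), then both parts are exponentiated back,
    so that U * V reproduces I exactly.
    """
    h, w = len(img), len(img[0])
    f = [[math.log(p) for p in row] for row in img]
    u = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    vals.append(f[yy][xx])
            u[y][x] = statistics.median(vals)
    U = [[math.exp(v) for v in row] for row in u]
    V = [[img[y][x] / U[y][x] for x in range(w)] for y in range(h)]
    return U, V
```

Here V is a ratio image close to 1 in flat regions, and the product U × V recovers I, which is the multiplicative counterpart of the residual used in the additive case.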
  • The direction determination unit 106 calculates, for each direction, an evaluation value for determining the edge direction based on the pixel block centered on the target pixel position.
  • FIG. 3 is a diagram showing the method of determining the direction from the 5×5 pixel block.
  • A00 to A44 indicate the pixels in the pixel block, and the center pixel A22 corresponds to the target pixel position.
  • The candidate edge directions are the four directions shown in FIG. 3. The number of directions can be increased to eight as necessary.
  • To determine which of direction 1 to direction 4 lies most nearly along the edge, an evaluation value is calculated for each direction.
  • Various evaluation values are conceivable, but in this embodiment, the evaluation values E1 to E4 calculated based on Equation (11) are used.
  • E1 to E4 are the evaluation values corresponding to directions 1 to 4, respectively.
  • The evaluation value becomes small in a direction along an edge and large in a direction crossing an edge.
  • The direction discriminating unit 106 finds the smallest of the evaluation values E1 to E4 and determines the direction corresponding to that value as the direction along the edge.
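Equation (11) is not legible in this text, so the sketch below assumes a common form of directional evaluation value: the sum of absolute differences between the center pixel and the pixels lying along each candidate direction. As the surrounding description states, the value is small along an edge and large across it, and the smallest value wins.

```python
# Pixel offsets along each candidate direction in a 5x5 block.
# This stands in for Equation (11), whose exact form is not
# legible here; the direction numbering is also an assumption.
DIRECTIONS = {
    1: [(0, -2), (0, -1), (0, 1), (0, 2)],    # horizontal
    2: [(-2, 2), (-1, 1), (1, -1), (2, -2)],  # diagonal /
    3: [(-2, 0), (-1, 0), (1, 0), (2, 0)],    # vertical
    4: [(-2, -2), (-1, -1), (1, 1), (2, 2)],  # diagonal \
}

def discriminate_direction(block):
    """Return the direction whose evaluation value is smallest.

    The evaluation value of a direction is the sum of absolute
    differences between the center pixel and the pixels along
    that direction: small along an edge, large across it.
    """
    c = len(block) // 2
    evals = {}
    for d, offsets in DIRECTIONS.items():
        evals[d] = sum(abs(block[c + dy][c + dx] - block[c][c])
                       for dy, dx in offsets)
    return min(evals, key=evals.get)
```

Running this on a block containing a horizontal edge selects the horizontal direction, since pixel values change little along the edge but sharply across it.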
  • The detected edge direction is sent to the noise reduction unit 107. Next, the details of the operation of the noise reduction unit 107 will be described.
  • FIG. 4 is a diagram illustrating an example of the configuration of the noise reduction unit 107.
  • The noise reduction unit 107 includes a direction-specific filter unit 120 and a coring unit 121.
  • The extraction unit 400 and the direction discriminating unit 106 are connected to the direction-specific filter unit 120.
  • The direction-specific filter unit 120 is connected to the coring unit 121.
  • The coring unit 121 is connected to the image composition unit 108.
  • The pixel block of the second component image V obtained from the extraction unit 400 is input to the direction-specific filter unit 120.
  • The direction-specific filter unit 120 determines the filter coefficients based on the edge direction judgment result from the direction determination unit 106, that is, the edge direction, and performs a filtering process with the determined filter coefficients.
  • The filter coefficients are designed (weighted) so that the pixels positioned along the edge direction relative to the target pixel contribute strongly to the filter output.
  • The filtering result is sent to the coring unit 121.
  • The coring unit 121 performs coring processing on the filter result to set minute signals to zero.
  • FIGS. 5A to 5C show an example of the processing in the coring unit 121.
  • Let A be the signal value before processing, B the reference value, and D the signal value after processing. The coring process is represented by Equation (12) and the graph in FIG. 5A: D = A - T1 when A ≥ B + T1; D = B when B - T2 < A < B + T1; D = A + T2 when A ≤ B - T2.
  • T1 and T2 are the parameters that determine the upper and lower thresholds, respectively.
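A sketch of the coring characteristic of Equation (12). The equation is not fully legible in this text, so the standard coring form is assumed: inputs within the thresholds of the reference value are clamped to it, and inputs outside are pulled toward it by the threshold.

```python
def coring(a, b, t1, t2):
    """Coring sketch: a = input, b = reference value,
    t1 / t2 = upper / lower thresholds.

    Small deviations from b (noise) become exactly b, larger
    deviations are reduced by the threshold amount, keeping the
    transfer curve continuous.
    """
    if a >= b + t1:
        return a - t1
    if a <= b - t2:
        return a + t2
    return b
```

Applied to the filtered second component image V' with b = 0, this zeroes out the small residual oscillations that remain after directional filtering while only attenuating strong texture.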
  • FIGS. 5B and 5C show an example in which the coring process is performed on a one-dimensional signal: FIG. 5B shows the signal before the process, and FIG. 5C shows the signal after it.
  • FIG. 6 shows the flow of the processing from the image decomposition unit 105 to the image composition unit 108 when it is implemented by software.
  • The program used in the software processing is stored in, for example, the storage device of the computer.
  • The CPU of the computer reads the program from the storage device and executes it. The storage medium may be, for example, a magneto-optical disk, a CD-ROM, a DVD-ROM, or a semiconductor memory. It is also possible to distribute the program to a computer via a communication line, and the computer receiving this distribution can execute the program.
  • In step S01, the image signal I is decomposed into the first component image U and the second component image V.
  • In step S02, evaluation values are calculated based on the first component image U, and the edge direction is determined based on the evaluation values.
  • In step S03, direction-specific filtering is performed on the second component image V based on the edge direction determined in step S02.
  • In step S04, coring processing is performed on the second component image V' after the direction-specific filtering.
  • In step S05, the first component image U and the second component image V'' after the coring in step S04 are combined to obtain a composite image.
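The steps S01 through S05 can be illustrated end to end on a one-dimensional signal. Directional filtering has no meaning in one dimension and is omitted; the 3-tap median smoother and the threshold value are illustrative assumptions, while the residual decomposition, coring, and 1:1 recombination follow the description above.

```python
import statistics

def denoise_1d(signal, t=2):
    """1-D sketch of steps S01-S05: decompose into skeleton U and
    residual V, core V, and recombine U and V'' at a 1:1 ratio."""
    n = len(signal)
    # S01: skeleton via a 3-tap median (border samples clamped)
    U = [statistics.median([signal[max(i - 1, 0)],
                            signal[i],
                            signal[min(i + 1, n - 1)]])
         for i in range(n)]
    V = [s - u for s, u in zip(signal, U)]
    # S04: coring - small residuals become zero, large ones shrink
    V2 = [0 if abs(v) <= t else (v - t if v > 0 else v + t) for v in V]
    # S05: recombine at a 1:1 ratio
    return [u + v for u, v in zip(U, V2)]
```

Low-amplitude jitter is absorbed entirely by the coring step, while a strong isolated excursion survives in attenuated form, which mirrors the edge-preserving behavior the text claims for the two-component scheme.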
  • As described above, the image signal is decomposed into the first component image U, which is a skeleton component, and the second component image V, obtained as the residual of the first component image U with respect to the original image signal.
  • The direction of the edge component is determined with respect to the first component image U, and noise reduction is applied to the second component image V in accordance with the direction of the edge component.
  • By determining the direction of the edge component based on the first component image U, which consists of the skeleton component, the direction can be determined accurately without being affected by noise.
  • By applying noise reduction to the noise-containing second component image V in accordance with the direction of the edge component, noise can be reduced while preserving the edges.
  • The pixel block of the first component image U and the pixel block of the second component image V corresponding to the pixel block of the first component image U are extracted.
  • Based on the pixel values of the pixel group located along the edge component direction relative to the pixel position on the second component image V corresponding to the target pixel position, the finely oscillating component in the region is calculated.
  • When noise reduction is applied to the second component image V, calculating the oscillating component in consideration of the direction of the edge component yields noise reduction that preserves the edge component.
  • The edge direction discrimination accuracy from the first component image U can be improved.
  • The image decomposition unit 105 calculates a total variation norm for the image signal I and minimizes it, thereby extracting from the image signal I the first component image U, which includes the edge component and the flat component; the first component image U can therefore be extracted with high accuracy.
  • When the original image signal I is composed of a plurality of color components, the image decomposition unit 105, the direction discriminating unit 106, and the noise reduction unit 107 process each color component of the image signal I in turn, making it possible to reduce the noise of each component.
  • FIG. 7 is a system configuration diagram of the image processing apparatus according to the second embodiment of the present invention.
  • In the second embodiment, the direction discriminating unit 106 is replaced with a direction discriminating unit 234, and the noise reduction unit 107 is replaced with a noise reduction unit 201.
  • A parameter setting unit 200 is added.
  • The image decomposition unit 105 is connected to the image composition unit 108, the parameter setting unit 200, and the extraction unit 400.
  • The extraction unit 400 is connected to the direction discriminating unit 234 and the noise reduction unit 201.
  • The direction discriminating unit 234 and the parameter setting unit 200 are connected to the noise reduction unit 201.
  • The noise reduction unit 201 is connected to the image composition unit 108.
  • In the first embodiment, the upper and lower threshold values in the coring process are set to predetermined fixed values.
  • In the second embodiment, the amount of noise in the second component image V is estimated based on the signal level of the first component image U, and the upper and lower thresholds are set based on that noise amount. With this configuration, the effect of the coring process can be controlled according to the amount of noise.
  • The extraction unit 400 extracts the pixel block of the first component image U and the pixel block of the second component image V.
  • The pixel block of the first component image U extracted by the extraction unit 400 is sent to the direction discriminating unit 234, and the pixel block of the second component image V is sent to the direction discriminating unit 234 and the noise reduction unit 201.
  • The direction discriminating unit 234 discriminates the edge direction based on the pixel blocks of the first component image U and the second component image V.
  • The edge direction discrimination result is sent to the noise reduction unit 201.
  • The parameter setting unit 200 first obtains the signal level of the first component image U from the image decomposition unit 105, and then, using a noise amount model set in advance,
  • estimates for each pixel the amount of noise in the second component image V corresponding to the signal level of U. The details of the noise amount estimation will be described later.
  • The parameters T1 and T2 are set based on the estimated noise amount.
  • The parameters T1 and T2 are set to values proportional to the noise amount by, for example, Equation (13).
  • In Equation (13), k is a predetermined coefficient, for example, 1/2.
  • The noise reduction unit 201, as in the first embodiment, first determines the filter coefficients for the pixel block of the second component image V based on the edge direction indicated by the edge direction determination result obtained from the direction determination unit 234, and applies the filtering process.
  • The operation of the direction discriminating unit 234 is similar to that of the direction determination unit 106 in the first embodiment, except that the direction is discriminated based on a reference image I dir generated by weighted-averaging the pixel values at corresponding pixel positions of the pixel block of the first component image U and the pixel block of the second component image V at a predetermined ratio. This point will be described with reference to FIG. 8.
  • First, the pixel block of the first component image U shown in FIG. 8A and the pixel block of the second component image V shown in FIG. 8B are obtained.
  • In FIG. 8A, A00 to A44 indicate the pixels in the pixel block, and the center pixel A22 corresponds to the target pixel position.
  • In FIG. 8B, B00 to B44 indicate the pixels in the pixel block, and the center pixel B22 corresponds to the target pixel position.
  • New pixel values are calculated from these, and the pixel block of the reference image I dir is generated and used for direction determination.
  • Each pixel C00 to C44 in FIG. 8C is a pixel of the reference image I dir used for direction discrimination.
  • Each pixel C00 to C44 is constructed from the pixels A00 to A44 of the pixel block of the first component image and the pixels B00 to B44 of the pixel block of the second component image by weighted addition as in Equation (14), Cij = (1 - wij) Aij + wij Bij.
  • wij is a weight of approximately 0 to 0.5.
  • Since the first component image U is weighted more heavily, the direction determination is resistant to noise while still reflecting the edge and texture structure.
  • wij can take a different value depending on the pixel position in the pixel block.
  • A configuration in which wij can be input to the direction discriminating unit from the outside makes it possible to set wij arbitrarily.
  • Direction determination is then performed by the same method as in the first embodiment, and the direction determination result is sent to the noise reduction unit 201.
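The construction of the reference image I dir can be sketched as follows. The exact form of Equation (14) is not legible in this text; the blend C[i][j] = (1 − w)·A[i][j] + w·B[i][j], with w between roughly 0 and 0.5 so that the first component is favored, is inferred from the surrounding description, and a scalar w stands in for the per-position weights wij.

```python
def reference_block(A, B, w=0.25):
    """Blend the first-component block A with the second-component
    block B to build the direction-discrimination reference block.

    C[i][j] = (1 - w) * A[i][j] + w * B[i][j]; the form and the
    scalar weight are assumptions (the patent allows a different
    weight wij per pixel position).
    """
    return [[(1 - w) * a + w * b for a, b in zip(ra, rb)]
            for ra, rb in zip(A, B)]
```

With w near zero the discrimination relies on the noise-free skeleton alone; raising w toward 0.5 mixes in fine texture from V, trading noise robustness for sensitivity to detail.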
  • A signal level versus noise amount model (hereinafter referred to as a noise model) for the second component image V is recorded in advance.
  • The parameter setting unit 200 estimates the amount of noise contained in the second component image V by referring to this noise model.
  • The noise model will be described below.
  • The amount of noise varies not only with the signal level but also with the temperature and gain of the imaging device.
  • FIG. 9A plots the amount of noise against the signal level at a given temperature for three ISO sensitivities (gains) of 100, 200, and 400.
  • Each curve has the form shown in Equation (15), but its coefficients depend on the ISO sensitivity, that is, on the gain. Letting t be the temperature and g the gain, a noise model that takes these into account can be expressed by Equation (16).
  • In Equation (16), the coefficients αgt, βgt, and γgt are constant terms determined according to the temperature t and the gain g. For color signals, this noise model can be applied independently to each color signal.
  • Since the signal processing unit 104 performs signal processing after the A/D conversion, the above model cannot be used as it is. In other words, the characteristics of the signal processing unit 104 must be taken into account when applying the noise model.
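The noise model of Equations (15) and (16) and the threshold setting of Equation (13) can be sketched as follows. The quadratic dependence on signal level and every coefficient value below are assumptions for illustration only; the actual equations are not legible in this text, which states only that the coefficients depend on gain and temperature and that T1 and T2 are proportional to the noise amount with k = 1/2 as an example.

```python
# Hypothetical coefficient table: one (alpha, beta, gamma) triple
# per (gain, temperature) combination. The values are illustrative.
COEFFS = {
    (100, 25): (1e-5, 0.01, 0.5),
    (200, 25): (2e-5, 0.02, 0.8),
    (400, 25): (4e-5, 0.04, 1.2),
}

def noise_amount(level, gain, temp, coeffs=COEFFS):
    """Noise amount N(L) = alpha*L^2 + beta*L + gamma, with the
    coefficients selected by gain and temperature (Equation (16)
    idea; the quadratic form is an assumption)."""
    alpha, beta, gamma = coeffs[(gain, temp)]
    return alpha * level ** 2 + beta * level + gamma

def coring_thresholds(n, k=0.5):
    """Equation (13) sketch: thresholds proportional to the
    estimated noise amount (T1 = T2 = k * N assumed)."""
    return k * n, k * n
```

In the apparatus described above, the level fed to `noise_amount` would be the signal level of the first component image U at the target pixel, and the returned thresholds would drive the per-pixel coring of V.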
  • For example, when the signal processing unit 104 performs a knee process that nonlinearly converts a 12-bit input signal into an 8-bit output signal,
  • the signal processing unit 104 has input/output characteristics as shown in FIG. 9B.
  • L(12) represents the signal level immediately after the A/D conversion,
  • and L(8) represents the signal level after the signal processing.
  • The relationship between the signal level L(8) and the amount of noise then follows the curve shown in FIG. 9C.
  • The parameter setting unit 200 obtains the signal level of the first component image U from the image decomposition unit 105.
  • The corresponding noise amount is obtained by referring to the noise model shown in FIG. 9C, and, according to the noise amount,
  • the parameters T1 and T2 are set, for example, by Equation (13).
  • The noise reduction unit 201 is largely the same as the noise reduction unit 107 in the first embodiment, and identical configurations are given the same names and numbers. Only the differences are described below.
  • The noise reduction unit 201 differs from the noise reduction unit 107 in the first embodiment in that the coring process is controlled based on the parameters T1 and T2 obtained from the parameter setting unit 200.
  • FIG. 10 is a diagram showing an example of the configuration of the noise reduction unit 201.
  • The noise reduction unit 201 has a configuration in which the coring unit 121 of the noise reduction unit 107 shown in FIG. 4 is replaced with a coring unit 221.
  • The direction discriminating unit 234 is connected to the direction-specific filter unit 120.
  • The parameter setting unit 200 and the direction-specific filter unit 120 are connected to the coring unit 221.
  • The coring unit 221 is connected to the image composition unit 108.
  • The pixel block of the second component image V output from the extraction unit 400 is input to the direction-specific filter unit 120.
  • The direction-specific filter unit 120 determines the filter direction based on the edge direction obtained from the direction discrimination unit 234, performs the filter processing, and transfers the filter result to the coring unit 221.
  • The coring unit 221 performs coring processing on the filter processing result so that minute signals are suppressed to zero.
  • The operation is the same as that of the coring unit 121 in the noise reduction unit 107 in the first embodiment.
  • However, since the parameters T1 and T2 that specify the upper and lower coring thresholds are obtained from the parameter setting unit 200 and used, the strength of the coring processing can be controlled according to the estimated noise amount.
  • The coring result is transferred to the image composition unit 108.
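A minimal sketch of such a coring operation. The exact transfer curve of the coring unit 221 is not given in this text; the common soft-threshold form is assumed:

```python
def coring(x, t1, t2):
    """Suppress values inside the coring band [t1, t2] to zero and shrink
    values outside it toward zero, so the transfer curve stays continuous.
    t1 < 0 < t2 are the thresholds supplied by the parameter setting unit."""
    if x > t2:
        return x - t2
    if x < t1:
        return x - t1
    return 0.0
```

With T1 = −2 and T2 = 2, for example, an input of 5 is output as 3, while a small input of 1 (likely noise) becomes 0.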
  • It is also possible to configure the image pickup element 102 as a monochrome image pickup element and to operate based on a monochrome image signal. In that case, the image signal I becomes a monochrome signal, and the components u and v decomposed from the image signal also become monochrome signals.
  • Fig. 11 shows the flow when the processing from the image decomposition unit 105 to the image composition unit 108 is performed by software processing.
  • The program used in the software processing is stored in a memory such as the ROM or RAM of a computer and executed by the CPU of the computer. The same step numbers are assigned to the same processing as in the signal processing flow of the first embodiment.
  • In step S01, the image signal I is decomposed into a first component image U and a second component image V.
  • In step S10, the edge direction is determined based on the evaluation value.
  • In step S11, based on the noise model, the noise amount of the second component image V is estimated for each pixel from the signal level of the first component image U, and the parameters T1 and T2 are set based on the noise amount.
  • In step S03, direction-specific filter processing is performed on the second component image V based on the edge direction determined in step S10.
  • In step S12, coring processing is performed on the second component image V′ after the direction-specific filter processing, based on the parameters T1 and T2 set in step S11.
  • In step S05, the first component image U and the second component image V″ after the coring processing in step S12 are synthesized to obtain a synthesized image I′.
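Steps S01 through S05 above can be sketched as a pipeline. Each stage is passed in as a callable, since the concrete operators (decomposition, direction discrimination, filtering, and so on) are defined elsewhere in the specification and are not reproduced here:

```python
def denoise(image, decompose, discriminate, set_params, dir_filter, coring, compose):
    """Illustrative flow of the second embodiment (Fig. 11)."""
    u, v = decompose(image)          # S01: split into skeleton U and residual V
    direction = discriminate(u, v)   # S10: edge-direction discrimination
    t1, t2 = set_params(u)           # S11: noise-adaptive parameters T1, T2
    v1 = dir_filter(v, direction)    # S03: direction-specific filtering
    v2 = coring(v1, t1, t2)          # S12: coring with T1, T2
    return compose(u, v2)            # S05: synthesized image I'
```

Any concrete decomposition (for example a total-variation skeleton/texture split) and any coring rule can be plugged in without changing the control flow.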
  • As described above, according to the second embodiment, the noise reduction parameters T1 and T2 are set based on the first component image U, and noise reduction according to the direction of the edge component is performed on the second component image V using the parameters T1 and T2. As a result, the strength of the noise reduction can be controlled adaptively according to the noise amount estimated from the first component image U, and more accurate noise reduction becomes possible.
  • Further, the pixel block of the first component image U and the pixel block of the second component image V corresponding to it are extracted, and for the target pixel of the pixel block of the second component image V, the filtered component in the region is calculated based on the pixel values of the pixel group aligned in the direction orthogonal to the direction of the edge component.
  • The calculated filtered component is then corrected based on the noise reduction parameters T1 and T2.
  • Since the filtered component is calculated in consideration of the direction of the edge component and is corrected with the parameters T1 and T2 set based on the noise amount, highly accurate noise reduction can be achieved.
  • Further, since the direction discrimination unit 234 discriminates the direction of the edge component at the target pixel using the pixel block of the first component image U and the pixel block of the second component image V, a highly accurate direction discrimination result can be obtained that takes into account minute fluctuation components, such as textures, that are not included in the skeleton component.
  • Further, since the direction discrimination unit 234 determines the direction of the edge component using an image signal calculated by weighted averaging of the pixel block of the first component image U and the pixel block of the second component image V, a highly accurate direction discrimination result can be obtained that simultaneously considers the skeleton component and minute fluctuation components, such as textures, that are not included in the skeleton component.
  • Moreover, by changing the weights, the degree to which the first component image containing the skeleton component and the second component image containing the minute fluctuation components affect the direction discrimination accuracy can be changed freely. This makes it possible to improve the direction discrimination accuracy flexibly.
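The weighted averaging used for direction discrimination can be illustrated as a per-pixel blend of the two component images. The linear form and the default weight are assumptions for illustration; the patent only states that the weights are freely changeable:

```python
def discrimination_signal(u_block, v_block, w=0.5):
    """Blend the skeleton block U and the fluctuation block V for edge-
    direction discrimination. w = 0 uses only U; w = 1 uses only V;
    intermediate values consider both simultaneously."""
    return [(1.0 - w) * u + w * v for u, v in zip(u_block, v_block)]
```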
  • Further, the image signal I is composed of a plurality of color components, and the image decomposition unit 105, the direction discrimination unit 234, and the noise reduction unit 201 operate in order on each color component of the image signal I.
  • The noise reduction unit 201 applies, to the second component image V generated by the image decomposition unit 105 for each color component, noise reduction according to the noise reduction parameters T1 and T2 and the direction of the edge component.
  • FIG. 12 is a system configuration diagram of an imaging apparatus according to the third embodiment of the present invention.
  • The basic configuration is the same as that of the second embodiment, and the same configuration is assigned the same name and number. Only the differences will be described below.
  • The configuration of the third embodiment differs from the configuration of the second embodiment shown in FIG. 7 in that a buffer 301, a correlation calculation unit 302, and a parameter correction unit 303 are added, and the noise reduction unit 201 is replaced with a noise reduction unit 304.
  • The image decomposition unit 105 is connected to the extraction unit 400.
  • The extraction unit 400 is connected to the direction discrimination unit 234, the noise reduction unit 304, and the buffer 301.
  • The buffer 301 is connected to the correlation calculation unit 302.
  • The direction discrimination unit 234 is connected to the noise reduction unit 304.
  • The parameter setting unit 200 and the correlation calculation unit 302 are connected to the parameter correction unit 303.
  • The parameter correction unit 303 is connected to the noise reduction unit 304, and the noise reduction unit 304 is connected to the image composition unit 108.
  • As in the second embodiment, the noise amount in the second component image V is estimated based on the first component image U, and the threshold values are set based on the noise amount.
  • When a structure such as an edge or texture is present in the image, the cross-correlation coefficient between the color signals evaluated in a 3×3 neighborhood region (hereinafter simply the correlation coefficient) is a positive value close to 1.
  • In a color image containing noise, however, the correlation between the color signals decreases, so it can be said that the smaller the value of the correlation coefficient, the more noise is included in the image signal.
  • Therefore, the color correlation between the color signals of the second component image V after the decomposition is obtained, and the parameters T1 and T2 that control the coring processing are corrected according to the obtained color correlation.
  • When the correlation is high, the coring width in the coring processing is reduced so that the original signal is preserved.
  • When the correlation is low, the coring width is widened so that the signal is flattened. This makes a more accurate coring process possible.
  • That is, the correlation between the color signals of the decomposed second component image V is obtained, and the threshold values of the coring processing are set based on both the noise amount and the color correlation.
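A hypothetical correction rule with the behavior just described: narrow the coring band when inter-channel correlation is high (structure likely), keep it wide when correlation is low (noise likely). The linear mapping is an assumption for illustration, not the patent's actual correction equation:

```python
def correct_thresholds(t1, t2, r):
    """r: minimum correlation coefficient among R-G, G-B, B-R, in [-1, 1].
    High r -> structure likely -> shrink the coring width to preserve it;
    low (or negative) r -> noise likely -> keep the full width."""
    scale = 1.0 - max(0.0, r)   # assumed linear scaling
    return t1 * scale, t2 * scale
```

These corrected values play the role of T1′ and T2′ supplied to the coring unit.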
  • The pixel block of the second component image V extracted by the extraction unit 400 is stored in the buffer 301.
  • In the parameter setting unit 200, the parameters T1 and T2 are set based on, for example, Equation (13).
  • The parameters T1 and T2 are transferred to the parameter correction unit 303.
  • The correlation calculation unit 302 obtains the pixel blocks of the second component image V for all the color signals from the buffer 301, calculates the correlation coefficients between the color signals in the neighborhood region including the target pixel position, and outputs the minimum correlation coefficient r among them. Details of the correlation calculation will be described later.
  • The minimum correlation coefficient r is transferred to the parameter correction unit 303.
  • The parameter correction unit 303 corrects the parameters T1 and T2 based on the minimum correlation coefficient r.
  • The corrected parameters are hereinafter referred to as T1′ and T2′.
  • The noise reduction unit 304 performs noise reduction by applying, independently to each color signal, the filter processing based on the edge direction obtained from the direction discrimination unit 234 and the coring processing based on the parameters T1′ and T2′ obtained from the parameter correction unit 303.
  • Further, the second component image V″ after the coring processing and the second component image V before the noise reduction are weighted and added based on a predetermined ratio to suppress visually noticeable artifacts.
  • The second component image V″ after the weighted addition is transferred to the image composition unit 108. Details of the noise reduction processing will be described later.
  • Here, for simplicity, the second component image V will be explained as being one-dimensional. Since the second component image V contains fine fluctuation components such as texture and noise, while the first component image U contains the skeleton component of the image signal I, the second component image V is a component that fluctuates around zero.
  • Figs. 13A and 13B show the relation between the pixel position of the image signal I and the signal level of each color component.
  • In Fig. 13A, there is no correlation between the color components.
  • The signal level of each color component at each pixel position is as shown in Table 1.
  • Table 2 shows the correlation coefficients between R and G, between G and B, and between B and R.
  • Table 3 shows the signal level of each color component at each pixel position.
  • In this case, the minimum correlation coefficient r is 0.873891.
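The correlation coefficient used here can be computed as an ordinary Pearson cross-correlation over the neighborhood pixels; a sketch follows (the 3×3 windowing and the exact normalization used by the correlation calculation unit 302 are assumptions):

```python
def correlation(xs, ys):
    """Pearson correlation between two equal-length pixel sequences.
    Assumes neither sequence is constant (nonzero variance)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def min_channel_correlation(r, g, b):
    # Minimum of the pairwise correlations R-G, G-B, B-R,
    # corresponding to the minimum correlation coefficient r in the text.
    return min(correlation(r, g), correlation(g, b), correlation(b, r))
```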
  • As described above, by correcting the threshold values of the coring processing in accordance with the correlation between the color components, it is possible to discriminate, among the fluctuation components included in the second component image V, between the component due to noise and the component due to the original structure of the image, such as texture, and to suppress deterioration of the latter by the coring.
  • The noise reduction unit 304 is basically the same as the noise reduction unit 201 in the second embodiment, and the same name and number are assigned to the same configuration. It differs, however, in that the second component image V″ after the coring processing and the second component image V immediately after the image decomposition are weighted, added, and output.
  • FIG. 14 is a diagram illustrating an example of the configuration of the noise reduction unit 304.
  • The noise reduction unit 304 has a configuration in which an adder 321 is added to the noise reduction unit 201 shown in FIG. 10.
  • The parameter correction unit 303 is connected to the coring unit 221.
  • The coring unit 221 is connected to the adder 321.
  • The adder 321 is connected to the image composition unit 108.
  • Based on the parameters T1′ and T2′ corrected by the parameter correction unit 303, the direction-specific filter unit 120 and the coring unit 221 perform the direction-specific filter processing and the coring processing, respectively.
  • The second component image V″ after the coring processing is transferred to the adder 321.
  • The adder 321 obtains the second component image V immediately after the image decomposition from the image decomposition unit 105, and performs weighted addition of the second component image V and the second component image V″ after the coring processing based on a predetermined ratio set in advance.
  • This makes it possible to suppress the artifacts that occur when the noise reduction by the direction-specific filter processing and the coring processing acts excessively.
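The weighted addition performed by the adder 321 can be sketched as a per-pixel blend. The weight value is an assumption, since the patent only specifies "a predetermined ratio set in advance":

```python
def blend(v_before, v_after, w=0.8):
    """Blend the coring result v_after with the pre-reduction component
    v_before. w is the weight of the noise-reduced signal; 1 - w
    re-injects some of the original to mask over-smoothing artifacts."""
    return [w * a + (1.0 - w) * b for b, a in zip(v_before, v_after)]
```

With w = 0.8, a texture sample of 10 that the coring suppressed to 0 is restored to about 2, keeping a trace of the original structure visible.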
  • FIG. 15 shows the flow when the processes from the image decomposition unit 105 to the image composition unit 108 are performed by software processing.
  • The program used in the software processing is stored in a memory such as the ROM or RAM of a computer and executed by the CPU of the computer.
  • The same step numbers are assigned to the same processing as in the signal processing flow of the first embodiment.
  • In step S01, the image signal I is decomposed into a first component image U and a second component image V.
  • In step S10, after the evaluation value indicating the edge direction is calculated based on the first component image U and the second component image V, the edge direction is determined based on the evaluation value.
  • In step S13, the correlation coefficients between the color signals of the second component image V are calculated, and the minimum correlation coefficient r is obtained.
  • In step S11, the noise amount of the second component image V is estimated for each pixel from the signal level of the first component image U based on the noise model, and the parameters T1 and T2 are set based on the noise amount.
  • In step S14, the parameters T1 and T2 obtained in step S11 are corrected based on the minimum correlation coefficient r obtained in step S13 to obtain the parameters T1′ and T2′.
  • In step S03, direction-specific filter processing is performed on the second component image V based on the edge direction determined in step S10.
  • In step S12, coring processing is performed on the second component image V′ after the direction-specific filter processing, based on the parameters T1′ and T2′ set in step S14.
  • In step S15, the second component image V″ after the coring processing and the second component image V before the noise reduction are weighted and added at a predetermined ratio.
  • In step S05, the first component image U and the second component image V″ after the weighted addition in step S15 are synthesized to obtain a synthesized image I′.
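The third-embodiment flow of Fig. 15 can likewise be sketched as a pipeline; the concrete operators are again passed in as callables, since their definitions (Equations (13) and the correction rule) are given elsewhere in the specification:

```python
def denoise3(image, decompose, discriminate, min_corr, set_params,
             correct, dir_filter, coring, blend, compose):
    """Illustrative flow of the third embodiment (Fig. 15)."""
    u, v = decompose(image)          # S01: decomposition into U and V
    direction = discriminate(u, v)   # S10: edge-direction discrimination
    r = min_corr(v)                  # S13: minimum inter-channel correlation
    t1, t2 = set_params(u)           # S11: noise-adaptive thresholds T1, T2
    t1c, t2c = correct(t1, t2, r)    # S14: correlation-based correction -> T1', T2'
    v1 = dir_filter(v, direction)    # S03: direction-specific filtering
    v2 = coring(v1, t1c, t2c)        # S12: coring with corrected thresholds
    v3 = blend(v, v2)                # S15: weighted addition with original V
    return compose(u, v3)            # S05: synthesized image I'
```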
  • As described above, according to the third embodiment, the image signal I is composed of a plurality of color components, and the first component image U and the second component image V decomposed from the original image signal I are buffered.
  • The direction discrimination unit 234 and the noise reduction unit 304 then operate in order on each color component of the first component image U and the second component image V. Since the noise reduction parameters, set based on the noise amount estimated from the first component image U containing the skeleton component for each color component of the color image signal, are corrected based on the correlation coefficient for each color component and used for the noise reduction, more accurate noise reduction is possible.
  • Further, the second component image V before the noise reduction and the second component image V″ after the noise reduction are added based on a predetermined ratio.
  • When the noise reduction by the direction-specific filter processing, the coring processing, and the like acts excessively, artifacts may be generated; this addition makes it possible to prevent such artifacts from becoming visually conspicuous.
  • The present invention is not limited to the first to third embodiments described above.
  • In the first to third embodiments described above, an example in which the image processing apparatus according to the present invention is applied to an imaging apparatus has been described.
  • However, the image processing apparatus can also be applied to other kinds of apparatus, or configured as a stand-alone image processing apparatus.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention concerns an image processing apparatus comprising: an image decomposition unit that decomposes an original image signal into images representing a plurality of components, including both a first component image, which is a skeleton component revealing the overall structure of an image comprising the edges of the image and the flat regions delimited by those edges, and a second component image obtained by calculations based on the original image signal and also on the first component image; a direction determination unit that determines the direction of each edge component for the first component image; and a noise reduction unit that reduces noise according to the directions of the edge components for the second component image.
PCT/JP2008/071996 2007-12-25 2008-11-27 Appareil de traitement d'image, procédé de traitement d'image et programme de traitement d'image WO2009081709A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2007-331972 2007-12-25
JP2007331972 2007-12-25

Publications (1)

Publication Number Publication Date
WO2009081709A1 true WO2009081709A1 (fr) 2009-07-02

Family

ID=40801020

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2008/071996 WO2009081709A1 (fr) 2007-12-25 2008-11-27 Appareil de traitement d'image, procédé de traitement d'image et programme de traitement d'image

Country Status (1)

Country Link
WO (1) WO2009081709A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011015277A (ja) * 2009-07-03 2011-01-20 Olympus Corp 画像処理装置、画像処理方法、画像処理プログラムおよび画像処理プログラムが記録された記録媒体
WO2012042771A1 (fr) * 2010-09-28 2012-04-05 パナソニック株式会社 Processeur d'image, procédé de traitement d'image et circuit intégré
CN103218790A (zh) * 2013-04-25 2013-07-24 华为技术有限公司 图像滤波的方法和滤波器

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05347723A (ja) * 1992-06-15 1993-12-27 Toshiba Corp ノイズ低減回路
JPH08202870A (ja) * 1995-01-31 1996-08-09 Hitachi Ltd 画像処理方法
JPH09121366A (ja) * 1995-08-29 1997-05-06 Samsung Electron Co Ltd 色信号に含まれた輪郭を補正する方法及びこれをカラービデオ機器で具現するための回路
JP2001057677A (ja) * 1999-06-10 2001-02-27 Fuji Photo Film Co Ltd 画像処理方法および装置並びに記録媒体
WO2005081543A1 (fr) * 2004-02-19 2005-09-01 Olympus Corporation Système de formation d’images et programme de traitement d’images
WO2006101128A1 (fr) * 2005-03-22 2006-09-28 Olympus Corporation Dispositif de traitement d’image et endoscope
JP2007041834A (ja) * 2005-08-03 2007-02-15 Olympus Corp 画像処理装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05347723A (ja) * 1992-06-15 1993-12-27 Toshiba Corp ノイズ低減回路
JPH08202870A (ja) * 1995-01-31 1996-08-09 Hitachi Ltd 画像処理方法
JPH09121366A (ja) * 1995-08-29 1997-05-06 Samsung Electron Co Ltd 色信号に含まれた輪郭を補正する方法及びこれをカラービデオ機器で具現するための回路
JP2001057677A (ja) * 1999-06-10 2001-02-27 Fuji Photo Film Co Ltd 画像処理方法および装置並びに記録媒体
WO2005081543A1 (fr) * 2004-02-19 2005-09-01 Olympus Corporation Système de formation d’images et programme de traitement d’images
WO2006101128A1 (fr) * 2005-03-22 2006-09-28 Olympus Corporation Dispositif de traitement d’image et endoscope
JP2007041834A (ja) * 2005-08-03 2007-02-15 Olympus Corp 画像処理装置

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Eizo Media Shori Symposium Dai 11 Kai Symposium Shiryo, 08 November, 2006 (08.11.06)", article YUKI ISHII ET AL.: "Josangata Kokkaku/ Texture Bunri no Zatsuon Taisei to Gazo Zatsuon Jokyo eno Oyo", pages: 29 - 30 *
YUKI ISHII ET AL.: "Josangata Kokkaku/ Texture Gazo Bunri no Gazo Shori eno Oyo", THE TRANSACTIONS OF THE INSTITUTE OF ELECTROS, vol. J90-D, no. 7, 1 July 2007 (2007-07-01), pages 1682 - 1685 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011015277A (ja) * 2009-07-03 2011-01-20 Olympus Corp 画像処理装置、画像処理方法、画像処理プログラムおよび画像処理プログラムが記録された記録媒体
WO2012042771A1 (fr) * 2010-09-28 2012-04-05 パナソニック株式会社 Processeur d'image, procédé de traitement d'image et circuit intégré
US8693801B2 (en) 2010-09-28 2014-04-08 Panasonic Corporation Image processing device, image processing method, and integrated circuit
JP5758908B2 (ja) * 2010-09-28 2015-08-05 パナソニック インテレクチュアル プロパティ コーポレーション オブアメリカPanasonic Intellectual Property Corporation of America 画像処理装置、画像処理方法及び集積回路
CN103218790A (zh) * 2013-04-25 2013-07-24 华为技术有限公司 图像滤波的方法和滤波器
CN103218790B (zh) * 2013-04-25 2016-01-27 华为技术有限公司 图像滤波的方法和滤波器

Similar Documents

Publication Publication Date Title
EP1289310A3 (fr) Méthode et dispositif de démosaiquage adaptif
EP1855486B1 (fr) Processeur d'image corrigeant le decalage des couleurs, programme et procede de traitement d'image et camera electronique
US8115833B2 (en) Image-acquisition apparatus
JP4526445B2 (ja) 撮像装置
JP5144202B2 (ja) 画像処理装置およびプログラム
US8310566B2 (en) Image pickup system and image processing method with an edge extraction section
JP5012967B2 (ja) 画像処理装置及び方法、並びにプログラム
JP6976733B2 (ja) 画像処理装置、画像処理方法、およびプログラム
JP4523008B2 (ja) 画像処理装置および撮像装置
US8774551B2 (en) Image processing apparatus and image processing method for reducing noise
JP5620343B2 (ja) 物体座標系変換装置、物体座標系変換方法、及び物体座標系変換プログラム
GB2547842A (en) Image processing device and method, image pickup device, program, and recording medium
WO2009081709A1 (fr) Appareil de traitement d'image, procédé de traitement d'image et programme de traitement d'image
JP3072766B2 (ja) スカラー・データ処理方式
JP6757407B2 (ja) 画像処理装置、画像処理方法および画像処理プログラム
JP2005517315A (ja) 色域圧縮
WO2008020487A1 (fr) dispositif de traitement d'image, programme DE TRAITEMENT D'IMAGE, et procédé de traitement d'image
JP2007026334A (ja) 画像処理装置
JP7437921B2 (ja) 画像処理装置、画像処理方法、及びプログラム
JP4687667B2 (ja) 画像処理プログラムおよび画像処理装置
JP2005182232A (ja) 輝度補正装置および輝度補正方法
JPH09270005A (ja) エッジ画像処理装置およびエッジ画像処理方法
JP5277388B2 (ja) 画像処理装置および撮像装置
TWI242752B (en) Method of using compressed image as bases to separate figures and texts for enhancing printing quality
JP3749533B2 (ja) 画像処理における輪郭強調又は平滑化の自動化方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08865130

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08865130

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP