US20170046846A1 - Image processing system and microscope system including the same - Google Patents


Info

Publication number
US20170046846A1
Authority
US
United States
Prior art keywords
unit
band
image
images
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/338,852
Inventor
Nobuyuki Watanabe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Olympus Corp
Original Assignee
Olympus Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2012073354A (JP5914092B2)
Priority claimed from JP2012075081A (JP5868758B2)
Application filed by Olympus Corp
Priority to US15/338,852
Publication of US20170046846A1
Legal status: Abandoned


Classifications

    • G06T7/0069
    • H04N13/268 Image signal generators with monoscopic-to-stereoscopic image conversion based on depth image-based rendering [DIBR]
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/571 Depth or shape recovery from multiple images from focus
    • H04N13/0235
    • H04N13/20 Image signal generators
    • H04N13/236 Image signal generators using stereoscopic image cameras using a single 2D image sensor using varifocal lenses or mirrors
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/958 Computational photography systems, e.g. light-field imaging systems, for extended depth of field imaging
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G06T2200/04 Indexing scheme for image data processing or generation, in general, involving 3D image data
    • G06T2200/21 Indexing scheme for image data processing or generation, in general, involving computational photography
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10056 Microscopic image
    • G06T2207/10148 Varying focus
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20064 Wavelet transform [DWT]
    • G06T2207/20221 Image fusion; Image merging
    • H04N23/56 Cameras or camera modules comprising electronic image sensors provided with illuminating means
    • H04N5/2256

Definitions

  • the present invention relates to an image processing system and a microscope system including the same.
  • In-focus evaluation based on a contrast is used not only for an autofocus function but also, for example, to acquire the depth information of an object.
  • the depth information is acquired by, for example, capturing an object at a plurality of focus positions and then selecting an in-focus image from the plurality of images for each position.
  • the depth information is used when capturing an object at a plurality of focus positions, selecting an in-focus image from the plurality of images for each position of the object, and synthesizing the in-focus images to create an all-in-focus image or a 3D reconstructed image.
  • a best-in-focus image is selected from a plurality of images having different focal planes for each position in an image, and the 3D shape of the sample is estimated.
  • optimization processing needs to be performed for the estimated value of the 3D shape.
  • This optimization processing can include reducing estimation errors of isolated points based on the correlation between pixels.
  • the optimization processing can also include estimating the sample shape for a position where the above-described selection cannot be done.
  • Jpn. Pat. Appln. KOKAI Publication No. 9-298682 discloses a technique concerning a microscope system for creating an all-in-focus image.
  • Jpn. Pat. Appln. KOKAI Publication No. 9-298682 discloses performing processing using a recovery filter after all-in-focus image creation.
  • the frequency band of an image generally changes depending on the optical system used to acquire the image, the magnification, the characteristics of the object, and the like.
  • the coefficient of the recovery filter is determined in accordance with the settings of the optical system, including the magnification and the numerical aperture of the objective lens, in consideration of the change in the band of the optical system.
  • Jpn. Pat. Appln. KOKAI Publication No. 2010-166247 discloses a technique of judging an in-focus state based on a contrast and creating an all-in-focus image based on an in-focus image. Jpn. Pat. Appln. KOKAI Publication No. 2010-166247 also discloses a technique concerning controlling the characteristics of a filter configured to restrict a high frequency so as to obtain a predetermined contrast even in an out-of-focus region.
  • an image processing system includes an image acquisition unit configured to acquire a plurality of images obtained by capturing a single object at different focus positions; a candidate value estimation unit configured to estimate, for each pixel of the images, a candidate value of a 3D shape based on the plurality of images; a band characteristics evaluation unit configured to calculate, for each pixel of the images, a band evaluation value of a band included in the images for each of a plurality of frequency bands; an effective frequency determination unit configured to determine an effective frequency of the pixel based on statistical information of the band evaluation value; and a candidate value modification unit configured to perform at least one of data correction and data interpolation for the candidate value based on the effective frequency and calculate a modified candidate value representing the 3D shape of the object.
  • a microscope system includes a microscope optical system; an imaging unit configured to acquire an image of a sample via the microscope optical system as a sample image; and the above described image processing system which is configured to acquire the sample image as the image.
  • FIG. 1 is a block diagram showing an example of a configuration of an image processing system according to first and second embodiments
  • FIG. 2 is a view showing an example of a frequency characteristic of a filter bank of a band processing unit according to the first and second embodiments;
  • FIG. 3 is a view showing another example of a frequency characteristic of a filter bank of band processing unit according to the first and second embodiments;
  • FIG. 4 is a flowchart showing an example of processing of the image processing system according to the first embodiment
  • FIG. 5 is a flowchart showing an example of noise/isolated point removal processing according to the first embodiment
  • FIG. 6A is a view showing an example of an original signal corresponding to a shape candidate value so as to explain coring processing
  • FIG. 6B is a view showing an example of a moving average and a threshold so as to explain coring processing
  • FIG. 6C is a view showing an example of a result of coring processing so as to explain coring processing
  • FIG. 7 is a flowchart showing an example of interpolation processing according to the first embodiment
  • FIG. 8 is a block diagram showing an example of a configuration of a microscope system according to a third embodiment
  • FIG. 9 is a block diagram showing an example of a configuration of an image processing system according to a fourth embodiment.
  • FIG. 10 is a flowchart showing an example of processing of the image processing system according to the fourth embodiment.
  • FIG. 11 is a view to explain wavelet transformation
  • FIG. 12 is a flowchart showing an example of processing of an image processing system according to a modification of the fourth embodiment.
  • FIG. 13 is a block diagram showing an example of a configuration of a microscope system according to a fifth embodiment.
  • FIG. 1 shows the outline of an example of the configuration of an image processing system 100 according to this embodiment.
  • the image processing system 100 comprises an image acquisition unit 110 , a band processing unit 120 , a band characteristics evaluation unit 130 , an effective frequency determination unit 140 , a candidate value estimation unit 150 , a data modification unit 160 , a 3D shape estimation unit 170 and an image synthesis unit 180 .
  • the effective frequency determination unit 140 includes a statistical information calculation unit 142 and a parameter determination unit 144 .
  • the candidate value estimation unit 150 includes a contrast evaluation unit 152 and a shape candidate estimation unit 154 .
  • the data modification unit 160 includes a data correction unit 162 and a data interpolation unit 164 .
  • the image acquisition unit 110 includes a storage unit 114 .
  • the image acquisition unit 110 acquires a plurality of images obtained by capturing a single object while changing the focus position and stores them in the storage unit 114 .
  • Each of the images is assumed to include information about the focus position of the optical system, that is, information about the depth at the time of image acquisition.
  • the image acquisition unit 110 outputs the images in response to requests from the band processing unit 120 , the shape candidate estimation unit 154 , and the image synthesis unit 180 .
  • the band processing unit 120 has a filter bank. That is, the band processing unit 120 includes, for example, a first filter 121 , a second filter 122 , and a third filter 123 .
  • FIG. 2 shows the frequency characteristics of the first filter 121 , the second filter 122 , and the third filter 123 . As shown in FIG. 2 , these filters are low-pass filters, and the cutoff frequency becomes high in the order of the first filter 121 , the second filter 122 , and the third filter 123 . That is, the filters pass different signal frequency bands.
  • the first filter 121 , the second filter 122 , and the third filter 123 may be bandpass filters having frequency characteristics as shown in FIG. 3 .
  • the band processing unit 120 includes three filters. However, an arbitrary number of filters can be used.
  • the band processing unit 120 acquires the images from the image acquisition unit 110 , and performs filter processing for each region (for example, each pixel) of each of the plurality of images at different focus positions using the first filter 121 , the second filter 122 , and the third filter 123 .
  • the following description will be made assuming that the processing is performed for each pixel. However, the processing may be performed for each region including a plurality of pixels.
  • the band processing unit 120 outputs the result of the filter processing to the band characteristics evaluation unit 130 .
  • the band characteristics evaluation unit 130 calculates a band evaluation value for each pixel of the plurality of images that have undergone the filter processing.
  • the band evaluation value is obtained by, for example, calculating the integrated value of the signals that have passed the filters.
  • the band evaluation value is thus obtained for each pixel and each frequency band in each image.
  • the band characteristics evaluation unit 130 outputs the calculated band evaluation value to the statistical information calculation unit 142 in the effective frequency determination unit 140 and the contrast evaluation unit 152 in the candidate value estimation unit 150 .
  • the statistical information calculation unit 142 in the effective frequency determination unit 140 calculates, for each frequency band, a statistical information value having a relationship to the average of the band evaluation values of the plurality of images at different focus positions. The statistical information will be described later.
  • the statistical information calculation unit 142 outputs the calculated statistical information value to the parameter determination unit 144 .
  • the parameter determination unit 144 in the effective frequency determination unit 140 calculates an effective frequency based on the statistical information value input from the statistical information calculation unit 142 .
  • the parameter determination unit 144 also calculates, based on the effective frequency, a correction parameter used by the data correction unit 162 in the data modification unit 160 and an interpolation parameter used by the data interpolation unit 164 in the data modification unit 160 . Calculation of the correction parameter and the interpolation parameter will be described later.
  • the parameter determination unit 144 outputs the calculated correction parameter to the data correction unit 162 in the data modification unit 160 and the interpolation parameter to the data interpolation unit 164 in the data modification unit 160 .
  • the frequency determination can be done using a filter bank as in this embodiment or data based on frequency analysis by orthogonal basis such as wavelet transformation.
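  • As an aside not taken from the patent, the following sketch illustrates how per-pixel band information could alternatively be obtained from a wavelet decomposition (here with the PyWavelets library); the wavelet choice, decomposition depth, and nearest-neighbour upsampling are assumptions made for illustration.

```python
# Illustrative sketch: per-pixel band-energy maps from a wavelet decomposition,
# as an alternative to a filter bank. Wavelet, level count, and the nearest-
# neighbour upsampling are assumptions, not the patent's implementation.
import numpy as np
import pywt

def wavelet_band_energies(image, wavelet="haar", levels=3):
    """Return one detail-energy map per decomposition level, aligned to the image."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    energies = []
    # coeffs[0] is the approximation; coeffs[1:] holds (cH, cV, cD) per level,
    # ordered from the coarsest (lowest-frequency) to the finest level.
    for cH, cV, cD in coeffs[1:]:
        e = cH ** 2 + cV ** 2 + cD ** 2
        ry = int(np.ceil(image.shape[0] / e.shape[0]))
        rx = int(np.ceil(image.shape[1] / e.shape[1]))
        e_full = np.repeat(np.repeat(e, ry, axis=0), rx, axis=1)
        energies.append(e_full[:image.shape[0], :image.shape[1]])
    return energies
```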
  • the contrast evaluation unit 152 in the candidate value estimation unit 150 evaluates the strength of a high-frequency component for each pixel of the plurality of images based on the band evaluation value input from the band characteristics evaluation unit 130 and calculates a contrast evaluation value. To calculate the contrast evaluation value, the contrast evaluation unit 152 can use one of the plurality of band evaluation values calculated by the band characteristics evaluation unit 130 or the plurality of band evaluation values. The contrast evaluation unit 152 outputs the calculated contrast evaluation value for each pixel of each image to the shape candidate estimation unit 154 .
  • the shape candidate estimation unit 154 provided in the candidate value estimation unit 150 evaluates the in-focus state of each pixel of each of the plurality of images based on the contrast evaluation value input from the contrast evaluation unit 152 .
  • the shape candidate estimation unit 154 selects the best-in-focus image out of the plurality of images having different focal positions for each pixel of the image.
  • the shape candidate estimation unit 154 acquires, from the image acquisition unit 110 , the information of the focal position at which the best-in-focus image was captured, estimates the depth of the sample corresponding to each pixel of the image based on the information, and calculates a shape candidate value that serves as the estimated value of the 3D shape of the object.
  • For a pixel for which the depth of the object could not be estimated based on the contrast evaluation value, the shape candidate estimation unit 154 sets a value representing inestimability as the shape candidate value corresponding to the pixel. The shape candidate estimation unit 154 outputs each calculated shape candidate value to the data correction unit 162 in the data modification unit 160 .
  • the data correction unit 162 provided in the data modification unit 160 performs noise coring for the shape candidate values input from the shape candidate estimation unit 154 to remove noise of the shape candidate values.
  • the data correction unit 162 uses the correction parameters input from the parameter determination unit 144 , as will be described later in detail.
  • the data correction unit 162 outputs, to the data interpolation unit 164 , noise-removed shape candidate values that are shape candidate values having undergone noise removal.
  • the data interpolation unit 164 provided in the data modification unit 160 interpolates data for each pixel having a value representing inestimability out of the noise-removed shape candidate values input from the data correction unit 162 .
  • the data interpolation unit 164 uses the interpolation parameters input from the parameter determination unit 144 , as will be described later in detail.
  • the data interpolation unit 164 outputs, to the 3D shape estimation unit 170 , interpolated shape candidate values that are shape candidate values having undergone noise removal and interpolation of the values of the inestimable pixels.
  • the 3D shape estimation unit 170 optimizes depth information based on the interpolated shape candidate values input from the data interpolation unit 164 , and determines the estimated value of the 3D shape of the object.
  • the 3D shape estimation unit 170 outputs the determined 3D shape of the object to the image synthesis unit 180 .
  • the image synthesis unit 180 creates a synthesized image by combining the plurality of images having different focal positions, based on the 3D shape of the object input from the 3D shape estimation unit 170 and the plurality of images acquired from the image acquisition unit 110 .
  • This synthesized image is, for example, a 3D reconstructed image or an all-in-focus image.
  • the image synthesis unit 180 outputs the created synthesized image to, for example, a display unit to display it, or outputs the synthesized image to, for example, a storage device to store it.
  • step S 101 the image acquisition unit 110 acquires a plurality of images obtained by capturing a single object while changing the focus position. Each of the images is assumed to include information about the depth such as information about the focus position of the optical system at the time of acquiring the image.
  • the image acquisition unit 110 stores the acquired images in the storage unit 114 .
  • step S 102 the band processing unit 120 performs filter processing for each pixel of the plurality of images at different focus positions stored in the storage unit 114 using, for example, the first filter 121 , the second filter 122 , and the third filter 123 .
  • An arbitrary number of filters can be used. Hence, the following description will be made assuming that the band processing unit 120 includes N filters.
  • the band processing unit 120 outputs the result of the filter processing to the band characteristics evaluation unit 130 .
  • the band evaluation value Q(h, f n , i, j) is calculated as, for example, the integrated value of the signals that have passed the filters, which is an amount corresponding to the amplitude in each band the filter passes.
  • the band characteristics evaluation unit 130 outputs the band evaluation value Q(h, f n , i, j) to the statistical information calculation unit 142 and the contrast evaluation unit 152 .
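  • For illustration only (this is not the patent's exact implementation), the sketch below computes band evaluation values Q(h, f n , i, j) for a focus stack with a small bank of difference-of-Gaussian band-pass filters, integrating the magnitude of the passed signal over a small window; the filter widths and window size are assumed values.

```python
# Illustrative sketch (not the patent's implementation) of steps S102-S103:
# apply a small bank of difference-of-Gaussian band-pass filters to every image
# of the focus stack and integrate the passed signal locally to get Q(h, f_n, i, j).
import numpy as np
from scipy import ndimage

def band_evaluation_values(stack, sigmas=(4.0, 2.0, 1.0), window=5):
    """stack: (n_images, rows, cols) focus stack; returns Q with shape
    (n_images, n_bands, rows, cols)."""
    h, rows, cols = stack.shape
    q = np.empty((h, len(sigmas), rows, cols))
    for hi in range(h):
        img = stack[hi].astype(float)
        for n, sigma in enumerate(sigmas):
            # Band-pass variant (cf. FIG. 3): the difference between two Gaussian
            # low-pass outputs isolates one frequency band.
            band = ndimage.gaussian_filter(img, sigma) - ndimage.gaussian_filter(img, 2 * sigma)
            # Integrated magnitude of the passed signal over a small window serves
            # as the per-pixel band evaluation value Q(h, f_n, i, j).
            q[hi, n] = ndimage.uniform_filter(np.abs(band), size=window)
    return q
```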
  • step S 104 the statistical information calculation unit 142 calculates, for each frequency band, a statistical information value representing the average of the band evaluation values Q(h, f n , i, j) of the plurality of images at different focus positions.
  • the statistical information value is represented by, for example, the average L(f n , i, j) given by
  • the statistical information calculation unit 142 outputs the calculated statistical information value to the parameter determination unit 144 .
  • step S 105 the parameter determination unit 144 determines an effective frequency f ⁇ .
  • the effective frequency f ⁇ is determined in, for example, the following way.
  • a variable L N (f n , i, j) is set to 1 or 0 depending on whether the value concerning the average L(f n , i, j) meets a predetermined condition. That is, the variable L N (f n , i, j) is given by, for example,
  • the effective frequency f̂ is determined using the variable L N (f n , i, j). For example, counting is performed from the low frequency side, that is, n is sequentially increased from the low frequency side, and f m-1 relative to the minimum frequency f m meeting the following condition is determined as the effective frequency f̂:
  • Σ_{m=1}^{n} L N (f m , i, j) < n   (3)
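  • A hedged sketch of steps S 104 and S 105 follows. Because the condition defining L N (equation (2)) is not reproduced above, a simple noise-floor test is assumed, and the result is returned as a per-pixel band index rather than a frequency value.

```python
# Hedged sketch of steps S104-S105. The condition defining L_N (equation (2))
# is not reproduced in this text, so a simple noise-floor test is assumed.
import numpy as np

def effective_band_index(q, noise_floor):
    """q: Q[h, n, i, j] band evaluation values; returns the per-pixel index of the
    highest band that is still judged effective (i.e. f_(m-1))."""
    L = q.mean(axis=0)                   # L(f_n, i, j): average over the focus positions
    L_N = (L > noise_floor).astype(int)  # assumed form of the 0/1 indicator L_N
    # Count from the low-frequency side: the effective band is the last index before
    # the cumulative count of effective bands falls behind the band count (condition (3)).
    n_bands = L_N.shape[0]
    ok = np.cumsum(L_N, axis=0) >= np.arange(1, n_bands + 1)[:, None, None]
    lead = ok.cumprod(axis=0).sum(axis=0)   # number of consecutive effective bands
    return np.maximum(lead - 1, 0)          # index of f_(m-1), clamped at band 0
```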
  • step S 106 the parameter determination unit 144 determines correction parameters m, n, and w(k, l) to be used by the data correction unit 162 , and interpolation parameters ⁇ k and ⁇ l to be used by the data interpolation unit 164 based on the effective frequency f ⁇ .
  • the parameter determination unit 144 stores, for example, a lookup table representing the relationship between the effective frequency f ⁇ and correction parameters m, n, and w(k, l).
  • the parameter determination unit 144 determines correction parameters m, n, and w(k, l) based on the effective frequency f ⁇ by looking up the lookup table. The lower the effective frequency f ⁇ is, the larger the values of correction parameters m and n are.
  • As the correction parameter w(k, l), a function that does not decrease the weight when the values m and n are large is given in equation (7), to be described later.
  • the parameter determination unit 144 outputs the determined correction parameters m, n, and w(k, l) to the data correction unit 162 .
  • the dimension of the distance and the dimension of the frequency hold a reciprocal relationship.
  • the parameter determination unit 144 may obtain correction parameters m and n by
  • int is integerization processing
  • C 1 and C 2 are arbitrary coefficients.
  • a function generally having a negative correlation may be used.
  • the parameter determination unit 144 also determines interpolation parameters ⁇ k and ⁇ l to be used by the data interpolation unit 164 based on the effective frequency f ⁇ .
  • the parameter determination unit 144 stores, for example, a lookup table representing the relationship between the effective frequency f̂ and the interpolation parameters σ k and σ l .
  • the parameter determination unit 144 determines the interpolation parameters ⁇ k and ⁇ l based on the effective frequency f ⁇ by looking up the lookup table. The lower the effective frequency f ⁇ is, the larger the values of the interpolation parameters ⁇ k and ⁇ l are.
  • the parameter determination unit 144 outputs the determined interpolation parameters ⁇ k and ⁇ l to the data interpolation unit 164 .
  • the interpolation parameters ⁇ k and ⁇ l represent the variance.
  • the variance has the dimension of the distance.
  • the parameter determination unit 144 may obtain the interpolation parameters ⁇ k and ⁇ l by
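  • The parameter determination of step S 106 can be sketched as follows. Equations (5) and (6) are not reproduced above, so only the reciprocal relationship stated in the text (distance varies inversely with frequency) is implemented, and the constants c1 to c4 below are hypothetical.

```python
# Sketch of step S106 under the reciprocal relation described in the text:
# a lower effective frequency yields a larger reference window (m, n) and larger
# Gaussian spreads (sigma_k, sigma_l). The constants c1..c4 are hypothetical.
def correction_parameters(f_eff, c1=8.0, c2=8.0):
    m = max(1, int(c1 / f_eff))   # half-height of the reference window
    n = max(1, int(c2 / f_eff))   # half-width of the reference window
    return m, n

def interpolation_parameters(f_eff, c3=4.0, c4=4.0):
    sigma_k = c3 / f_eff          # vertical spread of the Gaussian kernel
    sigma_l = c4 / f_eff          # horizontal spread of the Gaussian kernel
    return sigma_k, sigma_l
```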
  • the contrast evaluation unit 152 acquires the band evaluation value Q(h, f n , i, j) from the band characteristics evaluation unit 130 , evaluates the strength of a high-frequency component for each pixel of the plurality of images, and calculates a contrast evaluation value.
  • the contrast evaluation unit 152 outputs the calculated contrast evaluation value for each pixel of each image to the shape candidate estimation unit 154 .
  • step S 108 the shape candidate estimation unit 154 evaluates the in-focus state of each pixel of the plurality of images based on the contrast evaluation value input from the contrast evaluation unit 152 . For example, the higher the contrast is, the higher the shape candidate estimation unit 154 evaluates the degree of in-focus.
  • the shape candidate estimation unit 154 also selects a best-in-focus image from the plurality of images having different focal planes for each pixel of an image.
  • the shape candidate estimation unit 154 acquires, from the image acquisition unit 110 , information of the focus position at the time of capture of the best-in-focus image.
  • the shape candidate estimation unit 154 estimates the depth of the object corresponding to each pixel of the image based on the information acquired from the image acquisition unit 110 , and calculates a shape candidate value P(i, j) that is information about the shape of the object.
  • the shape candidate value P(i, j) represents the depth of the object at, for example, coordinates (i, j). If the depth of the object could not be estimated based on the contrast evaluation value, the shape candidate estimation unit 154 sets a value representing inestimability as the shape candidate value P(i, j) corresponding to the pixel.
  • the shape candidate estimation unit 154 outputs the calculated shape candidate value P(i, j) to the data correction unit 162 in the data modification unit 160 .
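  • Steps S 107 and S 108 can be sketched as follows; the choice of the highest band as the contrast measure and the in-focus threshold are assumptions, since the patent leaves the exact contrast evaluation and in-focus condition open.

```python
# Sketch of steps S107-S108: per pixel, pick the focus position with the highest
# contrast evaluation value and record its depth as the shape candidate P(i, j).
# The contrast measure (highest band) and the threshold below are assumptions.
import numpy as np

INESTIMABLE = np.nan  # marker used in this sketch for "value representing inestimability"

def shape_candidate(q, depths, contrast_threshold=1e-3):
    """q: Q[h, n, i, j] band evaluation values; depths: focus position of each image."""
    contrast = q[:, -1]                          # contrast evaluation value per image and pixel
    best = contrast.argmax(axis=0)               # index of the best-in-focus image per pixel
    p = np.asarray(depths, dtype=float)[best]    # shape candidate value P(i, j)
    p[contrast.max(axis=0) < contrast_threshold] = INESTIMABLE
    return p
```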
  • step S 109 the data correction unit 162 performs noise/isolated point removal processing of removing noise and isolated points from the shape candidate value P(i, j).
  • the noise/isolated point removal processing is performed by coring processing. The noise/isolated point removal processing will be explained with reference to the flowchart shown in FIG. 5 .
  • step S 210 the data correction unit 162 loads the shape candidate value P(i, j).
  • the image is assumed to have a size of (p+1) pixels from 0 to p in the horizontal direction and a size of (q+1) pixels from 0 to q in the vertical direction.
  • step S 220 the data correction unit 162 loads correction parameters m, n, and w(k, l).
  • step S 231 the data correction unit 162 calculates a reference value P ave (i, j, m, n) of a region including (i, j) based on
  • the reference value P ave (i, j, m, n) indicates the average value in this region.
  • correction parameters m, n, and w(k, l) determined by the parameter determination unit 144 are used. That is, equation (7) changes in accordance with the effective frequency f ⁇ .
  • step S 232 the data correction unit 162 determines whether the difference between the shape candidate value P(i, j) and the reference value P ave (i, j, m, n) is smaller than a predetermined threshold. If the difference between the shape candidate value P(i, j) and the reference value P ave (i, j, m, n) is smaller than the predetermined threshold Th r-1 , that is, if |P(i, j) − P ave (i, j, m, n)| < Th r-1 , the process goes to step S 234 .
  • the threshold Th r-1 is defined based on an empirical rule such as a criterion to determine whether the difference falls within the error range of the reference value or not.
  • the data correction unit 162 determines, in step S 233 , whether the shape candidate value P(i, j) is an isolated point. If the shape candidate value P(i, j) is an isolated point, the process goes to step S 234 .
  • Th r-2 is a threshold which is set based on the variance in a predetermined region of a plurality of pixels. More specifically, for example, when the variance is ⁇ , Th r-2 is set as ⁇ 2 ⁇ .
  • step S 234 the data correction unit 162 sets the value of the shape candidate value P(i, j) to the reference value P ave (i, j, m, n).
  • the processes in steps S 231 to S 234 are performed for all pixels. That is, letting ε T be the predetermined threshold, this processing is represented by a noise-removed shape candidate value P′(i, j) that is the shape candidate value after the processing and given by
  • P′(i, j) = P(i, j) if |P(i, j) − P ave (i, j, m, n)| ≥ ε T , and P′(i, j) = P ave (i, j, m, n) if |P(i, j) − P ave (i, j, m, n)| < ε T .  (8)
  • FIG. 6A shows an original signal corresponding to the shape candidate value P(i, j).
  • a moving average corresponding to the average value calculated by equation (7) for the original signal is indicated by the dashed-dotted line in FIG. 6B .
  • a value obtained by adding or subtracting a threshold corresponding to the predetermined threshold ⁇ T to or from the moving average is indicated by a broken line in FIG. 6B .
  • equation (8) when the original signal is located between the two broken lines in FIG. 6B , the original signal is replaced with the moving average indicated by the dashed-dotted line.
  • a result as shown in FIG. 6C is obtained.
  • a circle indicates a value replaced with the moving average.
  • the coring processing has the effect of suppressing a variation component determined as a small amplitude signal and deleting an error.
  • the data correction unit 162 outputs the value obtained by performing the noise/isolated point removal processing described with reference to FIG. 5 for the shape candidate value P(i, j), that is, the noise-removed shape candidate value P′(i, j) to the data interpolation unit 164 .
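  • A minimal sketch of the coring of step S 109 follows. The weights w(k, l) of equation (7) are not reproduced above, so a uniform weighting is assumed, and the isolated-point threshold uses roughly 2σ of the candidate values as described (a global standard deviation is used here for brevity).

```python
# Sketch of step S109 (coring, equation (8)). The weights w(k, l) of equation (7)
# are not reproduced above, so a uniform weighting over the window is assumed.
import numpy as np
from scipy import ndimage

def coring(p, m, n, eps_t, sigma_factor=2.0):
    """p: shape candidate values P(i, j), with NaN marking inestimable pixels."""
    filled = np.nan_to_num(p, nan=0.0)
    mask = (~np.isnan(p)).astype(float)
    win = (2 * m + 1, 2 * n + 1)
    # Reference value P_ave: moving average over a (2m+1) x (2n+1) window, ignoring NaNs.
    p_ave = ndimage.uniform_filter(filled, size=win) / np.maximum(
        ndimage.uniform_filter(mask, size=win), 1e-12)
    diff = np.abs(p - p_ave)
    out = p.copy()
    # Equation (8): small deviations are treated as noise and replaced by P_ave.
    out[diff < eps_t] = p_ave[diff < eps_t]
    # Isolated points: deviations beyond ~2*sigma are also replaced (global std for brevity).
    out[diff > sigma_factor * np.nanstd(p)] = p_ave[diff > sigma_factor * np.nanstd(p)]
    return out
```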
  • step S 110 the data interpolation unit 164 performs interpolation processing, i.e., the data interpolation unit 164 interpolates data whose noise-removed shape candidate value P′(i, j) input from the data correction unit 162 represents inestimability.
  • Inestimability means that the shape candidate estimation unit 154 could not specify the in-focus state of an image when calculating the shape candidate value P(i, j) based on the contrast evaluation value calculated by the contrast evaluation unit 152 . That is, inestimability indicates that the contrast evaluation value of any of a plurality of microscopic images for a pixel of interest does not meet a condition representing a predetermined in-focus state.
  • the data interpolation unit 164 interpolates the inestimable data using neighboring data. At this time, the data interpolation unit 164 can use, for example, bilinear interpolation or bicubic interpolation for the data interpolation.
  • the data interpolation unit 164 interpolates the inestimable data based on a function representing the correlation to neighboring data. That is, the distribution around the inestimable portion is assumed, thereby estimating the value of the portion.
  • kernel regression method is used in interpolation.
  • the data interpolation unit 164 uses the interpolation parameters ⁇ k and ⁇ l input from the parameter determination unit 144 . An example of the interpolation processing will be described with reference to the flowchart of FIG. 7 .
  • step S 310 the data interpolation unit 164 loads the noise-removed shape candidate value P′(i, j).
  • step S 320 the data interpolation unit 164 loads the interpolation parameters ⁇ k and ⁇ l .
  • the data interpolation unit 164 calculates interpolation data R(i, j).
  • the interpolation data R(i, j) is given by
  • R(i, j) = (1/N) Σ_{P′(i+k, j+l) ≠ 0} P′(i+k, j+l) × C(k, l) ,  (9)
  • N is the number of sampling points which is given by
  • C(k, l) is determined in accordance with the interpolation parameters ⁇ k and ⁇ l .
  • B is a variable number.
  • step S 331 the data interpolation unit 164 updates the variable B.
  • step S 332 the data interpolation unit 164 superimposes a Gaussian kernel on the noise-removed shape candidate value P′(i, j) based on equations (9) to (11).
  • step S 333 the data interpolation unit 164 determines whether the value obtained in step S 332 meets a predetermined convergence condition which is, for example, given by
  • Thr is a predetermined threshold. If the value meets the convergence condition, the process goes to step S 340 . On the other hand, if the value does not meet the convergence condition, the processes in steps S 331 to S 333 are repeated up to a predetermined count D. That is, the interpolation data R(i, j) for each variable B is calculated in step S 332 , and it is determined in step S 333 whether the calculated interpolation data R(i, j) meets the convergence condition until the convergence condition is met while changing the value of the variable B in step S 331 .
  • Upon determining in step S 333 that the interpolation data R(i, j) meets the convergence condition, in step S 340 , the data interpolation unit 164 generates expansion data based on the interpolation data R(i, j) that meets the convergence condition. In step S 350 , the data interpolation unit 164 assigns the generated expansion data to the inestimable data of the noise-removed shape candidate values P′(i, j), thereby generating an interpolated shape candidate value P′′(i, j). The data interpolation unit 164 outputs the generated interpolated shape candidate value P′′(i, j) to the 3D shape estimation unit 170 .
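  • The interpolation of step S 110 can be sketched, in a reduced single-pass form, as follows; the iterative search over the variable B in steps S 331 to S 333 and the expansion-data step are omitted, and the neighbourhood radius is an assumed value.

```python
# Single-pass sketch of step S110: fill each inestimable pixel with a Gaussian-
# kernel-weighted average of valid neighbours (kernel regression in its simplest
# form). The convergence loop over the variable B (steps S331-S333) is omitted.
import numpy as np

def interpolate_inestimable(p, sigma_k, sigma_l, radius=7):
    out = p.copy()
    ks = np.arange(-radius, radius + 1)
    ky, kx = np.meshgrid(ks, ks, indexing="ij")
    kernel = np.exp(-(ky**2 / (2 * sigma_k**2) + kx**2 / (2 * sigma_l**2)))  # C(k, l)
    rows, cols = p.shape
    for i, j in zip(*np.where(np.isnan(p))):
        y0, y1 = max(i - radius, 0), min(i + radius + 1, rows)
        x0, x1 = max(j - radius, 0), min(j + radius + 1, cols)
        patch = p[y0:y1, x0:x1]
        w = kernel[y0 - i + radius:y1 - i + radius, x0 - j + radius:x1 - j + radius]
        valid = ~np.isnan(patch)
        if valid.any():
            # Weighted average of the valid neighbours, cf. equation (9).
            out[i, j] = np.sum(patch[valid] * w[valid]) / np.sum(w[valid])
    return out
```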
  • the 3D shape estimation unit 170 optimizes depth information based on the interpolated shape candidate value P′′(i, j) input from the data interpolation unit 164 , and estimates the 3D shape of the object.
  • the 3D shape estimation unit 170 outputs the estimated 3D shape of the sample to the image synthesis unit 180 .
  • the image synthesis unit 180 creates a synthesized image by combining the plurality of images having different focal positions based on the 3D shape of the object input from the 3D shape estimation unit 170 and the plurality of images acquired from the image acquisition unit 110 . If the synthesized image is, for example, a 3D reconstructed image, the synthesized image is created by synthesizing the 3D shape with the in-focus images concerning the respective portions of the 3D shape. If the synthesized image is, for example, an all-in-focus image, images extracted from the images whose focal positions correspond to the depths of the respective pixels are combined, thereby synthesizing an image that is in focus at all pixels. The image synthesis unit 180 outputs the created synthesized image to a display unit or a storage device.
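  • As a simple illustration of the synthesis step (one of several possible strategies, not the patent's exact procedure), an all-in-focus image can be assembled by taking, for every pixel, the image whose focus position is closest to the estimated depth:

```python
# Sketch of an all-in-focus synthesis: for every pixel, copy the value from the
# image whose focus position is nearest to the estimated depth of that pixel.
import numpy as np

def all_in_focus(stack, depths, depth_map):
    """stack: (h, rows, cols) focus stack; depths: focus position of each image;
    depth_map: estimated 3D shape (depth per pixel)."""
    depths = np.asarray(depths, dtype=float)
    idx = np.abs(depth_map[None, :, :] - depths[:, None, None]).argmin(axis=0)
    rows, cols = np.indices(depth_map.shape)
    return stack[idx, rows, cols]
```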
  • the image acquisition unit 110 functions as an image acquisition unit configured to acquire a plurality of images obtained by capturing a single object at different focus positions.
  • the candidate value estimation unit 150 functions as a candidate value estimation unit configured to estimate, for each pixel of the images, a candidate value of a 3D shape based on the plurality of images.
  • the band characteristics evaluation unit 130 functions as a band characteristics evaluation unit configured to calculate, for each pixel of the plurality of images, the band evaluation value of a band included in the images for each of a plurality of frequency bands.
  • the effective frequency determination unit 140 functions as an effective frequency determination unit configured to determine the effective frequency of the pixel based on statistical information of the band evaluation value.
  • the data modification unit 160 functions as a candidate value modification unit configured to perform at least one of data correction and data interpolation for the candidate value based on the effective frequency and calculate a modified candidate value representing the 3D shape of the object.
  • the data correction unit 162 functions as a modified candidate value calculation unit configured to calculate a modified candidate value using correlation of the value of a local region represented by the candidate value.
  • the 3D shape estimation unit 170 functions as an all-in-focus image creation unit or a 3D reconstructed image creation unit.
  • correction parameters m and n used in the noise/isolated point removal processing are determined based on the effective frequency f ⁇ of the images.
  • When the effective frequency f̂ is low, the reference value P ave (i, j, m, n) is calculated based on the shape candidate values P(i, j) in a wider region.
  • When the effective frequency f̂ is high, the reference value P ave (i, j, m, n) is calculated based on the shape candidate values P(i, j) in a narrower region. That is, the optimum reference value P ave (i, j, m, n) is calculated in accordance with the effective frequency f̂ of the images.
  • the shape candidate values P(i, j) are not excessively smoothed. Even if many noise components exist, the input signal is not excessively evaluated as a high frequency signal.
  • In the interpolation processing of the data interpolation unit 164 , information of the effective frequency f̂ of the images is used when assuming the correlation of neighboring data. That is, an optimized Gaussian kernel corresponding to the frequency band can be generated, and the value of the depth of the object at a position, which is inestimable based on the contrast evaluation value, can be estimated.
  • the interpolation parameters ⁇ k and ⁇ l are given based on the effective frequency f ⁇ . It is therefore possible to increase the processing speed due to the small calculation amount and prevent the calculation result from converging to an incorrect value by comparison with a case in which the convergence value is searched for while changing the values of the interpolation parameters.
  • When the effective frequency f̂ is low, the interpolation data R(i, j) is calculated based on the noise-removed shape candidate values P′(i, j) in a wider region.
  • When the effective frequency f̂ is high, the interpolation data R(i, j) is calculated based on the noise-removed shape candidate values P′(i, j) in a narrower region. That is, the noise-removed shape candidate values P′(i, j) are not excessively smoothed. Edge structure evaluation is appropriately done. Even if many noise components exist, the input signal is not excessively evaluated as a high frequency signal.
  • The above equations are merely examples. Other equations may of course be used as long as the above-described effects can be obtained.
  • polynomials of real number order, logarithmic functions, or exponential functions are usable in place of equations (5) and (6).
  • a variance or the like is also usable in place of equation (1).
  • the processing is performed for each pixel. However, the processing may be performed for each region including a plurality of pixels.
  • the interpolation parameters ⁇ k and ⁇ l are set to ⁇ k and ⁇ l in equation (11), and these values remain unchanged in the loop processing of steps S 331 to S 333 described with reference to FIG. 7 .
  • the convergence value is searched for while changing ⁇ k and ⁇ l as well in step S 331 .
  • the parameter determination unit 144 outputs a range or probability density function capable of setting the interpolation parameters ⁇ k and ⁇ l to the data interpolation unit 164 .
  • the data interpolation unit 164 searches for the convergence value while changing ⁇ k and ⁇ l as well based on the range or probability density function capable of setting the interpolation parameters ⁇ k and ⁇ l and input from the parameter determination unit 144 .
  • the rest of the operation is the same as in the first embodiment.
  • the interpolation data R(i, j) can converge to a convergence value more suitable than in the first embodiment.
  • the parameter determination unit 144 determines the range or probability density function capable of setting the interpolation parameters ⁇ k and ⁇ l based on the effective frequency f ⁇ of the images. Hence, the same effects as in the first embodiment can be obtained.
  • In the second embodiment, the data correction unit 162 uses a bilateral filter to remove noise.
  • the bilateral filter used in this embodiment is expressed as
  • C(k, l) is a factor that specifies the distance correlation
  • S(P 1 ⁇ P 2 ) is a factor that specifies correlation resulting from the pixel level difference between different pixels.
  • the sharpness and the signal-to-noise ratio of a generated image change depending on what kind of distribution function is used for C(k, l) and S(P 1 ⁇ P 2 ).
  • Functions based on a Gaussian distribution are used for C(k, l) and S(P 1 −P 2 ). That is, C(k, l) is given by, for example,
  • ⁇ p is a correction parameter
  • C 6 is a predetermined constant.
  • The parameter determination unit 144 also determines the correction parameter σ p based on the effective frequency f̂ of the images by looking up a lookup table. The lower the effective frequency f̂ is, the larger the value of the correction parameter σ p is.
  • the parameter determination unit 144 may obtain the correction parameter ⁇ p using an Mth-order (M is an integer greater than 0) polynomial as given by
  • the original sharpness of the images can be estimated.
  • C(k, l) is set so as to emphasize long-distance correlation
  • S(P 1 ⁇ P 2 ) is set based on the assumption that no abrupt step is generated with respect to neighboring data.
  • S(P 1 ⁇ P 2 ) functions as first correlation that is correlation between the values of two points spaced apart.
  • C(k, l) functions as second correlation that is correlation by the distance.
  • information of the original frequency band of the images is used when assuming the correlation of neighboring data.
  • the bilateral filter is set based on the correlation of neighboring data. According to this embodiment, it is consequently possible to acquire a noise-removed shape candidate value P′(i, j) by effectively reducing noise and errors of a shape candidate value P(i, j).
  • correction parameters ⁇ k , ⁇ l , and ⁇ p may be set as a probability density function, as in the modification of the first embodiment. In this case as well, the same effects as in this embodiment can be obtained.
  • In a modification of this embodiment, the data correction unit 162 uses a trilateral filter to remove noise.
  • the trilateral filter used in this modification is expressed as
  • N(i, j, k, l) is given by
  • N(i, j, k, l) = 1 if |U(i+k, j+l) − U(i, j)| < Thr, and 0 otherwise.  (19)
  • U(i, j) i is the horizontal component of the gradient
  • U(i, j) j is the vertical component of the gradient
  • This trilateral filter applies the bilateral filter used in the second embodiment to the gradient ∇P(i, j). Introducing ∇P(i, j) makes it possible to strongly suppress impulse noise, that is, an isolated variation component.
  • the third embodiment of the present invention will be described. Points of difference from the first embodiment will be explained here. The same reference numbers denote the same parts, and a description thereof will be omitted.
  • the third embodiment shows a microscope system 200 comprising the image processing system 100 according to the first embodiment.
  • FIG. 8 shows the outline of an example of the configuration of the microscope system 200 according to this embodiment.
  • the microscope system 200 includes a microscope 210 and the image processing system 100 according to the first embodiment.
  • the microscope 210 is, for example, a digital microscope.
  • the microscope 210 includes an LED light source 211 , an illumination optical system 212 , an optical path control element 213 , an objective lens 214 , a sample surface 215 placed on a stage (not shown), an observation optical system 218 , an imaging plane 219 , an imaging unit 220 , and a controller 222 .
  • the observation optical system 218 includes a zoom optical system 216 and an imaging optical system 217 .
  • the objective lens 214 , the optical path control element 213 , the zoom optical system 216 , and the imaging optical system 217 are arranged in this order on the observation optical path from the sample surface 215 to the imaging plane 219 .
  • Illumination light emitted by the LED light source 211 enters the optical path control element 213 via the illumination optical system 212 .
  • the optical path control element 213 reflects the illumination light toward the objective lens 214 on the observation optical path.
  • the illumination light irradiates a sample placed on the sample surface 215 via the objective lens 214 .
  • When irradiated with the illumination light, the sample generates observation light.
  • the observation light is reflected light, fluorescence, or the like.
  • the observation light enters the optical path control element 213 .
  • the optical path control element 213 passes the observation light and makes it enter the observation optical system 218 including the zoom optical system 216 and the imaging optical system 217 .
  • the optical path control element 213 is an optical element that reflects or passes incident light in accordance with its characteristic.
  • a polarizer such as a wire grid or a polarizing beam splitter (PBS), which reflects or passes incident light in accordance with its polarization direction is used.
  • a dichroic mirror that reflects or passes incident light in accordance with its frequency may be used.
  • the observation optical system 218 condenses the observation light on the imaging plane 219 , and forms an image of the sample on the imaging plane 219 .
  • the imaging unit 220 generates an image signal based on the image formed on the imaging plane 219 , and outputs the image signal as a microscopic image to the image acquisition unit 110 .
  • the controller 222 controls the operations of the microscope 210 .
  • the microscope 210 acquires a plurality of microscopic images of a single sample captured on different focal planes.
  • the controller 222 causes the imaging unit 220 to acquire the image of the sample on each focal plane while controlling the optical system of the microscope 210 to gradually change the focal plane.
  • the controller 222 causes the imaging unit 220 to acquire each image while changing the height of the stage or the height of the objective lens of the microscope 210 .
  • the controller 222 outputs the acquired images and the information about the focal position which is associated with the images to the image acquisition unit 110 .
  • the operation of the microscope system 200 will be described.
  • The sample is placed on the stage (not shown), whereby the sample surface 215 is set.
  • the controller 222 controls the microscope 210 .
  • the controller 222 gradually changes the focal position of the optical system for the sample by, for example, gradually changing the position of the sample surface 215 in the optical axis direction. More specifically, for example, the controller 222 changes the height of the stage, the height of the objective lens, or the position of the focus lens of the microscope 210 .
  • the controller 222 causes the imaging unit 220 to sequentially acquire a microscopic image of the sample at each focal position.
  • the image acquisition unit 110 acquires a microscopic image of a sample at each focus position from the imaging unit 220 .
  • the image acquisition unit 110 also acquires, from the controller 222 , the focus position at the time of capture of each image.
  • the image acquisition unit 110 stores the acquired microscopic image in a storage unit 114 in association with the focus position.
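  • The acquisition loop run by the controller 222 can be pictured with the following hypothetical sketch; the microscope object and its set_stage_height and capture methods are invented names used only for illustration.

```python
# Hypothetical sketch of the z-stack acquisition loop: step the focal position,
# capture an image at each step, and return the stack with its focus positions.
# The microscope object and its method names are invented for illustration only.
def acquire_focus_stack(microscope, z_start, z_step, n_images):
    stack, focus_positions = [], []
    for k in range(n_images):
        z = z_start + k * z_step
        microscope.set_stage_height(z)      # or move the objective / focus lens instead
        stack.append(microscope.capture())  # microscopic image at this focal plane
        focus_positions.append(z)
    return stack, focus_positions
```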
  • the microscope system 200 creates a synthesized image, for example, a 3D reconstructed image or an all-in-focus image concerning the microscopic image.
  • An image synthesis unit 180 outputs the created synthesized image to, for example, a display unit to display it or a storage device to store it. According to the 3D reconstructed image or all-in-focus image, the user can easily recognize an object image having a depth larger than the depth of field, unlike a general microscopic image.
  • the illumination optical system 212 , the optical path control element 213 , the objective lens 214 , the observation optical system 218 , and the like function as a microscope optical system.
  • the imaging unit 220 functions as an imaging unit configured to acquire an image of a sample via the microscope optical system as a sample image.
  • the image enlargement ratio of the optical system of a microscope is higher than that of the optical system of a digital camera.
  • the band of the optical system of the microscope is sometimes not much higher than the sampling band of the image sensor of the camera in micrography.
  • the band of the optical system can change depending on the numerical aperture, magnification, and the like of the optical system. For example, when the microscope has an optical zoom system, the band of the optical system changes as well.
  • the statistical information calculation unit 142 calculates a statistical information value in consideration of the frequency band of the image.
  • the parameter determination unit 144 calculates the correction parameter and the interpolation parameter based on the statistical information value.
  • the image processing system 100 is the image processing system according to the first embodiment.
  • the second embodiment or a modification thereof may be used.
  • FIG. 9 shows the outline of an example of the configuration of an image processing system 300 according to this embodiment.
  • the image processing system 300 comprises an image acquisition unit 310 , a band processing unit 320 , a band characteristics evaluation unit 330 , a statistical information calculation unit 340 , a weighting factor calculation unit 350 , a contrast evaluation unit 360 , an in-focus evaluation unit 370 , a 3D shape estimation unit 380 and an image synthesis unit 390 .
  • the image acquisition unit 310 includes a storage unit 314 .
  • the image acquisition unit 310 acquires a plurality of images obtained by capturing a single object while changing the focus position and stores them in the storage unit 314 .
  • Each of the images is assumed to include information about the focus position of the optical system at the time of image acquisition, that is, information about the depth of the in-focus positions.
  • the image acquisition unit 310 outputs the images in response to requests from the band processing unit 320 and the image synthesis unit 390 .
  • the band processing unit 320 has a filter bank. That is, the band processing unit 320 includes, for example, a first filter 321 , a second filter 322 , and a third filter 323 .
  • the frequency characteristics of the first filter 321 , the second filter 322 , and the third filter 323 are, for example, as described above with reference to FIG. 2 .
  • the first filter 321 , the second filter 322 , and the third filter 323 may be bandpass filters having frequency characteristics as shown in FIG. 3 . Any other filters may be used as long as the plurality of filters are designed to pass different frequency bands.
  • the band processing unit 320 includes three filters. However, an arbitrary number of filters can be used.
  • the band processing unit 320 acquires the images from the image acquisition unit 310 , and performs filter processing for each region (for example, each pixel) of each of the plurality of images at different focus positions using the first filter 321 , the second filter 322 , and the third filter 323 .
  • the band processing unit 320 outputs the result of the filter processing to the band characteristics evaluation unit 330 .
  • the band characteristics evaluation unit 330 calculates a band evaluation value for each pixel of the plurality of images that have undergone the filter processing.
  • the band evaluation value is obtained by, for example, calculating the integrated value of the signals that have passed the filters.
  • the band evaluation value is thus obtained for each pixel and each frequency band in each image.
  • the band characteristics evaluation unit 330 outputs the calculated band evaluation value to the statistical information calculation unit 340 and the contrast evaluation unit 360 .
  • the statistical information calculation unit 340 calculates, for each frequency band, a statistical information value related to the average of the band evaluation values of the plurality of images at different focus positions. The statistical information will be described later.
  • the statistical information calculation unit 340 outputs the calculated statistical information value to the weighting factor calculation unit 350 .
  • the weighting factor calculation unit 350 calculates a value concerning weighting, that is, a weighting factor for each frequency band based on the statistical information value input from the statistical information calculation unit 340 .
  • the weighting factor will be described later.
  • the weighting factor calculation unit 350 outputs the calculated weighting factor to the contrast evaluation unit 360 .
  • the contrast evaluation unit 360 multiplies the band evaluation value input from the band characteristics evaluation unit 330 by the weighting factor of the corresponding band input from the weighting factor calculation unit 350 , thereby calculating a contrast evaluation value.
  • the contrast evaluation unit 360 outputs the calculated contrast evaluation value to the in-focus evaluation unit 370 .
  • the in-focus evaluation unit 370 evaluates the in-focus state of each region of each of the plurality of images at different focus positions.
  • the in-focus evaluation unit 370 selects an in-focus image for each region and estimates, based on the information of the focus position at the time of capture of the image, depth information corresponding to each region of the image.
  • the in-focus evaluation unit 370 outputs the depth information of each region of the image to the 3D shape estimation unit 380 .
  • the 3D shape estimation unit 380 optimizes the depth information input from the in-focus evaluation unit 370 , and determines the estimated value of the 3D shape of the object.
  • the 3D shape estimation unit 380 outputs the estimated 3D shape of the object to the image synthesis unit 390 .
  • the image synthesis unit 390 synthesizes a synthesized image by combining the plurality of images having different focal position based on the 3D shape of the object input from the 3D shape estimation unit 380 and the plurality of images acquired from the image acquisition unit 310 .
  • This synthesized image is, for example, a 3D reconstructed image or an all-in-focus image.
  • the image synthesis unit 390 outputs the created synthesized image to, for example, a display unit to display it, or outputs the synthesized image to, for example, a storage device to store it.
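As a rough end-to-end illustration of the configuration just described (band evaluation values, weighting factors, contrast evaluation, in-focus selection, and all-in-focus synthesis), the following minimal Python sketch uses assumed names and the simple averaging/normalization scheme of the first example described later in this section; it is not the patent's code.

```python
import numpy as np

def synthesize_all_in_focus(stack, Q, eps=1e-12):
    """stack: (K, H, W) images at K focus positions; Q: (K, N, H, W) band evaluation values."""
    # Statistical information: average of Q over the focus positions, per band and pixel.
    L = Q.mean(axis=0)                                  # (N, H, W)
    # Weighting factors: per-band averages normalized by the sum over the bands.
    weights = L / (L.sum(axis=0, keepdims=True) + eps)  # (N, H, W)
    # Contrast evaluation value: weighted sum of the band evaluation values.
    D = (Q * weights[None]).sum(axis=1)                 # (K, H, W)
    # In-focus evaluation: the focus index with the highest contrast per pixel
    # serves as (discrete) depth information.
    depth = D.argmax(axis=0)                            # (H, W)
    # All-in-focus image: take each pixel from its best-focused frame.
    i, j = np.indices(depth.shape)
    return stack[depth, i, j], depth
```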
  • In step S401, the image acquisition unit 310 acquires a plurality of images obtained by capturing a single object while changing the focus position.
  • Each of the images is assumed to include information about the depth (for example, information about the focus position of the optical system at the time of acquiring the image).
  • the image acquisition unit 310 stores the acquired images in the storage unit 314 .
  • In step S402, the band processing unit 320 performs filter processing for each area (for example, each pixel) of the plurality of images at different focus positions stored in the storage unit 314 using, for example, the first filter 321 , the second filter 322 , and the third filter 323 .
  • An arbitrary number of filters can be used. Hence, the following description will be made assuming that the band processing unit 320 includes N filters.
  • the band processing unit 320 outputs the result of the filter processing to the band characteristics evaluation unit 330 .
  • In step S403, the band characteristics evaluation unit 330 calculates, for each frequency band f n (n=1, 2, . . . , N), a band evaluation value Q(k, f n , i, j) for each focus position k and each pixel (i, j). The band evaluation value Q(k, f n , i, j) is calculated as, for example, the integrated value of the signals that have passed the filters, which is an amount corresponding to the amplitude in each band the filter passes.
  • the band characteristics evaluation unit 330 outputs the band evaluation value Q(k, f n , i, j) to the statistical information calculation unit 340 .
  • In step S404, the statistical information calculation unit 340 calculates, for each frequency band, a statistical information value related to the average of the band evaluation values Q(k, f n , i, j) of the plurality of images at different focus positions.
  • As the statistical information value, various values calculated by various methods are usable, as will be described later.
  • the statistical information calculation unit 340 outputs the calculated statistical information value to the weighting factor calculation unit 350 .
  • In step S405, the weighting factor calculation unit 350 calculates a weighting factor corresponding to each band based on the statistical information value input from the statistical information calculation unit 340 .
  • As for the weighting factor as well, various values calculated by various methods are usable, as will be described later.
  • the weighting factor calculation unit 350 outputs the calculated weighting factor to the contrast evaluation unit 360 .
  • In step S406, the contrast evaluation unit 360 multiplies the band evaluation value Q(k, f n , i, j) input from the band characteristics evaluation unit 330 by the weighting factor of the corresponding frequency band out of the weighting factors input from the weighting factor calculation unit 350 , thereby calculating a contrast evaluation value.
  • the contrast evaluation unit 360 outputs the calculated contrast evaluation value to the in-focus evaluation unit 370 .
  • In step S407, the in-focus evaluation unit 370 evaluates an in-focus state based on the contrast evaluation value acquired from the contrast evaluation unit 360 .
  • the in-focus evaluation unit 370 specifies, for each of the plurality of images at different focus positions, a region where the contrast evaluation value is higher than a predetermined threshold as an in-focus region.
  • the in-focus evaluation unit 370 estimates depth information for the point corresponding to each in-focus region, based on the in-focus regions selected out of the plurality of images at different focus positions and the information about the focus position at the time of acquiring the image that includes the region.
  • the depth information is, for example, a value representing the position of the region in the depth direction.
  • the in-focus evaluation unit 370 outputs the depth information of each region to the 3D shape estimation unit 380 .
  • In step S408, the 3D shape estimation unit 380 performs optimization, such as smoothing, of the depth information input from the in-focus evaluation unit 370 , and estimates the 3D shape of the object; a minimal sketch of such smoothing is shown below.
  • the 3D shape estimation unit 380 outputs the estimated 3D shape of the sample to the image synthesis unit 390 .
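The "optimization such as smoothing" in step S408 is not specified in detail here; as one minimal, assumed illustration, a median filter over the per-pixel depth values suppresses isolated outliers.

```python
from scipy.ndimage import median_filter

def optimize_depth(depth_map, size=5):
    # A median filter suppresses isolated outliers while largely preserving edges.
    return median_filter(depth_map, size=size)
```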
  • In step S409, the image synthesis unit 390 synthesizes a synthesized image by combining the plurality of images having different focal positions based on the 3D shape of the object input from the 3D shape estimation unit 380 . If the synthesized image is, for example, a 3D reconstructed image, the synthesized image is created by synthesizing the 3D shape with the in-focus images concerning the respective portions of the 3D shape. If the synthesized image is, for example, an all-in-focus image, pixel values extracted from the images whose focal positions correspond to the depths of the respective pixels are combined, thereby synthesizing an image that is in focus at all pixels. The image synthesis unit 390 outputs the created synthesized image to a display unit or a storage device.
  • the image acquisition unit 310 functions as an image acquisition unit configured to acquire a plurality of images obtained by capturing a single object at different focus positions.
  • the band characteristics evaluation unit 330 functions as a band characteristics evaluation unit configured to calculate, for each pixel of images, the band evaluation value of a band included in the image for each of a plurality of frequency bands.
  • the statistical information calculation unit 340 functions as a statistical information calculation unit configured to calculate, for at least each of the plurality of the frequency bands, statistical information using the band evaluation values of at least two focus positions.
  • the weighting factor calculation unit 350 functions as a weighting factor calculation unit configured to calculate, for at least each of the plurality of the frequency bands, weighting factors corresponding to the band evaluation values based on the statistical information.
  • the contrast evaluation unit 360 functions as a contrast evaluation unit configured to calculate a contrast evaluation value for each region including at least one pixel in the plurality of images based on the band evaluation values and the weighting factors.
  • the in-focus evaluation unit 370 functions as an in-focus evaluation unit configured to select an in-focus region out of the regions of the plurality of images based on the contrast evaluation values.
  • the image synthesis unit 390 functions as an all-in-focus image creation unit or a 3D reconstructed image creation unit.
  • the band characteristics evaluation unit 330 performs filter processing.
  • the contrast evaluation unit 360 calculates a contrast evaluation value based on a band evaluation value obtained as the result of the filter processing.
  • a contrast evaluation value representing more accurate contrast evaluation is obtained using a filter having a strong response at high frequencies.
  • the statistical information calculation unit 340 calculates a statistical information value in consideration of the frequency band of the image.
  • the weighting factor calculation unit 350 calculates a weighting factor based on the statistical information value.
  • the weighting factor is determined in consideration of the frequency band of the image.
  • the contrast evaluation unit 360 determines the contrast evaluation value based on the band evaluation value calculated by the band characteristics evaluation unit 330 and the weighting factor calculated by the weighting factor calculation unit 350 . It is therefore possible to determine a more accurate contrast evaluation value as compared to a case in which the frequency band of the image is not taken into consideration. This allows the image processing system 300 to accurately create the 3D reconstructed image or all-in-focus image.
  • the image processing system 300 is particularly effective when used for a microscopic image captured by a microscope having a shallow depth of field.
  • Detailed examples of the statistical information value calculated in step S404, the weighting factor calculated in step S405, and the contrast evaluation value calculated in step S406 will be described next.
  • In the first example, the statistical information value calculated in step S404 is the average L(f n , i, j) of the band evaluation values Q(k, f n , i, j) at all focus positions for each of the regions and frequency bands (equation (22)). The weighting factor is a value obtained by dividing the average for each of the regions and frequency bands by the sum of the averages for all frequency bands. That is, a weighting factor L N (f m , i, j) is calculated by equation (23).
  • the contrast evaluation value D(k, i, j) is the sum of the products of the band evaluation value Q(k, f n , i, j) and the weighting factor L N (f m , i, j) of the respective frequency bands (equation (24)).
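Equations (22) to (24) are not reproduced in this excerpt. Based on the surrounding description, they presumably take forms along the following lines (a reconstruction, not a quotation of the original, with K denoting the number of focus positions):

$L(f_n, i, j) = \frac{1}{K} \sum_{k=1}^{K} Q(k, f_n, i, j) \qquad (22)$

$L_N(f_m, i, j) = \frac{L(f_m, i, j)}{\sum_{f_n=f_1}^{f_N} L(f_n, i, j)} \qquad (23)$

$D(k, i, j) = \sum_{f_n=f_1}^{f_N} L_N(f_n, i, j)\, Q(k, f_n, i, j) \qquad (24)$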
  • In step S407, the in-focus evaluation unit 370 selects, for example, the k that makes the contrast evaluation value D(k, i, j) highest for each region (i, j) and estimates the depth information.
  • Note that when the weighting factor L N (f m , i, j) is calculated, the average is divided by the sum of the averages for all frequency bands, as indicated by equation (23). However, the weighting factor L N (f m , i, j) may be obtained by dividing the average by the sum of the averages not for all frequency bands but for some frequency bands.
  • In the second example, the statistical information value calculated in step S404 is the average of the band evaluation values Q(k, f n , i, j) at all focus positions for each of the regions and frequency bands, as in the first example. That is, the average L(f n , i, j) is calculated by equation (22). In this example, the average L(f n , i, j) is used as the weighting factor.
  • the contrast evaluation value D(k, i, j) is then calculated as the sum of the products of the band evaluation value Q(k, f n , i, j) and the average L(f n , i, j) of the respective frequency bands.
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j).
  • In the third example, the statistical information value calculated in step S404 is the average of the band evaluation values Q(k, f n , i, j) at all focus positions for each of the regions and frequency bands, as in the first example. That is, the average L(f n , i, j) is calculated by equation (22). In this example, a relative value of the average L(f n , i, j) to a predetermined frequency band f 0 is used as the weighting factor. That is, the weighting factor L N (f m , i, j) is calculated by dividing the average for the band of interest by the average for the predetermined frequency band f 0 .
  • the contrast evaluation value D(k, i, j) is calculated by equation (24) using the weighting factor L N (f m , i, j).
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j).
  • In the fourth example, the statistical information value calculated in step S404 is the average of the band evaluation values at all focus positions for each of the regions and frequency bands, as in the first example. That is, the average L(f n , i, j) is calculated by equation (22).
  • In this example, the weighting factor is set to 1 or 0 depending on whether a predetermined condition is met. That is, whether to use the band evaluation value Q(k, f n , i, j) is determined in accordance with whether the condition is met.
  • For example, the weighting factor is set to 1 when the value obtained by dividing the average L(f n , i, j) by the sum of the averages for all frequency bands exceeds a predetermined threshold, and to 0 otherwise (equation (27)).
  • the contrast evaluation value D(k, i, j) is calculated by equation (24) as in the first example.
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j).
  • The judgment when determining the weighting factor may be done using, instead of the value obtained by dividing the average L(f n , i, j) by the sum of the averages for all frequency bands as in equation (27), the average L(f n , i, j) itself, a value obtained by dividing the average L(f n , i, j) by the sum of the averages for arbitrary frequency bands, or the averages for arbitrary frequency bands.
  • Since the weighting factor L N (f m , i, j) is calculated for each region, it is particularly effective when the band characteristic is not constant among the regions of an image.
  • the average L(f n , i, j) as a statistical information value is calculated for each frequency band.
  • When the average L(f n , i, j) is small, information necessary for evaluating the contrast is not included in the band evaluation value Q(k, f n , i, j), or the band evaluation value Q(k, f n , i, j) includes noise.
  • the weight for the band evaluation value Q(k, f n , i, j) that does not include the necessary information or includes noise is made small. It is therefore possible to prevent the band evaluation value Q(k, f n , i, j) that, for example, does not include the necessary information from affecting the contrast evaluation value. As a result, an accurate contrast evaluation value is generated, and accurate depth information estimation is implemented based on that contrast evaluation value.
  • In the fifth example, the statistical information value calculated in step S404 is a value representing the variation of the band evaluation values Q(k, f n , i, j) at all focus positions for each of the regions and frequency bands; a variance is used as an example of the variation. That is, a variance S(f n , i, j) is calculated, using, for example, the average L(f n , i, j) calculated by equation (22), by equation (28).
  • the weighting factor is a value obtained by dividing the variance for each of the regions and frequency bands by the sum of the variances for all frequency bands. That is, a weighting factor S N (f m , i, j) is calculated by equation (29).
  • the contrast evaluation value D(k, i, j) is calculated by equation (30), that is, as the sum of the products of the band evaluation value Q(k, f n , i, j) and the weighting factor S N (f m , i, j) of the respective frequency bands.
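Equations (28) to (30) are likewise not reproduced in this excerpt; based on the description, plausible forms are (a reconstruction under the assumption of K focus positions):

$S(f_n, i, j) = \frac{1}{K} \sum_{k=1}^{K} \bigl( Q(k, f_n, i, j) - L(f_n, i, j) \bigr)^{2} \qquad (28)$

$S_N(f_m, i, j) = \frac{S(f_m, i, j)}{\sum_{f_n=f_1}^{f_N} S(f_n, i, j)} \qquad (29)$

$D(k, i, j) = \sum_{f_n=f_1}^{f_N} S_N(f_n, i, j)\, Q(k, f_n, i, j) \qquad (30)$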
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j). Note that in this example, when calculating the weighting factor S N (f m , i, j), the variance is divided by the sum of the variances for all frequency bands, as indicated by equation (29). However, the weighting factor S N (f m , i, j) may be obtained by dividing the variance by the sum of the variances not for all frequency bands but for some frequency bands.
  • In the sixth example, the statistical information value calculated in step S404 is the variance of the band evaluation values Q(k, f n , i, j) at all focus positions for each of the regions and frequency bands, as in the fifth example. That is, the variance S(f n , i, j) is calculated by equation (28). In this example, the variance S(f n , i, j) is used as the weighting factor.
  • the contrast evaluation value D(k, i, j) is then calculated as the sum of the products of the band evaluation value Q(k, f n , i, j) and the variance S(f n , i, j) of the respective frequency bands.
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j).
  • In the seventh example, the statistical information value calculated in step S404 is the variance of the band evaluation values Q(k, f n , i, j) at all focus positions for each of the regions and frequency bands, as in the fifth example. That is, the variance S(f n , i, j) is calculated by equation (28). In this example, a relative value of the variance S(f n , i, j) to the predetermined frequency band f 0 is used as the weighting factor. That is, the weighting factor S N (f m , i, j) is calculated by dividing the variance for the band of interest by the variance for the predetermined frequency band f 0 .
  • the contrast evaluation value D(k, i, j) is calculated by equation (30) using the weighting factor S N (f m , i, j).
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j).
  • Since the weighting factor S N (f m , i, j) is calculated for each region, it is particularly effective when the band characteristic is not constant among the regions of an image.
  • the variance S(f n , i, j) as a statistical information value is calculated for each frequency band.
  • When information necessary for evaluating the contrast is not included in the band evaluation value Q(k, f n , i, j), or the band evaluation value Q(k, f n , i, j) includes noise, the variance S(f n , i, j), that is, the variation, becomes relatively small.
  • the weight for the band evaluation value Q(k, f n , i, j) that does not include the necessary information or includes noise becomes small. It is therefore possible to prevent the band evaluation value Q(k, f n , i, j) that, for example, does not include the necessary information from affecting the contrast evaluation value. As a result, an accurate contrast evaluation value is generated, and accurate depth information estimation is implemented based on that contrast evaluation value.
  • The weighting factor may also be set to 1 or 0 depending on whether a predetermined condition is met, as in the fourth example. That is, whether to use the band evaluation value Q(k, f n , i, j) is determined in accordance with whether the condition is met. In this case as well, the same effect as in the fifth to seventh examples can be obtained.
  • In the examples described above, the average L(f n , i, j) or the variance S(f n , i, j) is determined for each region as the statistical information value.
  • In this example, in contrast, the statistical information value is the average over the whole image region A for each band. That is, the average L(f n ) is obtained by averaging the band evaluation values Q(k, f n , i, j) over the whole image region A.
  • the weighting factor is a value obtained by dividing the average L(f n ) by the sum of the averages L(f n ) for all frequency bands. That is, a weighting factor L N (f m ) is calculated by equation (34).
  • the contrast evaluation value D(k, i, j) is then calculated as the sum of the products of the band evaluation value Q(k, f n , i, j) and the weighting factor L N (f n ) of the respective frequency bands.
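The equations for these whole-image statistics are also missing from this excerpt; plausible reconstructions (with A the whole image region, K the number of focus positions, and only the number (34) confirmed by the note that follows) are:

$L(f_n) = \frac{1}{K\,\lvert A \rvert} \sum_{k=1}^{K} \sum_{(i, j) \in A} Q(k, f_n, i, j)$

$L_N(f_m) = \frac{L(f_m)}{\sum_{f_n=f_1}^{f_N} L(f_n)} \qquad (34)$

$D(k, i, j) = \sum_{f_n=f_1}^{f_N} L_N(f_n)\, Q(k, f_n, i, j)$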
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j).
  • Note that the average may be divided not by the sum of the averages for all frequency bands as indicated by equation (34), but by the sum of the averages for some frequency bands or by the average for a specific frequency band.
  • L(f m ) may be used in place of L N (f m ).
  • L N (f m ) may be 1 or 0.
  • Alternatively, the average of the variances S(f n , i, j) over the whole region A of the image may be used.
  • In a modification of the fourth embodiment, the band processing unit 320 executes wavelet transformation instead of having a filter bank.
  • filter processing having a specific directivity is performed for an original image as shown on the left side of FIG. 11 , thereby acquiring images A, B, and C after band separation, as shown on the right side of FIG. 11 .
  • the filter processing having the specific directivity is performed again for an image obtained by reducing the filter residual image, thereby acquiring images D, E, and F.
  • Such processing is repeated to acquire images G, H, and I and images J, K, L, and M.
  • image data represented by multi-resolution is created, as shown on the right side of FIG. 11 .
  • an amount corresponding to the gain of a specific band is associated with each region of the image, as in the fourth embodiment.
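To make the band-separation idea concrete, the following minimal Python sketch performs one stage of a 2-D Haar transform (an assumed choice; the patent does not fix the wavelet) and repeats it on the reduced residual, yielding the multi-resolution data described above. Image sides are assumed divisible by 2 at every level.

```python
import numpy as np

def haar_step(img):
    # Split the image into its four 2x2 phases.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0          # low-pass residual, half the size
    details = ((a - b + c - d) / 2.0,   # three detail images with
               (a + b - c - d) / 2.0,   # different directivities
               (a - b - c + d) / 2.0)   # (cf. images A, B and C)
    return ll, details

def multiresolution(img, levels=3):
    # Repeating the step on the residual yields (A, B, C), (D, E, F), ...
    residual = img.astype(float)
    pyramid = []
    for _ in range(levels):
        residual, details = haar_step(residual)
        pyramid.append(details)
    return residual, pyramid
```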
  • FIG. 12 is a flowchart illustrating an example of processing of the image processing system 300 according to this modification.
  • In step S501, the image acquisition unit 310 acquires a plurality of images obtained by capturing a single object while changing the focus position and stores the images in the storage unit 314 .
  • In step S502, the band processing unit 320 performs wavelet transformation for the plurality of images at different focus positions stored in the storage unit 314 .
  • the band processing unit 320 outputs the transformation result to the band characteristics evaluation unit 330 .
  • In step S503, the band characteristics evaluation unit 330 calculates an evaluation value for each region (p, q) of the plurality of images that have undergone the wavelet transformation.
  • the coefficient at the n-th stage of the wavelet transformation is set as the band evaluation value Q(k, n, p, q) for each region (p, q), that is, for each data I(k, p, q).
  • the band characteristics evaluation unit 330 outputs the band evaluation value Q(k, n, p, q) to the statistical information calculation unit 340 .
  • In step S504, the statistical information calculation unit 340 calculates a statistical information value.
  • the statistical information calculation unit 340 outputs the calculated statistical information value to the weighting factor calculation unit 350 .
  • In step S505, the weighting factor calculation unit 350 calculates a weighting factor corresponding to each band based on the statistical information value L(f n ) input from the statistical information calculation unit 340 .
  • the weighting factor L N (f m ) is calculated, for example, by dividing the statistical information value L(f m ) by the sum of the values L(f n ) for all bands, as described above.
  • the weighting factor calculation unit 350 outputs the calculated weighting factor L N (f m ) to the contrast evaluation unit 360 .
  • In step S506, the contrast evaluation unit 360 multiplies the band evaluation value Q(k, n, p, q) input from the band characteristics evaluation unit 330 by the weighting factor L N (f m ) of the corresponding frequency band input from the weighting factor calculation unit 350 , and performs inverse transformation, thereby calculating the contrast evaluation value D(k, i, j) for each region (i, j) of the images before the wavelet transformation.
  • the contrast evaluation unit 360 outputs the calculated contrast evaluation value D(k, i, j) to the in-focus evaluation unit 370 .
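A compact sketch of this wavelet-domain weighting and inverse transformation, assuming the PyWavelets package (pywt) and an illustrative Haar wavelet, might look as follows; the handling of the low-pass residual and the per-stage weights are assumptions, not taken from the patent.

```python
import numpy as np
import pywt

def contrast_from_wavelet(img, stage_weights, wavelet="haar", levels=3):
    """stage_weights: one weighting factor per decomposition stage, coarse to fine."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    # coeffs = [approximation, (details of coarsest stage), ..., (details of stage 1)]
    weighted = [np.zeros_like(coeffs[0])]   # assume the residual carries no contrast
    for stage, (ch, cv, cd) in enumerate(coeffs[1:]):
        w = stage_weights[stage]
        weighted.append((w * ch, w * cv, w * cd))
    # The inverse transform yields a contrast evaluation value per original pixel.
    return pywt.waverec2(weighted, wavelet)
```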
  • In step S507, the in-focus evaluation unit 370 evaluates an in-focus state based on the contrast evaluation value D(k, i, j), as in the fourth embodiment, and outputs the depth information of each pixel to the 3D shape estimation unit 380 .
  • In step S508, the 3D shape estimation unit 380 performs optimization such as smoothing of the depth information based on the depth information, estimates the 3D shape of the object, and outputs the estimated 3D shape of the object to the image synthesis unit 390 .
  • In step S509, the image synthesis unit 390 synthesizes the plurality of images at different focus positions based on the 3D shape of the object and the plurality of images, thereby creating a synthesized image.
  • The fifth embodiment is a microscope system 400 comprising the image processing system 300 according to the fourth embodiment.
  • FIG. 13 shows the outline of an example of the configuration of the microscope system 400 according to this embodiment.
  • the microscope system 400 includes a microscope 210 and the image processing system 300 according to the fourth embodiment.
  • the microscope 210 is, for example, a digital microscope.
  • the microscope 210 includes an LED light source 211 , an illumination optical system 212 , an optical path control element 213 , an objective lens 214 , a sample surface 215 placed on a stage (not shown), an observation optical system 218 , an imaging plane 219 , an imaging unit 220 , and a controller 222 .
  • the observation optical system 218 includes a zoom optical system 216 and an imaging optical system 217 .
  • the objective lens 214 , the optical path control element 213 , the zoom optical system 216 , and the imaging optical system 217 are arranged in this order on the observation optical path from the sample surface 215 to the imaging plane 219 .
  • Illumination light emitted by the LED light source 211 enters the optical path control element 213 via the illumination optical system 212 .
  • the optical path control element 213 reflects the illumination light toward the objective lens 214 on the observation optical path.
  • the illumination light irradiates a sample placed on the sample surface 215 via the objective lens 214 .
  • When irradiated with the illumination light, the sample generates observation light.
  • the observation light is reflected light, fluorescence, or the like.
  • the observation light enters the optical path control element 213 .
  • the optical path control element 213 passes the observation light and makes it enter the observation optical system 218 including the zoom optical system 216 and the imaging optical system 217 .
  • the optical path control element 213 is an optical element that reflects or passes incident light in accordance with its characteristic.
  • As the optical path control element 213 , for example, a polarizer such as a wire grid or a polarizing beam splitter (PBS), which reflects or passes incident light in accordance with its polarization direction, is used.
  • Alternatively, a dichroic mirror that reflects or passes incident light in accordance with its frequency may be used.
  • the observation optical system 218 condenses the observation light on the imaging plane 219 , and forms an image of the sample on the imaging plane 219 .
  • the imaging unit 220 generates an image signal based on the image formed on the imaging plane 219 , and outputs the image signal as a microscopic image to the image acquisition unit 310 .
  • the controller 222 controls the operations of the microscope 210 .
  • the microscope 210 acquires a plurality of microscopic images of a single sample captured on different focal planes.
  • the controller 222 causes the imaging unit 220 to acquire the image of the sample on each focal plane while controlling the optical system of the microscope 210 to gradually change the focal plane.
  • For example, the controller 222 causes the imaging unit 220 to acquire each image while changing the height of the stage or the height of the objective lens of the microscope 210 .
  • the controller 222 outputs the acquired images and the information about the focal position which is associated with the images to the image acquisition unit 310 .
  • the operation of the microscope system 400 will be described.
  • The sample is placed on the stage (not shown), whereby the sample surface 215 is set.
  • the controller 222 controls the microscope 210 .
  • the controller 222 gradually changes the focal position of the optical system for the sample by, for example, gradually changing the position of the sample surface 215 in the optical axis direction. More specifically, for example, the controller 222 changes the height of the stage, the height of the objective lens, or the position of the focus lens of the microscope 210 .
  • the controller 222 causes the imaging unit 220 to sequentially acquire the microscopic image of the sample at each focal position.
  • the image acquisition unit 310 acquires a microscopic image of a sample at each focus position from the imaging unit 220 .
  • the image acquisition unit 310 also acquires, from the controller 222 , the focus position at the time of capture of each image.
  • the image acquisition unit 310 stores the acquired microscopic image in a storage unit 314 in association with the focus position.
  • the microscope system 400 creates a synthesized image, for example, a 3D reconstructed image or an all-in-focus image concerning the microscopic image.
  • An image synthesis unit 390 outputs the created synthesized image to, for example, a display unit to display it or a storage device to store it. According to the 3D reconstructed image or all-in-focus image, the user can easily recognize an object image having a depth larger than the depth of field, which is difficult with a general microscopic image.
  • the illumination optical system 212 , the optical path control element 213 , the objective lens 214 , the observation optical system 218 , and the like function as a microscope optical system.
  • the imaging unit 220 functions as an imaging unit configured to acquire an image of a sample via the microscope optical system as a sample image.
  • the image enlargement ratio of the optical system of a microscope is higher than that of the optical system of a digital camera.
  • Hence, the band of the optical system of the microscope is sometimes not so high compared with the sampling band of the image sensor of the camera upon micrography.
  • the band of the optical system can change depending on the numerical aperture, magnification, and the like of the optical system. For example, when the microscope has an optical zoom system, the band of the optical system changes as well.
  • the statistical information calculation unit 340 calculates a statistical information value in consideration of the frequency band of the image.
  • the weighting factor calculation unit 350 calculates the weighting factor based on the statistical information value.
  • Since the contrast evaluation unit 360 determines the contrast evaluation value based on the evaluation value calculated by the band characteristics evaluation unit 330 and the weighting factor calculated in consideration of the frequency band of the image, an accurate contrast evaluation value can be determined.
  • This allows the microscope system 400 to accurately create the 3D reconstructed microscopic image or all-in-focus microscopic image. If the optical system of the microscope 210 includes an optical zoom system, the numerical aperture changes depending on the focal length of the optical zoom system, and the band of the microscopic image accordingly changes. For this reason, the embodiment is particularly effective.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Studio Devices (AREA)
  • Microscopes, Condenser (AREA)

Abstract

An image processing system includes an image acquisition unit, a candidate value estimation unit, a band characteristics evaluation unit, an effective frequency determination unit and a candidate value modification unit. The acquisition unit acquires images. The estimation unit estimates, for each pixel of the images, a candidate value of a 3D shape. The evaluation unit calculates, for each pixel, a band evaluation value of a band included in the images. The determination unit determines an effective frequency of the pixel based on statistical information of the band evaluation value. The modification unit performs data correction or data interpolation for the candidate value based on the effective frequency and calculates a modified candidate value representing the 3D shape.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is based upon and claims the benefit of priority from prior Japanese Patent Applications No. 2012-073354, filed Mar. 28, 2012; and No. 2012-075081, filed Mar. 28, 2012, the entire contents of all of which are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to an image processing system and a microscope system including the same.
  • 2. Description of the Related Art
  • In general, a method of evaluating an in-focus state based on the band contrast of an image is known. In-focus evaluation based on a contrast is used not only for an autofocus function but also, for example, to acquire the depth information of an object. The depth information is acquired by, for example, capturing an object at a plurality of focus positions and then selecting an in-focus image from the plurality of images for each position. In addition, the depth information is used when capturing an object at a plurality of focus positions, selecting an in-focus image from the plurality of images for each position of the object, and synthesizing the in-focus images to create an all-in-focus image or a 3D reconstructed image.
  • When creating an all-in-focus image or a 3D reconstructed image, a best-in-focus image is selected from a plurality of images having different focal planes for each position in an image, and the 3D shape of the sample is estimated. After that, optimization processing needs to be performed for the estimated value of the 3D shape. This optimization processing can include reducing estimation errors of isolated points based on the correlation between pixels. The optimization processing can also include estimating the sample shape for a position where the above-described selection cannot be done.
  • For example, Jpn. Pat. Appln. KOKAI Publication No. 9-298682 discloses a technique concerning a microscope system for creating an all-in-focus image. Jpn. Pat. Appln. KOKAI Publication No. 9-298682 discloses performing processing using a recovery filter after all-in-focus image creation. The frequency band of an image generally changes depending on the optical system, magnification, the characteristics of the object, and the like used to acquire the image. In the technique disclosed in Jpn. Pat. Appln. KOKAI Publication No. 9-298682, the coefficient of the recovery filter is determined in accordance with the settings of the optical system, including the magnification and the numerical aperture of the objective lens, in consideration of the change in the band of the optical system.
  • For example, Jpn. Pat. Appln. KOKAI Publication No. 2010-166247 discloses a technique of judging an in-focus state based on a contrast and creating an all-in-focus image based on an in-focus image. Jpn. Pat. Appln. KOKAI Publication No. 2010-166247 also discloses a technique concerning controlling the characteristics of a filter configured to restrict a high frequency so as to obtain a predetermined contrast even in an out-of-focus region.
  • BRIEF SUMMARY OF THE INVENTION
  • According to an aspect of the present invention, an image processing system includes an image acquisition unit configured to acquire a plurality of images obtained by capturing a single object at different focus positions; a candidate value estimation unit configured to estimate, for each pixel of the images, a candidate value of a 3D shape based on the plurality of images; a band characteristics evaluation unit configured to calculate, for each pixel of the images, a band evaluation value of a band included in the images for each of a plurality of frequency bands; an effective frequency determination unit configured to determine an effective frequency of the pixel based on statistical information of the band evaluation value; and a candidate value modification unit configured to perform at least one of data correction and data interpolation for the candidate value based on the effective frequency and calculate a modified candidate value representing the 3D shape of the object.
  • According to an aspect of the present invention, a microscope system includes a microscope optical system; an imaging unit configured to acquire an image of a sample via the microscope optical system as a sample image; and the above described image processing system which is configured to acquire the sample image as the image.
  • Advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out hereinafter.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
  • The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention, and together with the general description given above and the detailed description of the embodiments given below, serve to explain the principles of the invention.
  • FIG. 1 is a block diagram showing an example of a configuration of an image processing system according to first and second embodiments;
  • FIG. 2 is a view showing an example of a frequency characteristic of a filter bank of a band processing unit according to the first and second embodiments;
  • FIG. 3 is a view showing another example of a frequency characteristic of a filter bank of a band processing unit according to the first and second embodiments;
  • FIG. 4 is a flowchart showing an example of processing of the image processing system according to the first embodiment;
  • FIG. 5 is a flowchart showing an example of noise/isolated point removal processing according to the first embodiment;
  • FIG. 6A is a view showing an example of an original signal corresponding to a shape candidate value so as to explain coring processing;
  • FIG. 6B is a view showing an example of a moving average and a threshold so as to explain coring processing;
  • FIG. 6C is a view showing an example of a result of coring processing so as to explain coring processing;
  • FIG. 7 is a flowchart showing an example of interpolation processing according to the first embodiment;
  • FIG. 8 is a block diagram showing an example of a configuration of a microscope system according to a third embodiment;
  • FIG. 9 is a block diagram showing an example of a configuration of an image processing system according to a fourth embodiment;
  • FIG. 10 is a flowchart showing an example of processing of the image processing system according to the fourth embodiment;
  • FIG. 11 is a view to explain wavelet transformation;
  • FIG. 12 is a flowchart showing an example of processing of an image processing system according to a modification of the fourth embodiment; and
  • FIG. 13 is a block diagram showing an example of a configuration of a microscope system according to a fifth embodiment.
  • DETAILED DESCRIPTION OF THE INVENTION
  • First Embodiment
  • The first embodiment of the present invention will be described with reference to the accompanying drawing. FIG. 1 shows the outline of an example of the configuration of an image processing system 100 according to this embodiment. As shown in FIG. 1, the image processing system 100 comprises an image acquisition unit 110, a band processing unit 120, a band characteristics evaluation unit 130, an effective frequency determination unit 140, a candidate value estimation unit 150, a data modification unit 160, a 3D shape estimation unit 170 and an image synthesis unit 180. The effective frequency determination unit 140 includes a statistical information calculation unit 142 and a parameter determination unit 144. The candidate value estimation unit 150 includes a contrast evaluation unit 152 and a shape candidate estimation unit 154. The data modification unit 160 includes a data correction unit 162 and a data interpolation unit 164.
  • The image acquisition unit 110 includes a storage unit 114. The image acquisition unit 110 acquires a plurality of images obtained by capturing a single object while changing the focus position and stores them in the storage unit 114. Each of the images is assumed to include information about the focus position of the optical system, that is, information about the depth at the time of image acquisition. The image acquisition unit 110 outputs the images in response to requests from the band processing unit 120, the shape candidate estimation unit 154, and the image synthesis unit 180.
  • The band processing unit 120 has a filter bank. That is, the band processing unit 120 includes, for example, a first filter 121, a second filter 122, and a third filter 123. FIG. 2 shows the frequency characteristics of the first filter 121, the second filter 122, and the third filter 123. As shown in FIG. 2, these filters are low-pass filters, and the cutoff frequency becomes high in the order of the first filter 121, the second filter 122, and the third filter 123. That is, the filters pass different signal frequency bands. Note that the first filter 121, the second filter 122, and the third filter 123 may be bandpass filters having frequency characteristics as shown in FIG. 3. Any other filters may be used as long as the plurality of filters are designed to pass different frequency bands. In this embodiment, the band processing unit 120 includes three filters. However, an arbitrary number of filters can be used. The band processing unit 120 acquires the images from the image acquisition unit 110, and performs filter processing for each region (for example, each pixel) of each of the plurality of images at different focus positions using the first filter 121, the second filter 122, and the third filter 123. The following description will be made assuming that the processing is performed for each pixel. However, the processing may be performed for each region including a plurality of pixels. The band processing unit 120 outputs the result of the filter processing to the band characteristics evaluation unit 130.
  • The band characteristics evaluation unit 130 calculates a band evaluation value for each pixel of the plurality of images that have undergone the filter processing. The band evaluation value is obtained by, for example, calculating the integrated value of the signals that have passed the filters. The band evaluation value is thus obtained for each pixel and each frequency band in each image. The band characteristics evaluation unit 130 outputs the calculated band evaluation value to the statistical information calculation unit 142 in the effective frequency determination unit 140 and the contrast evaluation unit 152 in the candidate value estimation unit 150.
  • The statistical information calculation unit 142 in the effective frequency determination unit 140 calculates, for each frequency band, a statistical information value having a relationship to the average of the band evaluation values of the plurality of images at different focus positions. The statistical information will be described later. The statistical information calculation unit 142 outputs the calculated statistical information value to the parameter determination unit 144.
  • The parameter determination unit 144 in the effective frequency determination unit 140 calculates an effective frequency based on the statistical information value input from the statistical information calculation unit 142. The parameter determination unit 144 also calculates, based on the effective frequency, a correction parameter used by the data correction unit 162 in the data modification unit 160 and an interpolation parameter used by the data interpolation unit 164 in the data modification unit 160. Calculation of the correction parameter and the interpolation parameter will be described later. The parameter determination unit 144 outputs the calculated correction parameter to the data correction unit 162 in the data modification unit 160 and the interpolation parameter to the data interpolation unit 164 in the data modification unit 160. Note that the frequency determination can be done using a filter bank as in this embodiment or data based on frequency analysis by orthogonal basis such as wavelet transformation.
  • The contrast evaluation unit 152 in the candidate value estimation unit 150 evaluates the strength of a high-frequency component for each pixel of the plurality of images based on the band evaluation value input from the band characteristics evaluation unit 130 and calculates a contrast evaluation value. To calculate the contrast evaluation value, the contrast evaluation unit 152 can use one of the plurality of band evaluation values calculated by the band characteristics evaluation unit 130 or the plurality of band evaluation values. The contrast evaluation unit 152 outputs the calculated contrast evaluation value for each pixel of each image to the shape candidate estimation unit 154.
  • The shape candidate estimation unit 154 provided in the candidate value estimation unit 150 evaluates the in-focus state of each pixel of each of the plurality of images based on the contrast evaluation value input from the contrast evaluation unit 152. The shape candidate estimation unit 154 selects the best-in-focus image out of the plurality of images having different focal position for each pixel of the image. The shape candidate estimation unit 154 acquires, from the image acquisition unit 110, the information of the focal position when the best-in-focus image has been captured, estimates the depth of the sample corresponding to each pixel of the image based on the information, and calculates a shape candidate value that is information as the estimation value of the 3D shape of the object. For a pixel for which the depth of the object could not be estimated based on the contrast evaluation value, the shape candidate estimation unit 154 sets a value representing inestimability as the shape candidate value corresponding to the pixel. The shape candidate estimation unit 154 outputs each calculated shape candidate value to the data correction unit 162 in the data modification unit 160.
  • The data correction unit 162 provided in the data modification unit 160 performs noise coring for the shape candidate values input from the shape candidate estimation unit 154 to remove noise of the shape candidate values. When performing the coring processing, the data correction unit 162 uses the correction parameters input from the parameter determination unit 144, as will be described later in detail. The data correction unit 162 outputs, to the data interpolation unit 164, noise-removed shape candidate values that are shape candidate values having undergone noise removal.
  • The data interpolation unit 164 provided in the data modification unit 160 interpolates data for each pixel having a value representing inestimability out of the noise-removed shape candidate values input from the data correction unit 162. When interpolating data, the data interpolation unit 164 uses the interpolation parameters input from the parameter determination unit 144, as will be described later in detail. The data interpolation unit 164 outputs, to the 3D shape estimation unit 170, interpolated shape candidate values that are shape candidate values having undergone noise removal and interpolation of the values of the inestimable pixels.
  • The 3D shape estimation unit 170 optimizes depth information based on the interpolated shape candidate values input from the data interpolation unit 164, and determines the estimated value of the 3D shape of the object. The 3D shape estimation unit 170 outputs the determined 3D shape of the object to the image synthesis unit 180. The image synthesis unit 180 synthesizes a synthesized image by combining the plurality of images having different focal position based on the 3D shape of the object input from the 3D shape estimation unit 170 and the plurality of images acquired from the image acquisition unit 110. This synthesized image is, for example, a 3D reconstructed image or an all-in-focus image. The image synthesis unit 180 outputs the created synthesized image to, for example, a display unit to display it, or outputs the synthesized image to, for example, a storage device to store it.
  • An example of the operation of the image processing system 100 according to this embodiment will be described with reference to the flowchart of FIG. 4. In step S101, the image acquisition unit 110 acquires a plurality of images obtained by capturing a single object while changing the focus position. Each of the images is assumed to include information about the depth such as information about the focus position of the optical system at the time of acquiring the image. The image acquisition unit 110 stores the acquired images in the storage unit 114.
  • In step S102, the band processing unit 120 performs filter processing for each pixel of the plurality of images at different focus positions stored in the storage unit 114 using, for example, the first filter 121, the second filter 122, and the third filter 123. An arbitrary number of filters can be used. Hence, the following description will be made assuming that the band processing unit 120 includes N filters. The band processing unit 120 outputs the result of the filter processing to the band characteristics evaluation unit 130.
  • In step S103, the band characteristics evaluation unit 130 calculates, for each band, a band evaluation value for each region of the plurality of images that have undergone the filter processing. That is, the band characteristics evaluation unit 130 calculates, for each frequency band fn (n=1, 2, . . . , N), a band evaluation value Q(h, fn, i, j) for each focus position h (h=1, 2, . . . , H) and each pixel (i, j) (each pixel (i, j) included in a whole region A of the image), that is, for each data I(h, i, j). The band evaluation value Q(h, fn, i, j) is calculated as, for example, the integrated value of the signals that have passed the filters, which is an amount corresponding to the amplitude in each band the filter passes. The band characteristics evaluation unit 130 outputs the band evaluation value Q(h, fn, i, j) to the statistical information calculation unit 142 and the contrast evaluation unit 152.
  • In step S104, the statistical information calculation unit 142 calculates, for each frequency band, a statistical information value representing the average of the band evaluation values Q(h, fn, i, j) of the plurality of images at different focus positions. The statistical information value is represented by, for example, the average L(fn, i, j) given by
  • $L(f_n, i, j) = \frac{1}{H} \sum_{h=1}^{H} Q(h, f_n, i, j)$.  (1)
  • The statistical information calculation unit 142 outputs the calculated statistical information value to the parameter determination unit 144.
  • In step S105, the parameter determination unit 144 determines an effective frequency fν. The effective frequency fν is determined in, for example, the following way. A variable LN(fn, i, j) is set to 1 or 0 depending on whether the value concerning the average L(fn, i, j) meets a predetermined condition. That is, the variable LN(fn, i, j) is given by, for example,
  • $L_N(f_m, i, j) = \begin{cases} 1 & \text{if } \dfrac{L(f_m, i, j)}{\sum_{f_n=f_1}^{f_N} L(f_n, i, j)} > \mathrm{Thr} \\ 0 & \text{else,} \end{cases}$  (2)
  • where a threshold Thr is an arbitrary design value such as 0.2 when N=5. The effective frequency fν is determined using the variable LN(fn, i, j). For example, counting is performed from the low frequency side, that is, n is sequentially increased from the low frequency side, and fm-1 relative to a minimum frequency fm meeting
  • $\sum_{m=1}^{n} L_N(f_m, i, j) < n$  (3)
  • is determined as the effective frequency fν.
    That is, a maximum frequency meeting
  • $\dfrac{L(f_m, i, j)}{\sum_{f_n=f_1}^{f_N} L(f_n, i, j)} > \mathrm{Thr}$  (4)
  • is determined as the effective frequency fν. Note that processing using expressions (2) and (3) need not always be performed, and a maximum frequency more than the threshold Thr may simply be determined as the effective frequency fν.
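A minimal Python rendering of this rule (with assumed variable names and the bands ordered from low to high frequency) simply returns the highest band whose normalized average exceeds the threshold:

```python
import numpy as np

def effective_frequency_index(L_pixel, thr=0.2):
    """L_pixel: array of the averages L(f_1..f_N, i, j) for one pixel."""
    ratios = L_pixel / L_pixel.sum()
    passing = np.nonzero(ratios > thr)[0]
    # Fall back to the lowest band if no band passes (a choice not stated in the text).
    return int(passing.max()) if passing.size else 0
```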
  • In step S106, the parameter determination unit 144 determines correction parameters m, n, and w(k, l) to be used by the data correction unit 162, and interpolation parameters σk and σl to be used by the data interpolation unit 164 based on the effective frequency fν. The parameter determination unit 144 stores, for example, a lookup table representing the relationship between the effective frequency fν and correction parameters m, n, and w(k, l). The parameter determination unit 144 determines correction parameters m, n, and w(k, l) based on the effective frequency fν by looking up the lookup table. The lower the effective frequency fν is, the larger the values of correction parameters m and n are. As correction parameter w(k, l), a function that does not decrease the weight when the values m and n are large is given in equation (7), to be described later. The parameter determination unit 144 outputs the determined correction parameters m, n, and w(k, l) to the data correction unit 162.
  • The dimension of the distance and the dimension of the frequency (for example, the number of cycles per unit distance) hold a reciprocal relationship. Hence, the parameter determination unit 144 may obtain correction parameters m and n by
  • $m = \mathrm{int}\!\left(\dfrac{C_1}{f_\nu}\right), \qquad n = \mathrm{int}\!\left(\dfrac{C_2}{f_\nu}\right)$,  (5)
  • where int is integerization processing, and C1 and C2 are arbitrary coefficients. Alternatively, a function generally having a negative correlation may be used.
  • The parameter determination unit 144 also determines interpolation parameters σk and σl to be used by the data interpolation unit 164 based on the effective frequency fν. The parameter determination unit 144 stores, for example, a lookup table representing the relationship between the effective frequency fν and the interpolation parameters σk and σl. The parameter determination unit 144 determines the interpolation parameters σk and σl based on the effective frequency fν by looking up the lookup table. The lower the effective frequency fν is, the larger the values of the interpolation parameters σk and σl are. The parameter determination unit 144 outputs the determined interpolation parameters σk and σl to the data interpolation unit 164.
  • As will be described later, the interpolation parameters σk and σl represent the variance. The variance has the dimension of the distance. Hence, like correction parameters m and n, the parameter determination unit 144 may obtain the interpolation parameters σk and σl by
  • $\sigma_k = \mathrm{int}\!\left(\dfrac{C_3}{f_\nu}\right), \qquad \sigma_l = \mathrm{int}\!\left(\dfrac{C_4}{f_\nu}\right)$,  (6)
  • where int is integerization processing, and C3 and C4 are arbitrary coefficients.
  • In step S107, the contrast evaluation unit 152 acquires the band evaluation value Q(h, fn, i, j) from the band characteristics evaluation unit 130, evaluates the strength of a high-frequency component for each pixel of the plurality of images, and calculates a contrast evaluation value. The contrast evaluation unit 152 outputs the calculated contrast evaluation value for each pixel of each image to the shape candidate estimation unit 154.
  • In step S108, the shape candidate estimation unit 154 evaluates the in-focus state of each pixel of the plurality of images based on the contrast evaluation value input from the contrast evaluation unit 152. For example, the higher the contrast is, the higher the shape candidate estimation unit 154 evaluates the degree of in-focus. The shape candidate estimation unit 154 also selects a best-in-focus image from the plurality of images having different focal planes for each pixel of an image. The shape candidate estimation unit 154 acquires, from the image acquisition unit 110, information of the focus position at the time of capture of the best-in-focus image. The shape candidate estimation unit 154 estimates the depth of the object corresponding to each pixel of the image based on the information acquired from the image acquisition unit 110, and calculates a shape candidate value P(i, j) that is information about the shape of the object. The shape candidate value P(i, j) represents the depth of the object at, for example, coordinates (i, j). If the depth of the object could not be estimated based on the contrast evaluation value, the shape candidate estimation unit 154 sets a value representing inestimability as the shape candidate value P(i, j) corresponding to the pixel. The shape candidate estimation unit 154 outputs the calculated shape candidate value P(i, j) to the data correction unit 162 in the data modification unit 160.
  • In step S109, the data correction unit 162 performs noise/isolated point removal processing of removing noise and isolated points from the shape candidate value P(i, j). In this embodiment, the noise/isolated point removal processing is performed by coring processing. The noise/isolated point removal processing will be explained with reference to the flowchart shown in FIG. 5.
  • In step S210, the data correction unit 162 loads the shape candidate value P(i, j). In this embodiment, the image is assumed to have a size of (p+1) pixels from 0 to p in the horizontal direction and a size of (q+1) pixels from 0 to q in the vertical direction. In step S220, the data correction unit 162 loads correction parameters m, n, and w(k, l).
  • In this embodiment, as shown in FIG. 5, the following processing is sequentially performed for the shape candidate values P(i, j) corresponding to all pixels of an image in steps S231 to S234. In step S231, the data correction unit 162 calculates a reference value Pave(i, j, m, n) of a region including (i, j) based on
  • $P_{ave}(i, j, m, n) = \dfrac{1}{(2m+1)(2n+1)} \sum_{k=-m}^{m} \sum_{l=-n}^{n} w(k, l)\, P(i+k, j+l)$.  (7)
  • As shown in equation (7), the reference value Pave(i, j, m, n) indicates the average value in this region. In equation (7), correction parameters m, n, and w(k, l) determined by the parameter determination unit 144 are used. That is, equation (7) changes in accordance with the effective frequency fν.
  • In step S232, the data correction unit 162 determines whether the difference between the shape candidate value P(i, j) and the reference value Pave(i, j, m, n) is smaller than a predetermined threshold. If the difference between the shape candidate value P(i, j) and the reference value Pave(i, j, m, n) is smaller than a predetermined threshold Thr-1, that is, if “|P(i, j)−Pave(i, j, m, n)|<Thr-1” is true, the process goes to step S234. Note that the threshold Thr-1 is defined based on an empirical rule such as a criterion to determine whether the difference falls within the error range of the reference value or not.
  • On the other hand, if the difference between the shape candidate value P(i, j) and the reference value Pave(i, j, m, n) is not smaller than the predetermined threshold, the data correction unit 162 determines, in step S233, whether the shape candidate value P(i, j) is an isolated point. If the shape candidate value P(i, j) is an isolated point, the process goes to step S234.
  • Whether the shape candidate value P(i, j) is an isolated point is determined by checking whether |P(i, j)−Pave(i, j, m, n)| > Thr-2 holds, where Thr-2 is a threshold set based on the variance in a predetermined region of a plurality of pixels. More specifically, for example, when the standard deviation of the values in that region is σ, Thr-2 is set to 2σ.
  • In step S234, the data correction unit 162 sets the value of the shape candidate value P(i, j) to the reference value Pave(i, j, m, n). The processes in steps S231 to S234 are performed for all pixels. That is, letting ΔT be the predetermined threshold, this processing is represented by a noise-removed shape candidate value P′(i, j) that is the shape candidate value after the processing and given by
  • $$P'(i,j)=\begin{cases}P(i,j), & \left|P(i,j)-P_{\mathrm{ave}}(i,j,m,n)\right|\ge\Delta T\\ P_{\mathrm{ave}}(i,j,m,n), & \left|P(i,j)-P_{\mathrm{ave}}(i,j,m,n)\right|<\Delta T\end{cases}\qquad(8)$$
  • The concept of the coring processing used in this embodiment will be explained with reference to FIGS. 6A, 6B, and 6C. FIG. 6A shows an original signal corresponding to the shape candidate value P(i, j). A moving average corresponding to the average value calculated by equation (7) for the original signal is indicated by the dashed-dotted line in FIG. 6B. Values obtained by adding and subtracting a threshold corresponding to the predetermined threshold ΔT to and from the moving average are indicated by broken lines in FIG. 6B. In this case, as represented by equation (8), when the original signal lies between the two broken lines in FIG. 6B, the original signal is replaced with the moving average indicated by the dashed-dotted line. As a consequence, a result as shown in FIG. 6C is obtained. Note that in FIG. 6C, a circle indicates a value replaced with the moving average. As described above, the coring processing has the effect of suppressing variation components determined to be small-amplitude signals and thereby removing errors.
  • The data correction unit 162 outputs the value obtained by performing the noise/isolated point removal processing described with reference to FIG. 5 for the shape candidate value P(i, j), that is, the noise-removed shape candidate value P′(i, j) to the data interpolation unit 164.
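  • The noise/isolated point removal of equations (7) and (8) can be illustrated with a minimal sketch, assuming the shape candidate values are held in a 2-D NumPy array, the weights w(k, l) are all 1, and the thresholds Thr-1 (ΔT) and Thr-2 are given as plain numbers; the function name `coring` is hypothetical and this is not the patent's implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coring(P, m, n, delta_t, thr2):
    """Noise/isolated-point removal following equations (7) and (8)."""
    # Reference value Pave(i, j, m, n): local mean over a (2m+1) x (2n+1)
    # window with uniform weights w(k, l) = 1 (equation (7)).
    p_ave = uniform_filter(P.astype(float), size=(2 * m + 1, 2 * n + 1),
                           mode='nearest')
    diff = np.abs(P - p_ave)
    # Small-amplitude variations (equation (8)) and isolated points
    # (step S233) are both replaced with the reference value (step S234).
    replace = (diff < delta_t) | (diff > thr2)
    return np.where(replace, p_ave, P)
```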
  • In step S110, the data interpolation unit 164 performs interpolation processing, i.e., it interpolates data for which the noise-removed shape candidate value P′(i, j) input from the data correction unit 162 represents inestimability. Inestimability means that the shape candidate estimation unit 154 could not specify the in-focus state of an image when calculating the shape candidate value P(i, j) based on the contrast evaluation value calculated by the contrast evaluation unit 152. That is, inestimability indicates that, for the pixel of interest, the contrast evaluation value of none of the plurality of microscopic images meets a condition representing a predetermined in-focus state.
  • If values around the noise-removed shape candidate value P′(i, j) representing inestimability are not inestimable, that is, if only one pixel out of a region of, for example, 5 pixels×5 pixels is inestimable, the data interpolation unit 164 interpolates the inestimable data using neighboring data. At this time, the data interpolation unit 164 can use, for example, bilinear interpolation or bicubic interpolation for the data interpolation.
  • On the other hand, if noise-removed shape candidate values P′(i, j) representing inestimability exist continuously, the data interpolation unit 164 interpolates the inestimable data based on a function representing the correlation to neighboring data. That is, a distribution around the inestimable portion is assumed, and the value of that portion is estimated from it. In this embodiment, a kernel regression method is used for the interpolation. At this time, the data interpolation unit 164 uses the interpolation parameters σk and σl input from the parameter determination unit 144. An example of the interpolation processing will be described with reference to the flowchart of FIG. 7.
  • In step S310, the data interpolation unit 164 loads the noise-removed shape candidate value P′(i, j). In step S320, the data interpolation unit 164 loads the interpolation parameters σk and σl. Next, the data interpolation unit 164 calculates interpolation data R(i, j). The interpolation data R(i, j) is given by
  • $$R(i,j)=\frac{1}{N}\sum_{\substack{k,l\\ P'(i+k,j+l)\neq 0}}P'(i+k,j+l)\,C(k,l),\qquad(9)$$
  • where N is the number of sampling points, which is given by
  • $$N=(2k+1)\cdot(2l+1).\qquad(10)$$
  • In addition, C(k, l) is given by
  • $$C(k,l)=B\cdot\exp\!\left(-\frac{1}{2}\left(\frac{k}{\sigma_{k}}\right)^{2}\right)\exp\!\left(-\frac{1}{2}\left(\frac{l}{\sigma_{l}}\right)^{2}\right).\qquad(11)$$
  • As indicated by equation (11), C(k, l) is determined in accordance with the interpolation parameters σk and σl. B is a variable.
  • In step S331, the data interpolation unit 164 updates the variable B. In step S332, the data interpolation unit 164 superimposes a Gaussian kernel on the noise-removed shape candidate value P′(i, j) based on equations (9) to (11). In step S333, the data interpolation unit 164 determines whether the value obtained in step S332 meets a predetermined convergence condition which is, for example, given by
  • $$\sum_{i,j\in A}\left|P'(i,j)-R(i,j)\right|<\mathrm{Thr},\qquad(12)$$
  • where Thr is a predetermined threshold. If the value meets the convergence condition, the process goes to step S340. On the other hand, if the value does not meet the convergence condition, the processes in steps S331 to S333 are repeated up to a predetermined count D. That is, while the value of the variable B is changed in step S331, the interpolation data R(i, j) is calculated in step S332 and checked against the convergence condition in step S333, until the condition is met.
  • Upon determining in step S333 that the interpolation data R(i, j) meets the convergence condition, in step S340, the data interpolation unit 164 generates expansion data based on the interpolation data R(i, j) that meets the convergence condition. In step S350, the data interpolation unit 164 assigns the generated expansion data to the inestimable data of the noise-removed shape candidate values P′(i, j), thereby generating an interpolated shape candidate value P″(i, j). The data interpolation unit 164 outputs the generated interpolated shape candidate value P″(i, j) to the 3D shape estimation unit 170.
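  • A rough sketch of the interpolation of equations (9) to (12) is given below, under assumptions not stated in the text: inestimable pixels are stored as 0, the variable B is taken from a caller-supplied list of candidate values, and the loops are left unoptimized for readability. The function name is hypothetical.

```python
import numpy as np

def interpolate_inestimable(P, sigma_k, sigma_l, k_half, l_half, thr, b_values):
    """Kernel-regression interpolation following equations (9) to (12)."""
    valid = P != 0                                  # inestimable pixels are 0 here
    rows, cols = P.shape
    N = (2 * k_half + 1) * (2 * l_half + 1)         # equation (10)
    k = np.arange(-k_half, k_half + 1)[:, None]
    l = np.arange(-l_half, l_half + 1)[None, :]
    R = np.zeros_like(P, dtype=float)
    for B in b_values:                              # step S331: update B
        # Gaussian kernel C(k, l) of equation (11).
        C = B * np.exp(-0.5 * (k / sigma_k) ** 2) * np.exp(-0.5 * (l / sigma_l) ** 2)
        for i in range(rows):                       # step S332: superimpose the kernel
            for j in range(cols):
                acc = 0.0
                for dk in range(-k_half, k_half + 1):
                    for dl in range(-l_half, l_half + 1):
                        ii, jj = i + dk, j + dl
                        if 0 <= ii < rows and 0 <= jj < cols and valid[ii, jj]:
                            acc += P[ii, jj] * C[dk + k_half, dl + l_half]
                R[i, j] = acc / N                   # equation (9)
        # Step S333: convergence test of equation (12) on the estimable pixels.
        if np.abs(P[valid] - R[valid]).sum() < thr:
            break
    # Step S350: assign the interpolated data to the inestimable pixels.
    return np.where(valid, P, R)
```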
  • Referring back to FIG. 4, the explanation will be continued. In step S111, the 3D shape estimation unit 170 optimizes depth information based on the interpolated shape candidate value P″(i, j) input from the data interpolation unit 164, and estimates the 3D shape of the object. The 3D shape estimation unit 170 outputs the estimated 3D shape of the object to the image synthesis unit 180.
  • In step S112, the image synthesis unit 180 creates a synthesized image by combining the plurality of images having different focus positions, based on the 3D shape of the object input from the 3D shape estimation unit 170 and the plurality of images acquired from the image acquisition unit 110. If the synthesized image is, for example, a 3D reconstructed image, the synthesized image is created by combining the 3D shape with the in-focus images of the respective portions of the 3D shape. If the synthesized image is, for example, an all-in-focus image, regions extracted from the images whose focus positions correspond to the depths of the respective pixels are combined, thereby synthesizing an image that is in focus at all pixels. The image synthesis unit 180 outputs the created synthesized image to a display unit or a storage device.
  • When an image is taken of an object whose depth is greater than the depth of field, as is typical of a microscope image, it is difficult for the user to recognize the object in the image. With a 3D reconstructed image or an all-in-focus image, however, the user can easily recognize the image of an object whose depth is greater than the depth of field.
  • As described above, for example, the image acquisition unit 110 functions as an image acquisition unit configured to acquire a plurality of images obtained by capturing a single object at different focus positions. For example, the candidate value estimation unit 150 functions as a candidate value estimation unit configured to estimate, for each pixel of the images, a candidate value of a 3D shape based on the plurality of images. For example, the band characteristics evaluation unit 130 functions as a band characteristics evaluation unit configured to calculate, for each pixel of the plurality of images, the band evaluation value of a band included in the images for each of a plurality of frequency bands. For example, the effective frequency determination unit 140 functions as an effective frequency determination unit configured to determine the effective frequency of the pixel based on statistical information of the band evaluation value. For example, the data modification unit 160 functions as a candidate value modification unit configured to perform at least one of data correction and data interpolation for the candidate value based on the effective frequency and calculate a modified candidate value representing the 3D shape of the object. For example, the data correction unit 162 functions as a modified candidate value calculation unit configured to calculate a modified candidate value using correlation of the value of a local region represented by the candidate value. For example, the 3D shape estimation unit 170 functions as an all-in-focus image creation unit or a 3D reconstructed image creation unit.
  • According to this embodiment, errors caused by noise and by the estimation processing are effectively reduced in the images as a result of the noise/isolated point removal processing by the data correction unit 162. In this embodiment, the correction parameters m and n used in the noise/isolated point removal processing are determined based on the effective frequency fν of the images. The lower the effective frequency fν is, the larger the values of the correction parameters m and n are. For this reason, in equation (7), as the effective frequency fν decreases, the reference value Pave(i, j, m, n) is calculated based on the shape candidate values P(i, j) in a wider region; as the effective frequency fν increases, it is calculated based on the shape candidate values P(i, j) in a narrower region. That is, the optimum reference value Pave(i, j, m, n) is calculated in accordance with the effective frequency fν of the images. As a result, noise can be reduced more accurately than in a case in which the effective frequency fν of the images is not taken into consideration: the shape candidate values P(i, j) are not excessively smoothed, and even if many noise components exist, the input signal is not excessively evaluated as a high frequency signal.
  • In the interpolation processing of the data interpolation unit 164, information of the effective frequency fν of the images is used when assuming the correlation of neighboring data. That is, a Gaussian kernel optimized for the frequency band can be generated, and the depth of the object can be estimated at positions that are inestimable from the contrast evaluation value. At this time, the interpolation parameters σk and σl are given based on the effective frequency fν. Compared with a case in which the convergence value is searched for while changing the values of the interpolation parameters, the calculation amount is small, so the processing speed increases and the calculation result is prevented from converging to an incorrect value. The lower the effective frequency fν is, the larger the values of the interpolation parameters σk and σl are. For this reason, in equation (9), as the effective frequency fν decreases, the interpolation data R(i, j) is calculated based on the noise-removed shape candidate values P′(i, j) in a wider region; as the effective frequency fν increases, it is calculated based on the noise-removed shape candidate values P′(i, j) in a narrower region. That is, the noise-removed shape candidate values P′(i, j) are not excessively smoothed, edge structures are evaluated appropriately, and even if many noise components exist, the input signal is not excessively evaluated as a high frequency signal.
  • Note that the above-described equations are merely examples. Other equations may of course be used in their place as long as the above-described effects are obtained. For example, polynomials of real-number order, logarithmic functions, or exponential functions are usable in place of equations (5) and (6). A variance or the like is also usable in place of equation (1). In the above-described embodiment, the processing is performed for each pixel. However, the processing may be performed for each region including a plurality of pixels.
  • Modification of First Embodiment
  • A modification of the first embodiment will be described. Points of difference from the first embodiment will be explained here. The same reference numbers denote the same parts, and a description thereof will be omitted. In the processing of the data interpolation unit 164 according to the first embodiment, the interpolation parameters σk and σl in equation (11) are fixed values, and they remain unchanged in the loop processing of steps S331 to S333 described with reference to FIG. 7.
  • In this modification, however, the convergence value is searched for while also changing σk and σl in step S331. Hence, in this modification, the parameter determination unit 144 outputs, to the data interpolation unit 164, a range or probability density function from which the interpolation parameters σk and σl can be set. In the loop processing of steps S331 to S333, the data interpolation unit 164 searches for the convergence value while also changing σk and σl based on this range or probability density function input from the parameter determination unit 144. The rest of the operation is the same as in the first embodiment.
  • According to this modification, although the amount of processing is greater than in the first embodiment, the interpolation data R(i, j) can converge to a convergence value more suitable than in the first embodiment. In the modification as well, the parameter determination unit 144 determines the range or probability density function capable of setting the interpolation parameters σk and σl based on the effective frequency fν of the images. Hence, the same effects as in the first embodiment can be obtained.
  • Second Embodiment
  • The second embodiment of the present invention will be described. Points of difference from the first embodiment will be explained here. The same reference numbers denote the same parts, and a description thereof will be omitted. In this embodiment, a data correction unit 162 uses a bilateral filter to remove noise. The bilateral filter used in this embodiment is expressed as
  • $$P'(i,j)=\frac{\displaystyle\sum_{k=-m}^{m}\sum_{l=-n}^{n}P(i+k,j+l)\,C(k,l)\,S\!\bigl(P(i,j)-P(i+k,j+l)\bigr)}{\displaystyle\sum_{k=-m}^{m}\sum_{l=-n}^{n}C(k,l)\,S\!\bigl(P(i,j)-P(i+k,j+l)\bigr)},\qquad(13)$$
  • where C(k, l) is a factor that specifies the distance correlation, and S(P1−P2) is a factor that specifies correlation resulting from the pixel level difference between different pixels. The sharpness and the signal-to-noise ratio of a generated image change depending on what kind of distribution function is used for C(k, l) and S(P1−P2).
  • In this embodiment, for example, functions based on a Gaussian distribution are used for C(k, l) and S(P1−P2). That is, C(k, l) is given by, for example,
  • $$C(k,l)=C_{5}\cdot\exp\!\left(-\frac{1}{2}\left(\frac{k}{\sigma_{k}}\right)^{2}\right)\exp\!\left(-\frac{1}{2}\left(\frac{l}{\sigma_{l}}\right)^{2}\right),\qquad(14)$$
  • where σk and σl are correction parameters, and C5 is a predetermined constant. Correction parameters σk and σl are the same as the interpolation parameters σk and σl of the first embodiment. In addition, S(P1−P2) is given by
  • $$S(P_{1}-P_{2})=C_{6}\cdot\exp\!\left(-\frac{1}{2}\left(\frac{P_{1}-P_{2}}{\sigma_{P}}\right)^{2}\right),\qquad(15)$$
  • where σp is a correction parameter, and C6 is a predetermined constant. In this embodiment, the parameter determination unit 144 also determines the correction parameter σp based on the effective frequency fν of the images by looking up a lookup table. The lower the effective frequency fν is, the larger the value of the correction parameter σp is.
  • Since the correction parameter σp is correlated with the effective frequency, the parameter determination unit 144 may obtain the correction parameter σp using an Mth-order polynomial (M is an integer greater than 0) as given by
  • $$\sigma_{p}=\sum_{m=0}^{M}C_{p}(m)\,f_{\nu}^{\,m}.\qquad(16)$$
  • As in the first embodiment, when the information of the effective frequency fν of the images is acquired, the original sharpness of the images can be estimated. For example, when the effective frequency fν is low, C(k, l) is set so as to emphasize long-distance correlation, and S(P1−P2) is set based on the assumption that no abrupt step occurs with respect to neighboring data. As described above, for example, S(P1−P2) functions as a first correlation, that is, the correlation between the values of two spaced-apart points. For example, C(k, l) functions as a second correlation, that is, the correlation based on distance.
  • In this embodiment, information of the original frequency band of the images is used when assuming the correlation of neighboring data. The bilateral filter is set based on the correlation of neighboring data. According to this embodiment, it is consequently possible to acquire a noise-removed shape candidate value P′(i, j) by effectively reducing noise and errors of a shape candidate value P(i, j).
  • Note that in this embodiment as well, correction parameters σk, σl, and σp may be set as a probability density function, as in the modification of the first embodiment. In this case as well, the same effects as in this embodiment can be obtained.
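  • A compact sketch of the bilateral filter of equations (13) to (15) is shown below, assuming the constants C5 and C6 are 1 and that image borders are handled by edge padding; neither assumption is specified in the text, and the function name is hypothetical.

```python
import numpy as np

def bilateral_correction(P, m, n, sigma_k, sigma_l, sigma_p):
    """Bilateral filtering of the shape candidate values, equations (13) to (15)."""
    rows, cols = P.shape
    k = np.arange(-m, m + 1)[:, None]
    l = np.arange(-n, n + 1)[None, :]
    # C(k, l): distance correlation of equation (14) with C5 = 1.
    C = np.exp(-0.5 * (k / sigma_k) ** 2) * np.exp(-0.5 * (l / sigma_l) ** 2)
    padded = np.pad(P.astype(float), ((m, m), (n, n)), mode='edge')
    out = np.empty((rows, cols), dtype=float)
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + 2 * m + 1, j:j + 2 * n + 1]
            # S(P1 - P2): level-difference correlation of equation (15) with C6 = 1.
            S = np.exp(-0.5 * ((P[i, j] - window) / sigma_p) ** 2)
            w = C * S
            out[i, j] = (window * w).sum() / w.sum()   # equation (13)
    return out
```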
  • Modification of Second Embodiment
  • A modification of the second embodiment will be described. Points of difference from the second embodiment will be explained here. The same reference numbers denote the same parts, and a description thereof will be omitted. In this modification, the data correction unit 162 uses a trilateral filter to remove noise. The trilateral filter used in this modification is expressed as
  • $$P'(i,j)=P(i,j)+\frac{\displaystyle\sum_{k=-m}^{m}\sum_{l=-n}^{n}P_{\Delta}(i,j,k,l)\,C(k,l)\,S\!\bigl(P_{\Delta}(i,j,k,l)\bigr)\,N(i,j,k,l)}{\displaystyle\sum_{k=-m}^{m}\sum_{l=-n}^{n}C(k,l)\,S\!\bigl(P_{\Delta}(i,j,k,l)\bigr)\,N(i,j,k,l)},\qquad(17)$$
  • where PΔ(i, j, k, l) is given by
  • $$P_{\Delta}(i,j,k,l)=P(i+k,j+l)-P_{f}(i,j,k,l).\qquad(18)$$
  • In addition, N(i, j, k, l) is given by
  • $$N(i,j,k,l)=\begin{cases}1, & \text{if } \left|U(i+k,j+l)-U(i,j)\right|<\mathrm{Thr}\\ 0, & \text{otherwise}\end{cases}\qquad(19)$$
  • where U(i, j) is the smoothed gradient vector which is given by
  • $$U(i,j)=\frac{\displaystyle\sum_{k=-m}^{m}\sum_{l=-n}^{n}\nabla P(i+k,j+l)\,C(k,l)\,S\!\bigl(\nabla P(i,j)-\nabla P(i+k,j+l)\bigr)}{\displaystyle\sum_{k=-m}^{m}\sum_{l=-n}^{n}C(k,l)\,S\!\bigl(\nabla P(i,j)-\nabla P(i+k,j+l)\bigr)},\qquad(20)$$
  • where Pf(i, j, k, l) is given by
  • $$P_{f}(i,j,k,l)=P(i,j)+U(i,j)_{i}\cdot k+U(i,j)_{j}\cdot l,\qquad(21)$$
  • where U(i, j)i is the horizontal component of the gradient, and U(i, j)j is the vertical component of the gradient.
  • This trilateral filter applies the bilateral filter used in the second embodiment to the gradient ∇P(i, j). Introducing ∇P(i, j) makes it possible to strongly suppress impulse noise, that is, isolated variation components.
  • Even in this modification, C(k, l) and S(P1−P2) determined in accordance with the effective frequency fν of the images are used, as in the second embodiment. As a result, the same effects as in the second embodiment can be obtained.
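  • The trilateral filter of equations (17) to (21) could be sketched as follows, with several assumptions not stated in the text: the gradient ∇P is approximated with `np.gradient`, the same σp is reused for the gradient-difference term of equation (20), C5 = C6 = 1, and borders are edge-padded. The function name is hypothetical.

```python
import numpy as np

def trilateral_correction(P, m, n, sigma_k, sigma_l, sigma_p, thr):
    """Trilateral filtering of the shape candidate values, equations (17) to (21)."""
    P = P.astype(float)
    rows, cols = P.shape
    gi, gj = np.gradient(P)                         # approximation of the gradient of P
    k = np.arange(-m, m + 1)[:, None]
    l = np.arange(-n, n + 1)[None, :]
    C = np.exp(-0.5 * (k / sigma_k) ** 2) * np.exp(-0.5 * (l / sigma_l) ** 2)
    S = lambda d: np.exp(-0.5 * (d / sigma_p) ** 2)
    pad = lambda a: np.pad(a, ((m, m), (n, n)), mode='edge')

    def smooth(g):
        # Bilateral filtering of one gradient component, equation (20).
        out = np.empty_like(g)
        gp = pad(g)
        for i in range(rows):
            for j in range(cols):
                win = gp[i:i + 2 * m + 1, j:j + 2 * n + 1]
                w = C * S(g[i, j] - win)
                out[i, j] = (win * w).sum() / w.sum()
        return out

    ui, uj = smooth(gi), smooth(gj)                 # smoothed gradient U(i, j)
    Pp, uip, ujp = pad(P), pad(ui), pad(uj)
    out = np.empty_like(P)
    for i in range(rows):
        for j in range(cols):
            win = Pp[i:i + 2 * m + 1, j:j + 2 * n + 1]
            # Pf: local plane through P(i, j) with slope U(i, j), equation (21).
            p_f = P[i, j] + ui[i, j] * k + uj[i, j] * l
            p_delta = win - p_f                     # equation (18)
            # N: keep neighbours whose smoothed gradient is similar, equation (19).
            du_i = uip[i:i + 2 * m + 1, j:j + 2 * n + 1] - ui[i, j]
            du_j = ujp[i:i + 2 * m + 1, j:j + 2 * n + 1] - uj[i, j]
            mask = (np.hypot(du_i, du_j) < thr).astype(float)
            w = C * S(p_delta) * mask
            out[i, j] = P[i, j] + (p_delta * w).sum() / max(w.sum(), 1e-12)  # eq. (17)
    return out
```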
  • Third Embodiment
  • The third embodiment of the present invention will be described. Points of difference from the first embodiment will be explained here. The same reference numbers denote the same parts, and a description thereof will be omitted. The third embodiment shows a microscope system 200 comprising the image processing system 100 according to the first embodiment.
  • FIG. 8 shows the outline of an example of the configuration of the microscope system 200 according to this embodiment. As shown in FIG. 8, the microscope system 200 includes a microscope 210 and the image processing system 100 according to the first embodiment. The microscope 210 is, for example, a digital microscope. The microscope 210 includes an LED light source 211, an illumination optical system 212, an optical path control element 213, an objective lens 214, a sample surface 215 placed on a stage (not shown), an observation optical system 218, an imaging plane 219, an imaging unit 220, and a controller 222. The observation optical system 218 includes a zoom optical system 216 and an imaging optical system 217. The objective lens 214, the optical path control element 213, the zoom optical system 216, and the imaging optical system 217 are arranged in this order on the observation optical path from the sample surface 215 to the imaging plane 219.
  • Illumination light emitted by the LED light source 211 enters the optical path control element 213 via the illumination optical system 212. The optical path control element 213 reflects the illumination light toward the objective lens 214 on the observation optical path. The illumination light irradiates a sample placed on the sample surface 215 via the objective lens 214.
  • When irradiated with the illumination light, the sample generates observation light. The observation light is reflected light, fluorescence, or the like. The observation light enters the optical path control element 213. Unlike the illumination light, the observation light passes through the optical path control element 213 and enters the observation optical system 218 including the zoom optical system 216 and the imaging optical system 217. The optical path control element 213 is an optical element that reflects or passes incident light in accordance with its characteristic. As the optical path control element 213, for example, a polarizer such as a wire grid or a polarizing beam splitter (PBS), which reflects or passes incident light in accordance with its polarization direction, is used. Note that as the optical path control element 213, for example, a dichroic mirror that reflects or passes incident light in accordance with its frequency may be used.
  • The observation optical system 218 condenses the observation light on the imaging plane 219, and forms an image of the sample on the imaging plane 219. The imaging unit 220 generates an image signal based on the image formed on the imaging plane 219, and outputs the image signal as a microscopic image to the image acquisition unit 110. The controller 222 controls the operations of the microscope 210. In this embodiment, the microscope 210 acquires a plurality of microscopic images of a single sample captured on different focal planes. Hence, the controller 222 causes the imaging unit 220 to acquire the image of the sample on each focal plane while controlling the optical system of the microscope 210 to gradually change the focal plane. More specifically, for example, the controller 222 causes the imaging unit 220 to acquire each image while changing the height of the stage or the height of the objective lens of the microscope 210. The controller 222 outputs the acquired images and the information about the focus position associated with each image to the image acquisition unit 110.
  • The operation of the microscope system 200 according to this embodiment will be described. The sample is placed on the stage (not shown), whereby the sample surface 215 is set. The controller 222 controls the microscope 210. The controller 222 gradually changes the focus position of the optical system with respect to the sample by, for example, gradually changing the position of the sample surface 215 in the optical axis direction. More specifically, for example, the controller 222 changes the height of the stage, the height of the objective lens, or the position of the focus lens of the microscope 210. At this time, the controller 222 causes the imaging unit 220 to sequentially acquire the microscopic image of the sample at each focus position. The image acquisition unit 110 acquires a microscopic image of the sample at each focus position from the imaging unit 220. The image acquisition unit 110 also acquires, from the controller 222, the focus position at the time of capture of each image. The image acquisition unit 110 stores the acquired microscopic images in a storage unit 114 in association with the focus positions.
  • The processing of creating a synthesized image by combining the plurality of images at different focus positions stored in the storage unit 114 is the same as that of the first embodiment. In this embodiment, the microscope system 200 creates a synthesized image, for example, a 3D reconstructed image or an all-in-focus image, from the microscopic images. The image synthesis unit 180 outputs the created synthesized image to, for example, a display unit to display it or a storage device to store it. With the 3D reconstructed image or all-in-focus image, the user can easily recognize an object whose depth is larger than the depth of field, which is common in microscopic images.
  • As described above, for example, the illumination optical system 212, the optical path control element 213, the objective lens 214, the observation optical system 218, and the like function as a microscope optical system. For example, the imaging unit 220 functions as an imaging unit configured to acquire an image of a sample via the microscope optical system as a sample image.
  • In general, the image enlargement ratio of the optical system of a microscope is higher than that of the optical system of a digital camera. For this reason, in micrography the band of the optical system of the microscope is sometimes not as high as the sampling band of the image sensor of the camera. The band of the optical system can change depending on the numerical aperture, magnification, and the like of the optical system. For example, when the microscope has an optical zoom system, the band of the optical system changes as well. According to this embodiment, the statistical information calculation unit 142 calculates a statistical information value in consideration of the frequency band of the image. The parameter determination unit 144 calculates the correction parameters and the interpolation parameters based on the statistical information value. It is therefore possible to reduce noise and the like accurately and to perform interpolation appropriately as compared to a case in which the effective frequency fν of the image is not taken into consideration. This allows the microscope system 200 to accurately create the 3D reconstructed microscopic image or all-in-focus microscopic image.
  • Note that if the optical system of the microscope 210 includes an optical zoom system, the numerical aperture changes depending on the focal length of the optical zoom system, and the band of the microscopic image accordingly changes. For this reason, the embodiment is particularly effective. In the above-described embodiment, the image processing system 100 is the image processing system according to the first embodiment. However, the second embodiment or a modification thereof may be used.
  • Fourth Embodiment
  • The fourth embodiment of the present invention will be described with reference to the accompanying drawing. FIG. 9 shows the outline of an example of the configuration of an image processing system 300 according to this embodiment. As shown in FIG. 9, the image processing system 300 comprises an image acquisition unit 310, a band processing unit 320, a band characteristics evaluation unit 330, a statistical information calculation unit 340, a weighting factor calculation unit 350, a contrast evaluation unit 360, an in-focus evaluation unit 370, a 3D shape estimation unit 380, and an image synthesis unit 390.
  • The image acquisition unit 310 includes a storage unit 314. The image acquisition unit 310 acquires a plurality of images obtained by capturing a single object while changing the focus position and stores them in the storage unit 314. Each of the images is assumed to include information about the focus position of the optical system at the time of image acquisition, that is, information about the depth of the in-focus positions. The image acquisition unit 310 outputs the images in response to requests from the band processing unit 320 and the image synthesis unit 390.
  • The band processing unit 320 has a filter bank. That is, the band processing unit 320 includes, for example, a first filter 321, a second filter 322, and a third filter 323. The frequency characteristics of the first filter 321, the second filter 322, and the third filter 323 are, for example, as described above with reference to FIG. 2. Note that the first filter 321, the second filter 322, and the third filter 323 may be bandpass filters having frequency characteristics as shown in FIG. 3. Any other filters may be used as long as the plurality of filters are designed to pass different frequency bands. In this embodiment, the band processing unit 320 includes three filters. However, an arbitrary number of filters can be used. The band processing unit 320 acquires the images from the image acquisition unit 310, and performs filter processing for each region (for example, each pixel) of each of the plurality of images at different focus positions using the first filter 321, the second filter 322, and the third filter 323. The band processing unit 320 outputs the result of the filter processing to the band characteristics evaluation unit 330.
  • The band characteristics evaluation unit 330 calculates a band evaluation value for each pixel of the plurality of images that have undergone the filter processing. The band evaluation value is obtained by, for example, calculating the integrated value of the signals that have passed the filters. The band evaluation value is thus obtained for each pixel and each frequency band in each image. The band characteristics evaluation unit 330 outputs the calculated band evaluation value to the statistical information calculation unit 340 and the contrast evaluation unit 360.
  • The statistical information calculation unit 340 calculates, for each frequency band, a statistical information value related to the average of the band evaluation values of the plurality of images at different focus positions. The statistical information will be described later. The statistical information calculation unit 340 outputs the calculated statistical information value to the weighting factor calculation unit 350. The weighting factor calculation unit 350 calculates a value concerning weighting, that is, a weighting factor for each frequency band based on the statistical information value input from the statistical information calculation unit 340. The weighting factor will be described later. The weighting factor calculation unit 350 outputs the calculated weighting factor to the contrast evaluation unit 360.
  • The contrast evaluation unit 360 multiplies the band evaluation value input from the band characteristics evaluation unit 330 by the weighting factor of the corresponding band input from the weighting factor calculation unit 350, thereby calculating a contrast evaluation value. The contrast evaluation unit 360 outputs the calculated contrast evaluation value to the in-focus evaluation unit 370. Based on the contrast evaluation value input from the contrast evaluation unit 360, the in-focus evaluation unit 370 evaluates the in-focus state of each region of each of the plurality of images at different focus positions. The in-focus evaluation unit 370 selects an in-focus image for each region and estimates, based on the information of the focus position at the time of capture of the image, depth information corresponding to each region of the image. The in-focus evaluation unit 370 outputs the depth information of each region of the image to the 3D shape estimation unit 380.
  • The 3D shape estimation unit 380 optimizes depth information based on the depth information input from the in-focus evaluation unit 370, and estimates the 3D shape of the object. The 3D shape estimation unit 380 outputs the estimated 3D shape of the object to the image synthesis unit 390. The image synthesis unit 390 creates a synthesized image by combining the plurality of images having different focus positions, based on the 3D shape of the object input from the 3D shape estimation unit 380 and the plurality of images acquired from the image acquisition unit 310. This synthesized image is, for example, a 3D reconstructed image or an all-in-focus image. The image synthesis unit 390 outputs the created synthesized image to, for example, a display unit to display it, or to a storage device to store it.
  • An example of the operation of the image processing system 300 according to this embodiment will be described with reference to the flowchart of FIG. 10. In step S401, the image acquisition unit 310 acquires a plurality of images obtained by capturing a single object while changing the focus position. Each of the images is assumed to include information about the depth (for example, information about the focus position of the optical system at the time of acquiring the image). The image acquisition unit 310 stores the acquired images in the storage unit 314.
  • In step S402, the band processing unit 320 performs filter processing for each area (for example, each pixel) of the plurality of images at different focus positions stored in the storage unit 314 using, for example, the first filter 321, the second filter 322, and the third filter 323. An arbitrary number of filters can be used. Hence, the following description will be made assuming that the band processing unit 320 includes N filters. The band processing unit 320 outputs the result of the filter processing to the band characteristics evaluation unit 330.
  • In step S403, the band characteristics evaluation unit 330 calculates, for each band, a band evaluation value for each region of the plurality of images that have undergone the filter processing. That is, the band characteristics evaluation unit 330 calculates, for each frequency band fn (n=1, 2, . . . , N), a band evaluation value Q(k, fn, i, j) for each focus position k (k=1, 2, . . . , K) and each pixel (i, j) (each pixel (i, j) included in a whole region A of the image), that is, for each data I(k, i, j). The band evaluation value Q(k, fn, i, j) is calculated as, for example, the integrated value of the signals that have passed the filters, which is an amount corresponding to the amplitude in each band the filter passes. The band characteristics evaluation unit 330 outputs the band evaluation value Q(k, fn, i, j) to the statistical information calculation unit 340.
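  • Steps S402 and S403 can be sketched as follows, under the assumption that the filter bank is given as a list of 2-D band-pass kernels and that the band evaluation value is taken as a locally integrated magnitude of each filtered image; the kernel set, window size, and function name are illustrative only.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def band_evaluation(images, kernels, window=5):
    """Band evaluation values Q(k, f_n, i, j) for a stack of K images."""
    Q = []
    for img in images.astype(float):                     # one image per focus position k
        per_band = []
        for kern in kernels:                             # one kernel per band f_n
            response = convolve(img, kern, mode='nearest')
            # The locally integrated magnitude of the band-pass response is used
            # here as the band evaluation value Q(k, f_n, i, j).
            per_band.append(uniform_filter(np.abs(response), size=window))
        Q.append(per_band)
    return np.asarray(Q)                                 # shape (K, N, H, W)
```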
  • In step S404, the statistical information calculation unit 340 calculates, for each frequency band, a statistical information value related to the average of the band evaluation values Q(k, fn, i, j) of the plurality of images at different focus positions. As the statistical information value, various values calculated by various methods are usable, as will be described later. The statistical information calculation unit 340 outputs the calculated statistical information value to the weighting factor calculation unit 350.
  • In step S405, the weighting factor calculation unit 350 calculates a weighting factor corresponding to each band based on the statistical information value input from the statistical information calculation unit 340. As the weighting factor as well, various values calculated by various methods are usable, as will be described later. The weighting factor calculation unit 350 outputs the calculated weighting factor to the contrast evaluation unit 360.
  • In step S406, the contrast evaluation unit 360 multiplies the band evaluation value Q(k, fn, i, j) input from the band characteristics evaluation unit 330 by the weighting factor of the corresponding frequency band out of the weighting factors input from the weighting factor calculation unit 350, thereby calculating a contrast evaluation value. The contrast evaluation unit 360 outputs the calculated contrast evaluation value to the in-focus evaluation unit 370.
  • In step S407, the in-focus evaluation unit 370 evaluates an in-focus state based on the contrast evaluation value acquired from the contrast evaluation unit 360. For example, the in-focus evaluation unit 370 specifies, for each of the plurality of images at different focus positions, a region where the contrast evaluation value is higher than a predetermined threshold as an in-focus region. The in-focus evaluation unit 370 estimates depth information for a point corresponding to the region from the in-focus region out of the plurality of images at different focus positions and information about the focus position at the time of acquiring the image including the region. The depth information is, for example, a value representing the position of the region in the depth direction. The in-focus evaluation unit 370 outputs the depth information of each region to the 3D shape estimation unit 380.
  • In step S408, the 3D shape estimation unit 380 optimizes the depth information, for example by smoothing, based on the depth information input from the in-focus evaluation unit 370, and estimates the 3D shape of the object. The 3D shape estimation unit 380 outputs the estimated 3D shape of the object to the image synthesis unit 390.
  • In step S409, the image synthesis unit 390 creates a synthesized image by combining the plurality of images having different focus positions based on the 3D shape of the object input from the 3D shape estimation unit 380. If the synthesized image is, for example, a 3D reconstructed image, the synthesized image is created by combining the 3D shape with the in-focus images of the respective portions of the 3D shape. If the synthesized image is, for example, an all-in-focus image, regions extracted from the images whose focus positions correspond to the depths of the respective pixels are combined, thereby synthesizing an image that is in focus at all pixels. The image synthesis unit 390 outputs the created synthesized image to a display unit or a storage device.
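  • For the all-in-focus case of step S409, a minimal sketch is shown below, assuming a grayscale image stack and a per-pixel integer index of the best-in-focus image obtained from the depth information; the function name is hypothetical.

```python
import numpy as np

def all_in_focus(images, k_best):
    """Build an all-in-focus image from a grayscale stack of shape (K, H, W).

    k_best is an integer array (H, W) holding, for each pixel, the index of the
    image judged best in focus.
    """
    # For every pixel, pick the value from the image whose focus position
    # corresponds to the estimated depth of that pixel.
    return np.take_along_axis(images, k_best[None, ...], axis=0)[0]
```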
  • When an image is taken of an object whose depth is greater than the depth of field, as is typical of a microscope image, it is difficult for the user to recognize the object in the image. With a 3D reconstructed image or an all-in-focus image, however, the user can easily recognize the image of an object whose depth is greater than the depth of field.
  • As described above, for example, the image acquisition unit 310 functions as an image acquisition unit configured to acquire a plurality of images obtained by capturing a single object at different focus positions. For example, the band characteristics evaluation unit 330 functions as a band characteristics evaluation unit configured to calculate, for each pixel of the images, the band evaluation value of a band included in the image for each of a plurality of frequency bands. For example, the statistical information calculation unit 340 functions as a statistical information calculation unit configured to calculate, for at least each of the plurality of frequency bands, statistical information using the band evaluation values of at least two focus positions. For example, the weighting factor calculation unit 350 functions as a weighting factor calculation unit configured to calculate, for at least each of the plurality of frequency bands, a weighting factor corresponding to the band evaluation values based on the statistical information. For example, the contrast evaluation unit 360 functions as a contrast evaluation unit configured to calculate a contrast evaluation value for each region including at least one pixel in the plurality of images based on the band evaluation values and the weighting factors. For example, the in-focus evaluation unit 370 functions as an in-focus evaluation unit configured to select an in-focus region out of the regions of the plurality of images based on the contrast evaluation values. For example, the image synthesis unit 390 functions as an all-in-focus image creation unit or a 3D reconstructed image creation unit.
  • According to this embodiment, the band characteristics evaluation unit 330 performs filter processing. The contrast evaluation unit 360 calculates a contrast evaluation value based on a band evaluation value obtained as the result of the filter processing. In general, a contrast evaluation value representing more accurate contrast evaluation is obtained using a filter having a high spectrum for a high frequency. On the other hand, if a contrast evaluation value is calculated based on information of a frequency higher than the frequency band of an image, an inappropriate contrast evaluation value is obtained, which evaluates a factor such as noise irrelevant to the object structure. In this embodiment, the statistical information calculation unit 340 calculates a statistical information value in consideration of the frequency band of the image. The weighting factor calculation unit 350 calculates a weighting factor based on the statistical information value. That is, the weighting factor is determined in consideration of the frequency band of the image. The contrast evaluation unit 360 determines the contrast evaluation value based on the band evaluation value calculated by the band characteristics evaluation unit 330 and the weighting factor calculated by the weighting factor calculation unit 350. It is therefore possible to determine a more accurate contrast evaluation value as compared to a case in which the frequency band of the image is not taken into consideration. This allows the image processing system 300 to accurately create the 3D reconstructed image or all-in-focus image. The image processing system 300 is particularly effective when used for a microscopic image captured by a microscope having a shallow depth of field.
  • Detailed examples of the statistical information value calculated in step S404, the weighting factor calculated in step S405, and the contrast evaluation value calculated in step S406 will be described next.
  • First Example
  • The statistical information value calculated in step S404, the weighting factor calculated in step S405, and the contrast evaluation value calculated in step S406 according to the first example will be described. In this example, the statistical information value is the average of the band evaluation values Q(k, fn, i, j) at all focus positions (k=1, 2, . . . , K) for each of the regions and frequency bands. That is, the average L(fn, i, j) is calculated by
  • $$L(f_{n},i,j)=\frac{1}{K}\sum_{k=1}^{K}Q(k,f_{n},i,j).\qquad(22)$$
  • The weighting factor is a value obtained by dividing the average for each of the regions and frequency bands by the sum of the averages for all frequency bands. That is, a weighting factor LN(fm, i, j) is calculated by
  • $$L_{N}(f_{m},i,j)=\frac{L(f_{m},i,j)}{\displaystyle\sum_{f_{n}=f_{1}}^{f_{N}}L(f_{n},i,j)}.\qquad(23)$$
  • Based on the band evaluation value Q(k, fn, i, j) and the weighting factor LN(fm, i, j), a contrast evaluation value D(k, i, j) is calculated by
  • $$D(k,i,j)=\sum_{f_{m}=f_{1}}^{f_{N}}L_{N}(f_{m},i,j)\cdot Q(k,f_{m},i,j).\qquad(24)$$
  • That is, the contrast evaluation value D(k, i, j) is the sum of the products of the band evaluation value Q(k, fn, i, j) and the weighting factor LN(fm, i, j) of the respective frequency bands.
  • In step S407, the in-focus evaluation unit 370 selects, for example, k that makes the contrast evaluation value D(k, i, j) highest for each region (i, j) and estimates the depth information.
  • Note that in this example, when the weighting factor LN(fm, i, j) is calculated, the average is divided by the sum of the averages for all frequency bands, as indicated by equation (23). However, the weighting factor LN(fm, i, j) may be obtained by dividing the average by the sum of the averages not for all frequency bands but for some frequency bands.
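  • A minimal sketch of equations (22) to (24) and of the selection in step S407 is given below; the array layout of the band evaluation values, with shape (K, N, H, W), and the function name are assumptions of this sketch.

```python
import numpy as np

def contrast_first_example(Q):
    """Contrast evaluation following equations (22) to (24); Q has shape (K, N, H, W)."""
    L = Q.mean(axis=0)                          # equation (22): average over focus positions
    LN = L / L.sum(axis=0, keepdims=True)       # equation (23): normalize over the bands
    D = (LN[None] * Q).sum(axis=1)              # equation (24): weighted sum over the bands
    k_best = D.argmax(axis=0)                   # step S407: focus index with the highest D
    return D, k_best
```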
  • Second Example
  • The statistical information value calculated in step S404, the weighting factor calculated in step S405, and the contrast evaluation value calculated in step S406 according to the second example will be described. In this example, the statistical information value is the average of the band evaluation values Q(k, fn, i, j) at all focus positions for each of the regions and frequency bands, as in the first example. That is, the average L(fn, i, j) is calculated by equation (22). In this example, the average L(fn, i, j) is used as the weighting factor. Hence, the contrast evaluation value D(k, i, j) is calculated by
  • $$D(k,i,j)=\sum_{f_{n}=f_{1}}^{f_{N}}L(f_{n},i,j)\cdot Q(k,f_{n},i,j).\qquad(25)$$
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j).
  • Third Example
  • The statistical information value calculated in step S404, the weighting factor calculated in step S405, and the contrast evaluation value calculated in step S406 according to the third example will be described. In this example, the statistical information value is the average of the band evaluation values Q(k, fn, i, j) at all focus positions for each of the regions and frequency bands, as in the first example. That is, the average L(fn, i, j) is calculated by equation (22). In this example, a relative value of the average L(fn, i, j) to a predetermined frequency band f0 is used as the weighting factor. That is, the weighting factor LN(fm, i, j) is calculated by
  • $$L_{N}(f_{m},i,j)=\frac{L(f_{m},i,j)}{L(f_{0},i,j)}.\qquad(26)$$
  • As the band f0, any value out of n=1 to N is usable. For example, the lowest band is used. The contrast evaluation value D(k, i, j) is calculated by equation (24) using the weighting factor LN(fm, i, j).
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j).
  • Fourth Example
  • The statistical information value calculated in step S404, the weighting factor calculated in step S405, and the contrast evaluation value calculated in step S406 according to the fourth example will be described. In this example, the statistical information value is the average of the band evaluation values at all focus positions for each of the regions and frequency bands, as in the first example. That is, the average L(fn, i, j) is calculated by equation (22). In this example, the weighting factor is set to 1 or 0 depending on whether to meet a predetermined condition. That is, whether to use the band evaluation value Q(k, fn, i, j) is determined in accordance with whether the condition is met. In this example, the weighting factor is calculated by
  • $$L_{N}(f_{m},i,j)=\begin{cases}1, & \text{if } \dfrac{L(f_{m},i,j)}{\sum_{f_{n}=f_{1}}^{f_{N}}L(f_{n},i,j)}>\mathrm{Thr}\\[4pt] 0, & \text{otherwise}\end{cases}\qquad(27)$$
  • where a threshold Thr is an arbitrary design value such as 0.2 when N=3. The contrast evaluation value D(k, i, j) is calculated by equation (24) as in the first example.
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j). Note that the judgment for determining the weighting factor need not use the value obtained by dividing the average L(fn, i, j) by the sum of the averages for all frequency bands as in equation (27); the average L(fn, i, j) itself, the average divided by the sum of the averages for an arbitrary subset of frequency bands, or the average divided by the average for an arbitrary frequency band may be used instead.
  • According to the first to fourth examples, since the weighting factor LN(fm, i, j) is calculated for each region, these examples are particularly effective when the band characteristic is not constant among the regions of an image. According to these examples, the average L(fn, i, j) is calculated for each frequency band as the statistical information value. When the average L(fn, i, j) is small, the band evaluation value Q(k, fn, i, j) either does not include the information necessary for evaluating the contrast or includes noise. According to the first to fourth examples, the weight for such a band evaluation value Q(k, fn, i, j) is made small. It is therefore possible to prevent a band evaluation value Q(k, fn, i, j) that, for example, does not include the necessary information from affecting the contrast evaluation value. As a result, an accurate contrast evaluation value is obtained, and accurate depth information estimation is implemented based on it.
  • Fifth Example
  • The statistical information value calculated in step S404, the weighting factor calculated in step S405, and the contrast evaluation value calculated in step S406 according to the fifth example will be described. In this example, the statistical information value is the variation of the band evaluation values Q(k, fn, i, j) at all focus positions (k=1, 2, . . . , K) for each of the regions and frequency bands. In this example, a variance is used as an example of the variation. That is, a variance S(fn, i, j) is calculated, using, for example, the average L(fn, i, j) calculated by equation (22), by
  • $$S(f_{n},i,j)=\sum_{k=1}^{K}\bigl(Q(k,f_{n},i,j)-L(f_{n},i,j)\bigr)^{2}.\qquad(28)$$
  • The weighting factor is a value obtained by dividing the variance for each of the regions and frequency bands by the sum of the variances for all frequency bands. That is, a weighting factor SN(fm, i, j) is calculated by
  • $$S_{N}(f_{m},i,j)=\frac{S(f_{m},i,j)}{\displaystyle\sum_{f_{n}=f_{1}}^{f_{N}}S(f_{n},i,j)}.\qquad(29)$$
  • Using the band evaluation value Q(k, fn, i, j) and the weighting factor SN(fm, i, j), the contrast evaluation value D(k, i, j) is calculated by
  • $$D(k,i,j)=\sum_{f_{m}=f_{1}}^{f_{N}}\frac{1}{S_{N}(f_{m},i,j)}\,Q(k,f_{m},i,j).\qquad(30)$$
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j). Note that in this example, when calculating the weighting factor SN(fm, i, j), the variance is divided by the sum of the variances for all frequency bands, as indicated by equation (29). However, the weighting factor SN(fm, i, j) may be obtained by dividing the variance by the sum of the variances not for all frequency bands but for some frequency bands.
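  • Under the same array-layout assumption as the sketch for the first example, equations (28) to (30) could be written as follows; the small constant is added only to avoid division by zero and is not part of the equations, and the function name is hypothetical.

```python
import numpy as np

def contrast_fifth_example(Q, eps=1e-12):
    """Contrast evaluation following equations (28) to (30); Q has shape (K, N, H, W)."""
    L = Q.mean(axis=0)                               # average over focus positions (equation (22))
    S = ((Q - L[None]) ** 2).sum(axis=0)             # variation over focus positions (equation (28))
    SN = S / (S.sum(axis=0, keepdims=True) + eps)    # equation (29)
    D = (Q / (SN[None] + eps)).sum(axis=1)           # equation (30)
    return D
```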
  • Sixth Example
  • The statistical information value calculated in step S404, the weighting factor calculated in step S405, and the contrast evaluation value calculated in step S406 according to the sixth example will be described. In this example, the statistical information value is the variance of the band evaluation values Q(k, fn, i, j) at all focus positions for each of the regions and frequency bands, as in the fifth example. That is, the variance S(fn, i, j) is calculated by equation (28). In this example, the variance S(fn, i, j) is used as the weighting factor. Hence, the contrast evaluation value D(k, i, j) is calculated by
  • $$D(k,i,j)=\sum_{f_{m}=f_{1}}^{f_{N}}\frac{1}{S(f_{m},i,j)}\,Q(k,f_{m},i,j).\qquad(31)$$
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j).
  • Seventh Example
  • The statistical information value calculated in step S404, the weighting factor calculated in step S405, and the contrast evaluation value calculated in step S406 according to the seventh example will be described. In this example, the statistical information value is the variance of the band evaluation values Q(k, fn, i, j) at all focus positions for each of the regions and frequency bands, as in the fifth example. That is, the variance S(fn, i, j) is calculated by equation (28). In this example, a relative value of the variance S(fn, i, j) to the predetermined frequency band f0 is used as the weighting factor. That is, the weighting factor SN(fm, i, j) is calculated by
  • $$S_{N}(f_{m},i,j)=\frac{S(f_{m},i,j)}{S(f_{0},i,j)}.\qquad(32)$$
  • As the band f0, any value out of n=1 to N is usable. The contrast evaluation value D(k, i, j) is calculated by equation (30) using the weighting factor SN(fm, i, j).
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j).
  • According to the fifth to seventh examples, since the weighting factor SN(fm, i, j) is calculated for each region, these examples are particularly effective when the band characteristic is not constant among the regions of an image. According to these examples, the variance S(fn, i, j) is calculated for each frequency band as the statistical information value. When the band evaluation value Q(k, fn, i, j) does not include the information necessary for evaluating the contrast or includes noise, its variation over the focus positions, and hence the variance S(fn, i, j), becomes relatively small. According to the fifth to seventh examples, the weight for the band evaluation value Q(k, fn, i, j) that does not include the necessary information or includes noise becomes small. It is therefore possible to prevent a band evaluation value Q(k, fn, i, j) that, for example, does not include the necessary information from affecting the contrast evaluation value. As a result, an accurate contrast evaluation value is obtained, and accurate depth information estimation is implemented based on it.
  • Note that when the variation is used, the weighting factor may be set to 1 or 0 depending on whether to meet a predetermined condition, as in the fourth example. That is, whether to use the band evaluation value Q(k, fn, i, j) is determined in accordance with whether the condition is met. In this case as well, the same effect as in the fifth to seventh examples can be obtained.
  • Eighth Example
  • The statistical information value calculated in step S404, the weighting factor calculated in step S405, and the contrast evaluation value calculated in step S406 according to the eighth example will be described. In the first to fourth examples, the average L(fn, i, j) is determined for each region as the statistical information value. In the eighth example, however, the statistical information value is the average in a whole image A for each band. That is, the average L(fn) is calculated by
  • $$L(f_{n})=\frac{1}{A}\,\frac{1}{K}\sum_{i,j\in A}\sum_{k=1}^{K}Q(k,f_{n},i,j).\qquad(33)$$
  • The weighting factor is a value obtained by dividing the average L(fn) by the sum of the averages L(fn) for all frequency bands. That is, a weighting factor LN(fm) is calculated by
  • $$L_{N}(f_{m})=\frac{L(f_{m})}{\displaystyle\sum_{n=1}^{N}L(f_{n})}.\qquad(34)$$
  • Using the band evaluation value Q(k, fn, i, j) and the weighting factor LN(fm), the contrast evaluation value D(k, i, j) is calculated by
  • $$D(k,i,j)=\sum_{f_{m}=f_{1}}^{f_{N}}L_{N}(f_{m})\cdot Q(k,f_{m},i,j).\qquad(35)$$
  • In step S407, the in-focus evaluation unit 370 estimates the depth information based on the contrast evaluation value D(k, i, j).
  • According to this example, when the difference in band characteristic between regions is small, the calculation amount can be reduced effectively. In this example as well, the average may be divided not by the sum of the averages for all frequency bands as indicated by equation (34), but by the sum of the averages for some frequency bands or by the average for a specific frequency band. In equation (35), L(fm) may be used in place of LN(fm). LN(fm) may also be set to 1 or 0. As in the fifth to seventh examples, the average of the variances in the whole region A of the image may be used.
  • Modification of Fourth Embodiment
  • A modification of the fourth embodiment will be described. Points of difference from the fourth embodiment will be explained here. The same reference numbers denote the same parts, and a description thereof will be omitted. In the image processing system 300 according to this modification, the band processing unit 320 performs wavelet transformation instead of filter processing with a filter bank.
  • In the wavelet transformation, filter processing having a specific directivity is performed on an original image as shown on the left side of FIG. 11, thereby acquiring band-separated images A, B, and C, as shown on the right side of FIG. 11. The filter processing having the specific directivity is performed again on an image obtained by reducing the filter residual image, thereby acquiring images D, E, and F. Such processing is repeated to acquire images G, H, and I and images J, K, L, and M. When such transformation processing is executed, image data represented at multiple resolutions is created, as shown on the right side of FIG. 11. With this wavelet transformation, an amount corresponding to the gain of a specific band is associated with each region of the image, as in the fourth embodiment.
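  • A multi-resolution decomposition of this kind can be sketched with the PyWavelets library (the use of PyWavelets, the Haar wavelet, and three decomposition stages are assumptions made only for this illustration; the patent does not specify a particular wavelet or implementation). The detail images produced at each stage play the role of the band-separated images A, B, C, D, E, F, and so on in FIG. 11.

```python
import numpy as np
import pywt  # PyWavelets, assumed here purely for illustration

image = np.random.rand(256, 256)  # stand-in for one captured image

# Three-stage 2-D wavelet transform. coeffs[0] is the final low-pass
# residual; each following entry is a tuple of the horizontal, vertical,
# and diagonal detail images of one stage (coarsest stage first).
coeffs = pywt.wavedec2(image, wavelet='haar', level=3)

residual = coeffs[0]
for stage, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(f"stage {stage}: detail images of shape {cH.shape}")
```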
  • FIG. 12 is a flowchart illustrating an example of processing of the image processing system 300 according to this modification. In step S501, the image acquisition unit 310 acquires a plurality of images obtained by capturing a single object while changing the focus position and stores the images in the storage unit 314. In step S502, the band processing unit 320 performs wavelet transformation for the plurality of images at different focus positions stored in the storage unit 314. The band processing unit 320 outputs the transformation result to the band characteristics evaluation unit 330. In step S503, the band characteristics evaluation unit 330 calculates an evaluation value for each region (p, q) of the plurality of images that have undergone the wavelet transformation. That is, the wavelet coefficient at stage n of the transformation is set as the band evaluation value Q(k, n, p, q) for each region (p, q), that is, for each data I(k, p, q). The band characteristics evaluation unit 330 outputs the band evaluation value Q(k, n, p, q) to the statistical information calculation unit 340.
  • In step S504, the statistical information calculation unit 340 calculates a statistical information value. In this modification, the average of the band evaluation values Q(k, n, p, q) at all focus positions k=1, 2, . . . , K in each band is defined as the statistical information value L(fn). That is, the statistical information value L(fn) is calculated by
  • L(f_n) = \frac{1}{F_n} \frac{1}{K} \sum_{(p,q) \in F_n} \sum_{k=1}^{K} Q(k, n, p, q),   (36)
  • where Fn represents the size of the image corresponding to the number n of stages of wavelet transformation. The statistical information calculation unit 340 outputs the calculated statistical information value to the weighting factor calculation unit 350.
  • In step S505, the weighting factor calculation unit 350 calculates a weighting factor corresponding to each band based on the statistical information value L(fn) input from the statistical information calculation unit 340. In this modification, the weighting factor LN(fm) is calculated by
  • L_N(f_m) = \frac{L(f_m)}{\sum_{f_n = f_1}^{f_N} L(f_n)}   (37)
  • The weighting factor calculation unit 350 outputs the calculated weighting factor LN(fm) to the contrast evaluation unit 360.
  • In step S506, the contrast evaluation unit 360 multiplies the band evaluation value Q(k, n, p, q) input from the band characteristics evaluation unit 330 by the weighting factor LN(fm) of the corresponding frequency band input from the weighting factor calculation unit 350, and performs inverse transformation, thereby calculating the contrast evaluation value D(k, i, j) for each region (i, j) of the images before the wavelet transformation. The contrast evaluation unit 360 outputs the calculated contrast evaluation value D(k, i, j) to the in-focus evaluation unit 370.
  • In step S507, the in-focus evaluation unit 370 evaluates the in-focus state based on the contrast evaluation value D(k, i, j), as in the fourth embodiment, and outputs the depth information of each pixel to the 3D shape estimation unit 380. In step S508, the 3D shape estimation unit 380 performs optimization, such as smoothing, of the depth information, estimates the 3D shape of the object, and outputs the estimated 3D shape to the image synthesis unit 390. In step S509, the image synthesis unit 390 synthesizes the plurality of images at different focus positions based on the 3D shape of the object and the plurality of images, thereby creating a synthesized image.
  • According to this modification as well, the same effect as in the fourth embodiment can be obtained.
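  • The following sketch strings steps S502 to S506 together for a stack of images at K focus positions. It is illustrative only: the use of PyWavelets, the mean absolute detail coefficient as a stand-in for the average in equation (36), and the zeroing of the low-pass residual before the inverse transformation are assumptions made for this example and are not prescribed by the modification described above.

```python
import numpy as np
import pywt  # PyWavelets, assumed for illustration

def contrast_from_wavelet(images, wavelet='haar', level=3):
    """Sketch of steps S502-S506 for a focus stack (illustrative only).

    images : sequence of 2-D arrays, one per focus position k = 1..K.
    Returns an array of shape (K, H, W) of contrast evaluation values.
    """
    stacks = [pywt.wavedec2(img, wavelet=wavelet, level=level) for img in images]

    # Statistical information per stage: mean absolute detail coefficient,
    # averaged over all K decompositions (a simplification of equation (36)).
    L = np.array([
        np.mean([np.mean(np.abs(np.stack(c[s]))) for c in stacks])
        for s in range(1, level + 1)
    ])
    LN = L / (L.sum() + 1e-12)        # weighting factors, cf. equation (37)

    D = []
    for c in stacks:
        # Step S506: multiply each stage's detail coefficients by its weight,
        # zero the low-pass residual, and inverse-transform.
        weighted = [np.zeros_like(c[0])]
        for s in range(1, level + 1):
            weighted.append(tuple(LN[s - 1] * band for band in c[s]))
        D.append(pywt.waverec2(weighted, wavelet=wavelet))
    return np.stack(D)
```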
  • Fifth Embodiment
  • The fifth embodiment of the present invention will be described. Points of difference from the fourth embodiment will be explained here. The same reference numbers denote the same parts, and a description thereof will be omitted. The fifth embodiment provides a microscope system 400 comprising the image processing system 300 according to the fourth embodiment.
  • FIG. 13 shows the outline of an example of the configuration of the microscope system 400 according to this embodiment. As shown in FIG. 13, the microscope system 400 includes a microscope 210 and the image processing system 300 according to the fourth embodiment. The microscope 210 is, for example, a digital microscope. The microscope 210 includes an LED light source 211, an illumination optical system 212, an optical path control element 213, an objective lens 214, a sample surface 215 placed on a stage (not shown), an observation optical system 218, an imaging plane 219, an imaging unit 220, and a controller 222. The observation optical system 218 includes a zoom optical system 216 and an imaging optical system 217. The objective lens 214, the optical path control element 213, the zoom optical system 216, and the imaging optical system 217 are arranged in this order on the observation optical path from the sample surface 215 to the imaging plane 219.
  • Illumination light emitted by the LED light source 211 enters the optical path control element 213 via the illumination optical system 212. The optical path control element 213 reflects the illumination light toward the objective lens 214 on the observation optical path. The illumination light irradiates a sample placed on the sample surface 215 via the objective lens 214.
  • When irradiated with the illumination light, the sample generates observation light. The observation light is reflected light, fluorescence, or the like. The observation light enters the optical path control element 213. Unlike the illumination light, the observation light passes through the optical path control element 213 and enters the observation optical system 218, which includes the zoom optical system 216 and the imaging optical system 217. The optical path control element 213 is an optical element that reflects or passes incident light in accordance with its characteristic. As the optical path control element 213, for example, a polarizer such as a wire grid, or a polarizing beam splitter (PBS), which reflects or passes incident light in accordance with its polarization direction, is used. Note that as the optical path control element 213, for example, a dichroic mirror that reflects or passes incident light in accordance with its frequency may be used.
  • The observation optical system 218 condenses the observation light on the imaging plane 219, and forms an image of the sample on the imaging plane 219. The imaging unit 220 generates an image signal based on the image formed on the imaging plane 219, and outputs the image signal as a microscopic image to the image acquisition unit 310. The controller 222 controls the operations of the microscope 210. In this embodiment, the microscope 210 acquires a plurality of microscopic images of a single sample captured on different focal planes. Hence, the controller 222 causes the imaging unit 220 to acquire an image of the sample on each focal plane while controlling the optical system of the microscope 210 to gradually change the focal plane. More specifically, for example, the controller 222 causes the imaging unit 220 to acquire each image while changing the height of the stage or the height of the objective lens of the microscope 210. The controller 222 outputs the acquired images, together with the focus position information associated with each image, to the image acquisition unit 310.
  • The operation of the microscope system 400 according to this embodiment will be described. The sample is placed on the stage (not shown), thereby setting the sample surface 215. The controller 222 controls the microscope 210. The controller 222 gradually changes the focal position of the optical system for the sample by, for example, gradually changing the position of the sample surface 215 in the optical axis direction. More specifically, for example, the controller 222 changes the height of the stage, the height of the objective lens, or the position of the focus lens of the microscope 210. At this time, the controller 222 causes the imaging unit 220 to sequentially acquire a microscopic image of the sample at each focus position. The image acquisition unit 310 acquires the microscopic image of the sample at each focus position from the imaging unit 220. The image acquisition unit 310 also acquires, from the controller 222, the focus position at the time of capture of each image. The image acquisition unit 310 stores the acquired microscopic images in the storage unit 314 in association with the focus positions.
  • Processing of creating a synthesized image by synthesizing the plurality of images at different focus positions based on the microscopic images stored in the storage unit 314 is the same as that of the fourth embodiment. In this embodiment, the microscope system 400 creates a synthesized image, for example, a 3D reconstructed image or an all-in-focus image of the microscopic images. An image synthesis unit 390 outputs the created synthesized image to, for example, a display unit for display or a storage device for storage. With the 3D reconstructed image or the all-in-focus image, the user can easily recognize an object whose depth is larger than the depth of field of a general microscopic image.
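  • The per-region selection behind such an all-in-focus composition can be sketched as follows (the array layout, the argmax-based selection, and the function name are assumptions made only for this illustration; they do not restate the exact synthesis procedure of the embodiment).

```python
import numpy as np

def all_in_focus(images, D):
    """Compose an all-in-focus image from a focus stack (illustrative sketch).

    images : ndarray of shape (K, H, W), the stack at K focus positions.
    D      : ndarray of shape (K, H, W), contrast evaluation values.
    Returns the synthesized image and the per-pixel depth index map.
    """
    depth = D.argmax(axis=0)                    # in-focus index per pixel
    rows, cols = np.indices(depth.shape)
    return images[depth, rows, cols], depth
```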
  • As described above, for example, the illumination optical system 212, the optical path control element 213, the objective lens 214, the observation optical system 218, and the like function as a microscope optical system. For example, the imaging unit 220 functions as an imaging unit configured to acquire an image of a sample via the microscope optical system as a sample image.
  • In general, the image enlargement ratio of the optical system of a microscope is higher than that of the optical system of a digital camera. For this reason, the band of the optical system of the microscope is sometimes not much higher than the sampling band of the image sensor of the camera upon micrography. The band of the optical system can change depending on the numerical aperture, magnification, and the like of the optical system. For example, when the microscope has an optical zoom system, the band of the optical system changes as well. According to this embodiment, the statistical information calculation unit 340 calculates a statistical information value in consideration of the frequency band of the image. The weighting factor calculation unit 350 calculates the weighting factor based on the statistical information value. That is, since the contrast evaluation unit 360 determines the contrast evaluation value based on the evaluation value calculated by the band characteristics evaluation unit 330 and the weighting factor calculated in consideration of the frequency band of the image, an accurate contrast evaluation value can be determined. This allows the microscope system 400 to accurately create the 3D reconstructed microscopic image or all-in-focus microscopic image. If the optical system of the microscope 210 includes an optical zoom system, the numerical aperture changes depending on the focal length of the optical zoom system, and the band of the microscopic image accordingly changes. For this reason, the embodiment is particularly effective in that case.
  • Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.

Claims (11)

1. An image processing system comprising:
an image acquisition unit configured to acquire a plurality of images obtained by capturing a single object at different focus positions;
a band characteristics evaluation unit configured to calculate, for each of regions of the plurality of images, a band evaluation value of a band included in the images for each of a plurality of frequency bands;
a statistical information calculation unit configured to calculate, for each of the plurality of the frequency bands, statistical information using the band evaluation values of at least two focus positions;
a weighting factor calculation unit configured to calculate, for each of the plurality of the frequency bands, weighting factors corresponding to the band evaluation values based on the statistical information;
a contrast evaluation unit configured to calculate contrast evaluation values for each of the regions in the plurality of images based on the band evaluation values and the weighting factors; and
an in-focus evaluation unit configured to select an in-focus region out of the regions of the plurality of images based on the contrast evaluation values.
2. The image processing system according to claim 1, wherein the band evaluation values are amounts corresponding to amplitude in each of the frequency bands.
3. The image processing system according to claim 1, wherein
the statistical information calculation unit further calculates the statistical information for each of the regions,
the weighting factor calculation unit calculates the weighting factors for each of the regions, and
the contrast evaluation unit calculates the contrast evaluation values based on the band evaluation values of each of the regions and the weighting factors of each of the regions.
4. The image processing system according to claim 1, wherein
the statistical information is an average of the band evaluation values corresponding to the plurality of images at the different focus positions.
5. The image processing system according to claim 1, wherein
the statistical information is a relative value obtained by dividing an average of the band evaluation values corresponding to the plurality of images at the different focus positions by a sum of the averages for at least one of the frequency bands.
6. The image processing system according to claim 1, wherein
the statistical information is a relative value obtained by dividing an average of the band evaluation values corresponding to the plurality of images at the different focus positions by a sum of the averages for all of the frequency bands.
7. The image processing system according to claim 1, wherein
the statistical information is a variation of the band evaluation values corresponding to the plurality of images at the different focus positions.
8. The image processing system according to claim 1, wherein
the statistical information is a relative value obtained by dividing a variation of the band evaluation values corresponding to the plurality of images at the different focus positions by a sum of the variations for at least one of the frequency bands.
9. The image processing system according to claim 1, wherein
the statistical information is a relative value obtained by dividing a variation of the band evaluation values corresponding to the plurality of images at the different focus positions by a sum of the variations for all of the frequency bands.
10. A microscope system comprising:
a microscope optical system;
an imaging unit configured to acquire an image of a sample via the microscope optical system as a sample image; and
the image processing system of claim 1 which is configured to acquire the sample image as the image.
11. A microscope system according to claim 10,
wherein the optical system includes a variable magnification optical system.
US15/338,852 2012-03-28 2016-10-31 Image processing system and microscope system including the same Abandoned US20170046846A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/338,852 US20170046846A1 (en) 2012-03-28 2016-10-31 Image processing system and microscope system including the same

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2012-075081 2012-03-28
JP2012-073354 2012-03-28
JP2012073354A JP5914092B2 (en) 2012-03-28 2012-03-28 Image processing system and microscope system including the same
JP2012075081A JP5868758B2 (en) 2012-03-28 2012-03-28 Image processing system and microscope system including the same
US13/788,526 US9509977B2 (en) 2012-03-28 2013-03-07 Image processing system and microscope system including the same
US15/338,852 US20170046846A1 (en) 2012-03-28 2016-10-31 Image processing system and microscope system including the same

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/788,526 Division US9509977B2 (en) 2012-03-28 2013-03-07 Image processing system and microscope system including the same

Publications (1)

Publication Number Publication Date
US20170046846A1 true US20170046846A1 (en) 2017-02-16

Family

ID=49234442

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/788,526 Active 2035-01-06 US9509977B2 (en) 2012-03-28 2013-03-07 Image processing system and microscope system including the same
US15/338,852 Abandoned US20170046846A1 (en) 2012-03-28 2016-10-31 Image processing system and microscope system including the same

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/788,526 Active 2035-01-06 US9509977B2 (en) 2012-03-28 2013-03-07 Image processing system and microscope system including the same

Country Status (1)

Country Link
US (2) US9509977B2 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9509977B2 (en) * 2012-03-28 2016-11-29 Olympus Corporation Image processing system and microscope system including the same
EP2999988A4 (en) 2013-05-23 2017-01-11 S.D. Sight Diagnostics Ltd. Method and system for imaging a cell sample
CN107077732B (en) * 2014-08-27 2020-11-24 思迪赛特诊断有限公司 System and method for calculating focus variation for digital microscope
CN110769731B (en) * 2017-06-15 2022-02-25 奥林巴斯株式会社 Endoscope system, processing system for endoscope, and image processing method
EP3462408A1 (en) * 2017-09-29 2019-04-03 Thomson Licensing A method for filtering spurious pixels in a depth-map
JP7121506B2 (en) * 2018-03-14 2022-08-18 株式会社日立ハイテク SEARCHING DEVICE, SEARCHING METHOD AND PLASMA PROCESSING DEVICE
CN110674604B (en) * 2019-09-20 2022-07-08 武汉大学 Transformer DGA data prediction method based on multi-dimensional time sequence frame convolution LSTM

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4136011B2 (en) 1996-04-30 2008-08-20 オリンパス株式会社 Depth of focus extension device
JP3961729B2 (en) 1999-03-03 2007-08-22 株式会社デンソー All-focus imaging device
US7711510B2 (en) * 2002-10-24 2010-05-04 Lecroy Corporation Method of crossover region phase correction when summing signals in multiple frequency bands
US8259216B2 (en) * 2007-08-29 2012-09-04 Panasonic Corporation Interchangeable lens camera system having autofocusing function
JP2010166247A (en) 2009-01-14 2010-07-29 Olympus Corp Image processor, image processing program, and image processing method
JP5418777B2 (en) * 2010-02-19 2014-02-19 富士ゼロックス株式会社 Image processing apparatus and image processing program
US20130315462A1 (en) * 2010-10-26 2013-11-28 Christian Wachinger Use of a Two-Dimensional Analytical Signal in Sonography
US9942534B2 (en) * 2011-12-20 2018-04-10 Olympus Corporation Image processing system and microscope system including the same
US9509977B2 (en) * 2012-03-28 2016-11-29 Olympus Corporation Image processing system and microscope system including the same

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10969562B2 (en) * 2017-10-18 2021-04-06 Olympus Corporation Observation device and focus adjustment method
US11796785B2 (en) 2018-05-01 2023-10-24 Nanotronics Imaging, Inc. Systems, devices and methods for automatic microscope focus
TWI827841B (en) * 2018-05-01 2024-01-01 美商奈米創尼克影像公司 Systems, devices and methods for automatic microscope focus

Also Published As

Publication number Publication date
US20130258058A1 (en) 2013-10-03
US9509977B2 (en) 2016-11-29

Similar Documents

Publication Publication Date Title
US9509977B2 (en) Image processing system and microscope system including the same
JP6214183B2 (en) Distance measuring device, imaging device, distance measuring method, and program
KR101233013B1 (en) Image photographing device, distance computing method for the device, and focused image acquiring method
US9942534B2 (en) Image processing system and microscope system including the same
US9992478B2 (en) Image processing apparatus, image pickup apparatus, image processing method, and non-transitory computer-readable storage medium for synthesizing images
US8983221B2 (en) Image processing apparatus, imaging apparatus, and image processing method
EP3371741B1 (en) Focus detection
JP5635844B2 (en) Focus adjustment apparatus and imaging apparatus
JP5374119B2 (en) Distance information acquisition device, imaging device, and program
Shih Autofocus survey: a comparison of algorithms
JP2013084152A (en) Method for generating omnifocal image, omnifocal image generating device, omnifocal image generating program, method for acquiring subject height information, subject height information acquiring device, and subject height information acquiring program
CN114424102A (en) Image processing apparatus and method for use in an autofocus system
JP2017010095A (en) Image processing apparatus, imaging device, image processing method, image processing program, and recording medium
JP6317635B2 (en) Image processing apparatus, image processing method, and image processing program
JP6239985B2 (en) Image processing apparatus, image processing program, and imaging apparatus
JP5965638B2 (en) Microscope system with image processing system
JP5369729B2 (en) Image processing apparatus, imaging apparatus, and program
JP5846895B2 (en) Image processing system and microscope system including the same
JP5914092B2 (en) Image processing system and microscope system including the same
US20160162753A1 (en) Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
JP2014044117A (en) Distance information acquisition device, imaging apparatus, distance information acquisition method, and program
JPH10170817A (en) Method and device for calculating focal position, and electron microscope using the same
JP2017130167A (en) Image processing device, imaging device, and image processing program
JP5868758B2 (en) Image processing system and microscope system including the same
Wendland Shape from focus image processing approach based 3D model construction of manufactured part

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION