WO2015190013A1 - Image processing apparatus, imaging apparatus, microscope system, image processing method, and image processing program - Google Patents
Image processing apparatus, imaging apparatus, microscope system, image processing method, and image processing program
- Publication number: WO2015190013A1 (PCT/JP2014/081787)
- Authority: WIPO (PCT)
- Prior art keywords: image, shading, shading component, region, luminance
Classifications
- G02B21/36 — Microscopes arranged for photographic, projection, digital imaging or video purposes, including associated control and data processing arrangements
- G02B21/362 — Mechanical details, e.g. mountings for the camera or image sensor, housings
- G02B21/365 — Control or image processing arrangements for digital or video microscopes
- G02B21/367 — Output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
- G02B21/0024, G02B21/0032 — Confocal scanning microscopes; optical details of illumination, e.g. light sources, pinholes, beam splitters, slits, fibers
- G02B21/025 — Objectives with variable magnification
- G02B21/082 — Condensers for incident illumination only
- G02B21/248 — Base structure: objective (or ocular) turrets
- G02B21/26 — Stages; adjusting means therefor
- G02B21/34 — Microscope slides, e.g. mounting specimens on microscope slides
- G02B5/005 — Optical elements other than lenses: diaphragms
- G06T5/00, G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T11/60 — 2D image generation: editing figures and text; combining figures or text
- G06T2207/10056 — Image acquisition modality: microscopic image
- G06T2207/20221 — Image fusion; image merging
- H04N25/61 — Noise processing in solid-state image sensors: noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
Definitions
- The present invention relates to an image processing apparatus, an imaging apparatus, a microscope system, an image processing method, and an image processing program that perform image processing on images acquired by imaging a specimen or the like.
- In recent years, a so-called virtual slide technique has come into use, in which an image of a specimen placed on a slide glass is recorded as electronic data so that a user can observe the image on the monitor of a personal computer or the like.
- In the virtual slide technique, a high-resolution image showing the entire specimen is constructed by sequentially stitching together partial images of the specimen magnified by a microscope.
- That is, the virtual slide technique acquires a plurality of images of the same subject with different fields of view and joins them to generate an image with an enlarged field of view of the subject.
- The microscope includes a light source for illuminating the specimen and an optical system for magnifying the specimen image.
- An image sensor that converts the magnified specimen image into electronic data is provided downstream of the optical system.
- Because the illuminance of the light source and the characteristics of the optical system are not uniform across the field of view, unevenness of brightness arises in the acquired image. This unevenness is called shading, and the image usually becomes darker with increasing distance from the center of the image, which corresponds to the position of the optical axis of the optical system. For this reason, when a virtual slide image is created by stitching a plurality of images together, unnatural boundaries appear between the images. Moreover, since the shading pattern repeats across the stitched images, it can look as if a periodic pattern exists on the specimen itself.
- To address this, a shading correction technique is known in which a shading pattern is acquired in advance as a calibration image and an image showing the specimen is corrected based on that calibration image.
- For example, Patent Document 1 discloses a shading correction technique in which imaging is performed with the specimen retracted outside the angle of view of the optical system during transmitted-illumination observation, or with a reflecting member placed within the angle of view during incident-illumination observation,
- and the image acquired in such a state is used as the calibration image.
- Patent Document 2 discloses a method of acquiring shading correction data by imaging a uniform fluorescent sample as a calibration sample during fluorescence observation.
- Patent Document 3 discloses a technique in which a reference visual-field image, i.e., an image of a predetermined visual-field range of the sample, is captured; the sample is then moved relative to the optical system, and a plurality of peripheral visual-field images, i.e., images of visual-field ranges that differ from the predetermined range but each include a predetermined region of it, are captured;
- and the correction gain of each pixel of the reference visual-field image is calculated based on the reference visual-field image and the peripheral visual-field images.
- However, the technique of Patent Document 1 requires a dedicated configuration for acquiring the calibration image, such as a drive mechanism for retracting the specimen, which complicates the apparatus; the technique of Patent Document 2 likewise requires a calibration sample to be prepared and imaged in advance.
- In Patent Document 3, no consideration is given to reducing the number of imaging operations performed while moving the sample, or to reducing the time required for the shading correction performed for each combination of the reference visual-field image and a peripheral visual-field image.
- The present invention has been made in view of the above, and an object thereof is to provide an image processing apparatus, an imaging apparatus, a microscope system, an image processing method, and an image processing program capable of performing shading correction in a short time and with high accuracy using a simple configuration.
- To solve the problem described above, an image processing apparatus according to the present invention includes: an image acquisition unit that acquires first and second image groups, each including a plurality of images in which part of the subject is shared with at least one other image in first and second directions, respectively, the directions being different from each other;
- a shading component calculation unit that, for each of the first and second image groups, calculates as a shading component the ratio of the luminance in a region of another image, in which a subject common to a reference region is captured, to the luminance in that reference region, the reference region being a region of one image that includes a flat region in which the shading component is constant;
- and an image correction unit that performs shading correction on the regions of the image using the shading component.
- The shading component includes a normalized shading component based on the luminance in the flat region and a non-normalized shading component based on the luminance in a region other than the flat region, and the image correction unit performs the shading correction based on the normalized shading component and the non-normalized shading component.
- In the image processing apparatus, the shading component calculation unit may include: a first shading component calculation unit that calculates the shading component based on the first image group, using the luminance in the regions that include the flat region of the first image and are aligned with the flat region in the second direction; and a second shading component calculation unit that calculates the shading component based on the second image group, using the luminance in the regions that include the flat region of the first image and are aligned with the flat region in the first direction.
- In the image processing apparatus, the image correction unit may include: a first image correction unit that performs the shading correction, using the normalized shading component, on the first regions of the image, namely the regions for which the normalized shading component has been calculated;
- and a second image correction unit that performs the shading correction on the remaining regions using the non-normalized shading component together with the normalized shading component calculated for the region used as the reference when calculating that non-normalized shading component.
- In the image processing apparatus, the second image correction unit may perform the shading correction using the non-normalized shading component calculated from one of the first and second image groups and the normalized shading component calculated from the other of the first and second image groups.
- In the image processing apparatus, when the shading component for a partial region in one image is known, the shading component calculation unit may calculate the shading component for a region in another image using the luminance in the partial region, the luminance in the region of the other image in which a subject common to the partial region is captured, and the known shading component.
- In the image processing apparatus, the shading component calculation unit may calculate, for each of the first and second image groups, the shading components for regions in a plurality of the other images based on a plurality of combinations of the first image with another image, and average the calculated shading components.
- In the image processing apparatus, the shading component calculation unit may calculate the shading component, for each of the first and second image groups, based on a plurality of combinations of the first image with another image whose common regions, i.e., the regions in which a subject common to both images is captured, have texture components that differ between the combinations.
- In the image processing apparatus, the shading component calculation unit may cumulatively add the luminance of the corresponding regions across the plurality of images for each of the first and second image groups, and calculate the shading component using the cumulative sums.
- In the image processing apparatus, the shading component calculation unit may calculate a plurality of shading components based on the plurality of combinations and average them.
- In the image processing apparatus, the shading component calculation unit may calculate the shading component for a region in the image using the two non-normalized shading components calculated for that region based on the first and second image groups
- and the two normalized shading components respectively calculated for the two regions used as references when calculating those non-normalized shading components.
- The image processing apparatus may further include a flat region search unit that searches for the flat region based on the luminance gradients of the pixels included in a region in which a common subject is captured in both the first image and the other image.
- the first and second directions are orthogonal to each other.
- An imaging apparatus according to the present invention includes: the image processing apparatus; an optical system that forms an image of the subject; a moving unit that moves at least one of the subject and the optical system so as to move the field of view of the optical system relative to the subject; and an imaging unit that images the subject.
- The image acquisition unit acquires the first and second image groups by performing control that causes the imaging unit to capture images while the field of view is moved in the first and second directions, respectively.
- A microscope system according to the present invention includes the imaging apparatus and a stage on which the subject is placed, and the moving unit moves at least one of the stage and the optical system.
- In the microscope system, the image processing apparatus may further include a virtual slide creation unit that creates a virtual slide image by stitching together images, included in the first and second image groups, that share part of their fields of view.
- An image processing method according to the present invention includes: an image acquisition step of acquiring first and second image groups, each including a plurality of images in which part of the subject is shared with at least one other image in first and second directions different from each other;
- a shading component calculation step of calculating, for each of the first and second image groups, as a shading component the ratio of the luminance in a region of another image, in which a common subject is captured, to the luminance in a region of one image that includes a flat region in which the shading component is constant;
- and an image correction step of performing shading correction on the regions of the image using the shading component.
- The shading component includes a normalized shading component based on the luminance in the flat region and a non-normalized shading component based on the luminance in a region other than the flat region,
- and the image correction step performs the shading correction based on the normalized shading component and the non-normalized shading component.
- An image processing program according to the present invention causes a computer to execute: an image acquisition step of acquiring first and second image groups, each including a plurality of images in which part of the subject is shared with at least one other image in first and second directions different from each other;
- a shading component calculation step of calculating, for each of the first and second image groups, as a shading component the ratio of the luminance in a region of another image, in which a common subject is captured, to the luminance in a region of one image that includes a flat region in which the shading component is constant;
- and an image correction step of performing shading correction on the regions of the image using the shading component.
- The shading component includes a normalized shading component based on the luminance in the flat region and a non-normalized shading component based on the luminance in a region other than the flat region, and the image correction step performs the shading correction based on the normalized shading component and the non-normalized shading component.
- According to the present invention, the first and second image groups, in which part of the subject is shared with at least one other image in the first and second directions respectively, are acquired, the normalized and non-normalized shading components are calculated based on the luminance in regions that include the flat region, and the image is corrected using these components.
- The number of images required for calculating the shading components can therefore be reduced, and shading correction can be performed with a simple configuration, in a short time, and with high accuracy.
- FIG. 1 is a block diagram showing a configuration example of an image processing apparatus according to Embodiment 1 of the present invention.
- FIG. 2 is a schematic diagram for explaining the operation of the image acquisition unit shown in FIG.
- FIG. 3 is a schematic diagram for explaining the principle of image processing executed by the image processing unit shown in FIG.
- FIG. 4 is a flowchart showing the operation of the image processing apparatus shown in FIG.
- FIG. 5 is a schematic diagram for explaining the amount of movement by which the imaging field of view is moved every time imaging is performed.
- FIG. 6 is a schematic diagram for explaining an imaging method of a subject.
- FIG. 7 is a schematic diagram for explaining an imaging method of a subject.
- FIG. 8 is a schematic diagram illustrating five images acquired by performing imaging five times while moving the imaging field of view in the horizontal direction.
- FIG. 9 is a schematic diagram showing shading components in the horizontal direction stored in the storage unit shown in FIG.
- FIG. 10 is a schematic diagram showing shading components in the vertical direction stored in the storage unit shown in FIG.
- FIG. 11 is a schematic diagram for explaining image correction processing executed by the image correction unit shown in FIG.
- FIG. 12 is a flowchart showing in detail an image correction process executed by the image correction unit shown in FIG.
- FIG. 13 is a schematic diagram for explaining another example of the shading correction process performed by the second image correction unit illustrated in FIG. 1.
- FIG. 14 is a schematic diagram for explaining an image capturing method used for calculation of a shading component in Embodiment 2 of the present invention.
- FIG. 15 is a schematic diagram for explaining a shading component calculation method according to Embodiment 2 of the present invention.
- FIG. 16 is a schematic diagram for explaining a shading component calculation method according to Embodiment 3 of the present invention.
- FIG. 17 is a schematic diagram for explaining a shading component calculation method according to Embodiment 4 of the present invention.
- FIG. 18 is a block diagram showing a configuration example of an image processing apparatus according to Embodiment 7 of the present invention.
- FIG. 19 is a schematic diagram showing a horizontal image created by the flat region search unit shown in FIG. 18.
- FIG. 20 is a schematic diagram showing a vertical image created by the flat region search unit shown in FIG. 18.
- FIG. 21 is a schematic diagram showing the horizontal image shown in FIG. 19 in units of pixels.
- FIG. 22 is a schematic diagram showing the vertical image shown in FIG. 20 in units of pixels.
- FIG. 23 is a diagram schematically showing the horizontal shading components stored in the storage unit shown in FIG. 18.
- FIG. 24 is a diagram schematically showing the vertical shading components stored in the storage unit shown in FIG. 18.
- FIG. 25 is a diagram illustrating a configuration example of a microscope system according to the eighth embodiment of the present invention.
- FIG. 26 is a schematic diagram for explaining an operation of acquiring a plurality of images in the eighth embodiment of the present invention.
- FIG. 1 is a block diagram showing a configuration example of an image processing apparatus according to Embodiment 1 of the present invention.
- The image processing apparatus 1 according to Embodiment 1 includes an image acquisition unit 11 that acquires images of the subject to be observed, an image processing unit 12 that performs image processing on those images, and a storage unit 13.
- the image acquisition unit 11 acquires a plurality of images having different imaging fields of view for the subject.
- The image acquisition unit 11 may acquire the plurality of images directly from an imaging apparatus, or via a network or from a storage device. In Embodiment 1, the image acquisition unit 11 is assumed to acquire the images directly from an imaging apparatus.
- The type of imaging apparatus is not specifically limited; for example, it may be a microscope apparatus provided with an imaging function, or a digital camera.
- FIG. 2 is a schematic diagram for explaining the operation of the image acquisition unit 11, and shows the optical system 30 of the imaging apparatus, the subject SP, and the imaging field of view V of the optical system 30.
- In FIG. 2, for convenience, the optical system 30 is drawn displaced from its actual position in front of the subject SP and the imaging field of view V, in order to show the positional relationship between the optical system 30 and the imaging field of view V clearly.
- In the following, the direction parallel to one side of the imaging field of view V (the left-right direction in FIG. 2) is defined as the horizontal direction, and the direction orthogonal to that side (the up-and-down direction in FIG. 2) is defined as the vertical direction.
- the image acquisition unit 11 includes an imaging control unit 111 that controls the imaging operation of the imaging apparatus, and a drive control unit 112 that performs control to change the position of the imaging field of view V with respect to the subject SP.
- the drive control unit 112 changes the position of the imaging field of view V with respect to the subject SP by relatively moving either or both of the optical system 30 and the subject SP.
- the imaging control unit 111 causes the imaging device to perform imaging at a predetermined timing, and captures an image M in which the subject in the imaging field of view V is captured from the imaging device.
- In Embodiment 1, the imaging field of view V is moved in two mutually orthogonal directions, horizontal and vertical.
- However, as long as the imaging field of view V is moved in two different directions, the moving directions are not limited to the horizontal and vertical directions; moreover, the two directions in which the imaging field of view V is moved need not be orthogonal.
- FIG. 3 is a schematic diagram for explaining the principle of image processing executed by the image processing unit 12.
- the coordinates (x, y) shown in FIGS. 3A to 3C indicate the position of each pixel constituting the image M.
- In the image M acquired by the imaging apparatus, unevenness of brightness or color arises everywhere except in a partial region at the center, caused by non-uniform illuminance of the light source, non-uniformity of the optical system, non-uniform characteristics of the imaging element, and the like. This brightness or color unevenness is called shading.
- The luminance I(x, y) of the image M can be expressed as the product of the texture component T(x, y), which represents the subject itself, and the shading component S(x, y), that is, I(x, y) = T(x, y) × S(x, y); the luminance, texture component, and shading component are defined for each color signal at each coordinate.
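- As a concrete illustration of this multiplicative model, the following sketch (hypothetical NumPy code, not part of the patent text; the radial shading profile is an assumption) synthesizes an observed image as the product of a texture component and a shading component that darkens away from the center, and recovers the texture by division:

```python
import numpy as np

def radial_shading(h, w, strength=0.4):
    """Toy shading component: 1.0 at the image center, darkening
    toward the periphery as typical lens shading does."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # 0 at center, ~1 at corner
    return 1.0 - strength * r**2

rng = np.random.default_rng(0)
T = rng.uniform(0.2, 1.0, size=(100, 100))   # texture component T(x, y)
S = radial_shading(100, 100)                 # shading component S(x, y)
I = T * S                                    # observed luminance I = T x S

# Shading correction: once S is known, the texture is recovered by division.
T_recovered = I / S
assert np.allclose(T_recovered, T)
```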
- the image processing unit 12 uses the plurality of images acquired by the image acquisition unit 11 to execute image processing for correcting shading generated in the image.
- The image processing unit 12 includes a shading component calculation unit 121 that calculates the shading components arising in the image M(x, y), and an image correction unit 122 that performs shading correction using the shading components.
- the shading component calculation unit 121 includes a first direction shading component calculation unit 121a and a second direction shading component calculation unit 121b.
- the first direction shading component calculation unit 121a calculates a shading component from a plurality of images acquired by moving the imaging field of view V for the subject SP in a first direction (for example, the horizontal direction).
- the second direction shading component calculation unit 121b calculates a shading component from a plurality of images acquired by moving the imaging field of view V for the subject SP in the second direction (for example, the vertical direction).
- the image correction unit 122 includes a first image correction unit 122a and a second image correction unit 122b.
- The first image correction unit 122a performs shading correction on part of the regions in the image acquired by the image acquisition unit 11, using either the shading component calculated by the first direction shading component calculation unit 121a or the shading component calculated by the second direction shading component calculation unit 121b.
- The second image correction unit 122b performs shading correction on the regions of the image not corrected by the first image correction unit 122a, using both the shading component calculated by the first direction shading component calculation unit 121a and the shading component calculated by the second direction shading component calculation unit 121b. The regions to be corrected by the first image correction unit 122a and the second image correction unit 122b, and the specific correction processing, will be described later.
- The storage unit 13 is constituted by a rewritable storage device such as a semiconductor memory, for example a flash memory, a RAM, or a ROM.
- the storage unit 13 stores various parameters used by the image acquisition unit 11 to control the imaging apparatus, image data of an image subjected to image processing by the image processing unit 12, various parameters calculated by the image processing unit 12, and the like.
- The image acquisition unit 11 and the image processing unit 12 may be realized by dedicated hardware, or by a CPU and programs that cause the CPU to execute predetermined processing. In the latter case, the image processing program that causes the image acquisition unit 11 and the image processing unit 12 to execute the predetermined processing, together with the various parameters and settings used during its execution, may be stored in the storage unit 13.
- Alternatively, the image processing program and the parameters may be stored in a storage device connected to the image processing apparatus 1 via a data communication terminal, the storage device comprising a recording medium such as a hard disk, an MO, a CD-R, or a DVD-R, and a writing/reading device that writes and reads information to and from the recording medium.
- FIG. 4 is a flowchart showing the operation of the image processing apparatus 1.
- an image showing the subject SP shown in FIG. 2 is acquired, and correction processing is performed on the image.
- First, in step S1, the image acquisition unit 11 acquires a plurality of images generated by imaging the subject SP while moving the imaging field of view V by a predetermined amount in each of two different directions.
- Specifically, the drive control unit 112 moves the imaging field of view V in a predetermined direction by moving either the subject SP or the optical system 30, and the imaging control unit 111 performs control so that, in the moving direction, at least part of the imaging field of view V overlaps that of another image. In the following, the imaging field of view V is moved in the horizontal direction and in the vertical direction.
- FIG. 5 is a schematic diagram for explaining the amount of movement for moving the imaging field of view V every time imaging is performed.
- The size of each block into which the imaging field of view V is divided can be determined according to the size of the flat region (described later), i.e., the region in which shading hardly occurs in the image and the shading component can be regarded as constant, the required shading correction accuracy, and the like.
- the number of divisions of the imaging visual field V in the horizontal direction and the vertical direction may be the same or different.
- In the following, the coordinates of each block in the image are denoted (X, Y). In Embodiment 1, 1 ≤ X ≤ 5 and 1 ≤ Y ≤ 5.
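- For reference, the block addressing used below can be sketched as follows (hypothetical helper code; the block size Bw × Bh in pixels and the 5 × 5 grid are the assumptions of this embodiment):

```python
import numpy as np

Bw = Bh = 5   # assumed block size in pixels

def block(img, X, Y):
    """View of image `img` covering block (X, Y); block coordinates are
    1-based, X indexing columns and Y indexing rows as in the text."""
    return img[(Y - 1) * Bh : Y * Bh, (X - 1) * Bw : X * Bw]

img = np.arange(25 * 25, dtype=float).reshape(25, 25)  # 5x5 grid of 5x5 blocks
flat = block(img, 3, 3)   # central block (3, 3): the flat region
print(flat.shape)         # (5, 5)
```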
- FIGS. 6 and 7 are schematic diagrams for explaining the method of imaging the subject SP. In FIGS. 6 and 7, for convenience, the optical system 30 is drawn displaced from its actual position in front of the subject SP and the imaging field of view V in order to clarify the position of the imaging field of view V on the subject SP; the figures show the positional relationship between the optical system 30 and the imaging field of view V at each position.
- As shown in FIG. 6, the image acquisition unit 11 executes imaging every time the imaging field of view V moves by the length Bw in the horizontal direction, so that each image shares a common region of the imaging field of view V with the immediately preceding image.
- FIG. 8 is a schematic diagram showing five images M 0 to M 4 acquired by performing imaging five times while moving the imaging visual field V in the horizontal direction.
- the images M 0 to M 4 are arranged vertically along the imaging order, and are shifted to the left and right so that blocks having the same texture component are aligned vertically.
- When moving the imaging field of view V, the position of the subject SP may be fixed and the optical system 30 moved, the position of the optical system 30 may be fixed and the subject SP moved, or both the subject SP and the optical system 30 may be moved in mutually opposite directions.
- Between images other than those adjacent in the imaging order, all or part of the subject in the imaging field of view V may be common, or none of it may be common at all.
- In subsequent step S2, the shading component calculation unit 121 takes in the plurality of images acquired in step S1 and uses them to calculate shading components for each of the horizontal and vertical directions.
- The calculation of a shading component is normally performed using, as a reference, the luminance in a region of the image in which shading hardly occurs and almost no change of the shading component is observed (hereinafter referred to as a flat region).
- Specifically, the shading component is obtained by dividing the luminance of each pixel included in the calculation target region by the luminance of the corresponding pixel included in a flat region having a texture component common to the target region.
- In Embodiment 1, the shading component is calculated for each column or each row, using as a reference the luminance in the column or row of blocks that includes the flat region.
- That is, between images acquired by moving the imaging field of view V one block at a time in the horizontal direction, common regions arise in units of columns; in this case, therefore, the shading component is calculated using as a reference the luminance in the blocks of the column that includes the flat region, i.e., the blocks aligned with the flat region in the vertical direction.
- Hereinafter, the shading component (luminance ratio) calculated based on the common regions arising in the horizontal direction is also referred to as the horizontal shading component.
- Similarly, between images acquired by moving the imaging field of view V one block at a time in the vertical direction, the shading component is calculated using as a reference the luminance in the blocks of the row that includes the flat region, i.e., the blocks aligned with the flat region in the horizontal direction.
- This shading component (luminance ratio) is also referred to as the vertical shading component.
- In Embodiment 1, processing is performed on the assumption that a flat region exists at the center of the image and that the shading component changes concentrically. Specifically, among the blocks (1,1) to (5,5) into which the image M is divided, the central block (3,3) is treated as the flat region.
- First, the first direction shading component calculation unit 121a extracts, from one of the images M0 to M4 shown in FIG. 8, the column that includes the flat-region block (3,3); extracts, from another image, the blocks (that is, the common region) in which the same subject as those blocks is captured; and calculates the horizontal shading components using the luminance of the pixels at corresponding positions in the blocks extracted from the two images.
- For example, the shading components at an arbitrary pixel in the blocks (1,1), (1,2), (1,3), (1,4), (1,5) in the first column of the image M0 are denoted Sh(1,1), Sh(1,2), Sh(1,3), Sh(1,4), Sh(1,5), and the luminances of that pixel are denoted H0(1,1), H0(1,2), H0(1,3), H0(1,4), H0(1,5).
- The luminances of the pixel at the corresponding position in the blocks (3,1), (3,2), (3,3), (3,4), (3,5) in the third column of the image M2, in which the same subject is captured, are denoted H2(3,1), H2(3,2), H2(3,3), H2(3,4), H2(3,5).
- The shading components Sh(1,1) to Sh(1,5) at arbitrary pixels in the blocks (1,1) to (1,5) are then given by the following equations (1a) to (1e):
  Sh(1,1) = H0(1,1) / H2(3,1)   (1a)
  Sh(1,2) = H0(1,2) / H2(3,2)   (1b)
  Sh(1,3) = H0(1,3) / H2(3,3)   (1c)
  Sh(1,4) = H0(1,4) / H2(3,4)   (1d)
  Sh(1,5) = H0(1,5) / H2(3,5)   (1e)
- In equations (1a) to (1e), the shading component at any pixel in a block is given by dividing the luminance of that pixel in the block written in the numerator by the luminance of the pixel at the corresponding position in the block written in the denominator; operations on pixels at corresponding positions in different blocks are written compactly as operations between blocks.
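- As a sketch of equations (1a) to (1e) in code (hypothetical NumPy implementation; the toy texture, the shading profile, and the direction of the field-of-view shift are assumptions), the texture cancels in the pixel-wise ratio of block columns showing the same subject, leaving the horizontal shading components of column 1:

```python
import numpy as np

Bw = Bh = 5                       # assumed block size in pixels
NX = NY = 5                       # 5x5 block grid; the flat block is (3, 3)
H, W = NY * Bh, NX * Bw

rng = np.random.default_rng(1)
texture = rng.uniform(0.2, 1.0, size=(H, W + 2 * Bw))      # wide subject strip
shading = (1.0 - 0.3 * np.linspace(-1, 1, W)**2)[None, :]  # toy horizontal shading

M0 = texture[:, 2 * Bw : 2 * Bw + W] * shading   # field of view for image M0
M2 = texture[:, 0 : W] * shading                 # field moved two blocks for M2

def col(img, X):
    """Pixels of the X-th block column (1-based)."""
    return img[:, (X - 1) * Bw : X * Bw]

# Equations (1a)-(1e): column 1 of M0 and column 3 of M2 show the same
# subject, so their pixel-wise ratio cancels the texture and leaves the
# shading components Sh(1, Y) of block column 1 for every pixel.
Sh_col1 = col(M0, 1) / col(M2, 3)
assert np.allclose(Sh_col1, shading[:, 0:Bw] / shading[:, 2*Bw : 3*Bw])
```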
- FIG. 9 is a schematic diagram showing the shading component Sh in the horizontal direction stored in the storage unit 13.
- In FIG. 9, hatching is applied to the column that includes the central flat region (that is, the column used as the reference for calculating the shading components).
- Among these, the shading components Sh(1,3), Sh(2,3), Sh(4,3), and Sh(5,3) are calculated using as a reference the luminance of the pixels at corresponding positions in the block (3,3), which is the flat region. In the following, a shading component calculated using the luminance of the pixels in the flat-region block is referred to as a normalized shading component.
- The other shading components are calculated using as a reference the luminance of the pixels at corresponding positions in the blocks (3,1), (3,2), (3,4), (3,5) of the third column other than the flat region. For example, the shading component Sh(1,1) of the block (1,1) is calculated using the luminance H2(3,1) of the pixels in the block (3,1).
- In the following, a shading component calculated using the luminance of the pixels in a block other than the flat region is referred to as a non-normalized shading component.
- Subsequently, the second direction shading component calculation unit 121b calculates the vertical shading components based on five images acquired by imaging the subject SP five times while moving the imaging field of view V by the length Bh in the vertical direction each time.
- That is, the row that includes the flat-region block (3,3) is extracted from one of these five images, the blocks (common region) in which the same subject as those blocks is captured are extracted from another image,
- and the vertical shading components are calculated using the luminance of the pixels at corresponding positions in the blocks extracted from the two images.
- FIG. 10 is a schematic diagram showing the shading component Sv in the vertical direction stored in the storage unit 13.
- In FIG. 10, hatching is applied to the row that includes the central flat region (that is, the row used as the reference for calculating the shading components).
- Among these, the shading components Sv(3,1), Sv(3,2), Sv(3,4), and Sv(3,5) are normalized shading components calculated using as a reference the luminance of the pixels at corresponding positions in the block (3,3), which is the flat region.
- The other shading components are non-normalized shading components calculated using the luminance of the pixels at corresponding positions in the blocks (1,3), (2,3), (4,3), (5,3) of the third row other than the flat region.
- In the above description, images are acquired in step S1 by moving the imaging field of view in each of the horizontal and vertical directions, and the horizontal and vertical shading components are then calculated in sequence in step S2;
- however, the order of processing is not limited to this.
- For example, after images are acquired by moving the imaging field of view in the horizontal direction, the horizontal shading components may be calculated using those images; the imaging field of view may then be moved in the vertical direction to acquire further images, and the vertical shading components calculated thereafter using the newly acquired images.
- The calculation of the horizontal shading components and the acquisition of images while moving the imaging field of view in the vertical direction may also be performed in parallel. Furthermore, the processing for the vertical direction may precede the processing for the horizontal direction.
- FIG. 11 is a schematic diagram for explaining the image correction processing executed by the image correction unit 122.
- In the following, shading correction of the image M illustrated in FIG. 11 will be described; the luminance of an arbitrary pixel in the block (X, Y) of the image M is denoted H(X, Y).
- FIG. 12 is a flowchart showing in detail the image correction processing executed by the image correction unit 122.
- First, in step S31, the first image correction unit 122a corrects, using the normalized shading components, the luminance of each pixel in the blocks of the image M for which a normalized shading component has been obtained.
- As described above, the blocks for which the horizontal normalized shading components Sh (see FIG. 9) are obtained are (1,3), (2,3), (4,3), (5,3),
- and the blocks for which the vertical normalized shading components Sv (see FIG. 10) are obtained are (3,1), (3,2), (3,4), (3,5). Therefore, if the flat-region block is denoted (X0, Y0), a block for which a normalized shading component is obtained can be written as (X, Y0) or (X0, Y).
- The first image correction unit 122a divides the luminance H(X, Y0) of an arbitrary pixel in a block (X, Y0), for which the horizontal normalized shading component Sh is obtained, by the normalized shading component Sh(X, Y0) at that pixel position, thereby calculating the texture component T(X, Y0) at that pixel, as in equation (2-1):
  T(X, Y0) = H(X, Y0) / Sh(X, Y0)   (2-1)
- Similarly, the first image correction unit 122a divides the luminance H(X0, Y) of an arbitrary pixel in a block (X0, Y), for which the vertical normalized shading component Sv is obtained, by the normalized shading component Sv(X0, Y) at that pixel position, thereby calculating the texture component T(X0, Y) at that pixel, as in equation (2-2):
  T(X0, Y) = H(X0, Y) / Sv(X0, Y)   (2-2)
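- A sketch of this first correction step (hypothetical code; the per-pixel shading maps Sh and Sv are random stand-ins for the components calculated in step S2):

```python
import numpy as np

Bw = Bh = 5
NX = NY = 5
X0 = Y0 = 3                                 # flat-region block (3, 3)

rng = np.random.default_rng(2)
img = rng.uniform(0.1, 1.0, size=(NY * Bh, NX * Bw))  # image M, luminance H
Sh = rng.uniform(0.5, 1.0, size=img.shape)  # horizontal shading map (stand-in)
Sv = rng.uniform(0.5, 1.0, size=img.shape)  # vertical shading map (stand-in)
T = np.full_like(img, np.nan)               # texture; NaN = not yet corrected

def blk(a, X, Y):
    return a[(Y - 1) * Bh : Y * Bh, (X - 1) * Bw : X * Bw]

# Equation (2-1): blocks (X, Y0) in the row of the flat region carry a
# normalized horizontal shading component.
for X in range(1, NX + 1):
    blk(T, X, Y0)[:] = blk(img, X, Y0) / blk(Sh, X, Y0)

# Equation (2-2): blocks (X0, Y) in the column of the flat region carry a
# normalized vertical shading component.
for Y in range(1, NY + 1):
    blk(T, X0, Y)[:] = blk(img, X0, Y) / blk(Sv, X0, Y)
```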
- In subsequent step S32, the second image correction unit 122b corrects, using the normalized shading components and the non-normalized shading components, the luminance of each pixel in the blocks of the image M for which no normalized shading component has been obtained.
- For example, the shading component Sh(1,1) calculated for a pixel in the block (1,1) is, as shown in FIG. 11 and equation (1a), a non-normalized shading component calculated using as a reference the luminance H2(3,1) of the pixel at the corresponding position in the block (3,1), which is the common region of the block (1,1) in the horizontal direction.
- The shading component contained in the luminance H2(3,1) of the pixels in the block (3,1) is, in turn, given by the normalized shading component Sv(3,1), calculated using as a reference the luminance of the pixel at the corresponding position in the block (3,3) of the flat region, which is the common region of the block (3,1) in the vertical direction.
- Therefore, the texture component T(1,1) at an arbitrary pixel in the block (1,1) of the image M is given, from the luminance H(1,1) of that pixel in the block (1,1) of the image M, the non-normalized shading component Sh(1,1) at that pixel position, and the normalized shading component Sv(3,1) at the corresponding pixel position in the block (3,1), by the following equation (3):
  T(1,1) = H(1,1) / (Sh(1,1) × Sv(3,1))   (3)
- FIG. 13 is a schematic diagram for explaining another example of the shading correction process performed by the second image correction unit 122b.
- Specifically, the shading component Sv(1,1) calculated for a pixel in the block (1,1) is a non-normalized shading component calculated using as a reference the luminance of the pixel at the corresponding position in the block (1,3), which is the common region of the block (1,1) in the vertical direction.
- The shading component contained in the luminance of the pixels in the block (1,3) is, in turn, given by the normalized shading component Sh(1,3), calculated using as a reference the luminance of the pixel at the corresponding position in the block (3,3) of the flat region, which is the common region of the block (1,3) in the horizontal direction.
- Therefore, the texture component T(1,1) at an arbitrary pixel in the block (1,1) of the image M is also given, from the luminance H(1,1) of that pixel, the non-normalized shading component Sv(1,1) at that pixel position, and the normalized shading component Sh(1,3) at the corresponding pixel position in the block (1,3), by the following equation (4):
  T(1,1) = H(1,1) / (Sv(1,1) × Sh(1,3))   (4)
- Generalizing, the texture component T(X,Y) at an arbitrary pixel in a block (X,Y) for which no normalized shading component is obtained is given either from the luminance H(X,Y) of that pixel, the horizontal non-normalized shading component Sh(X,Y), and the normalized shading component Sv(X0,Y) calculated for the block of the horizontal common region, or from H(X,Y), the vertical non-normalized shading component Sv(X,Y), and the normalized shading component Sh(X,Y0) calculated for the block of the vertical common region, according to the following equations (5) and (6):
  T(X,Y) = H(X,Y) / (Sh(X,Y) × Sv(X0,Y))   (5)
  T(X,Y) = H(X,Y) / (Sv(X,Y) × Sh(X,Y0))   (6)
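- Continuing the sketch above for the remaining blocks (hypothetical code; equation (5) is shown, equation (6) being symmetric):

```python
import numpy as np

Bw = Bh = 5
NX = NY = 5
X0 = Y0 = 3

rng = np.random.default_rng(3)
img = rng.uniform(0.1, 1.0, size=(NY * Bh, NX * Bw))
Sh = rng.uniform(0.5, 1.0, size=img.shape)   # horizontal shading map (stand-in)
Sv = rng.uniform(0.5, 1.0, size=img.shape)   # vertical shading map (stand-in)
T = np.full_like(img, np.nan)

def blk(a, X, Y):
    return a[(Y - 1) * Bh : Y * Bh, (X - 1) * Bw : X * Bw]

for X in range(1, NX + 1):
    for Y in range(1, NY + 1):
        if X == X0 or Y == Y0:
            continue   # corrected in step S31 with a normalized component
        # Equation (5): divide by the non-normalized Sh(X, Y) times the
        # normalized Sv(X0, Y) of the reference block (X0, Y), taken at
        # the corresponding pixel positions.
        blk(T, X, Y)[:] = blk(img, X, Y) / (blk(Sh, X, Y) * blk(Sv, X0, Y))
```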
- The order of steps S31 and S32 is not limited to that described above; step S32 may be executed first, or steps S31 and S32 may be executed in parallel.
- As described above, according to Embodiment 1 the number of images required for calculating the shading components can be reduced compared with conventional techniques. The number of imaging operations needed to acquire these images can therefore be reduced, and highly accurate shading correction can be performed in a short time with a simple configuration. Furthermore, since the imaging field of view is simply moved in two directions, horizontal and vertical, control of the stage on which the subject SP is placed, or of the optical system, is simplified.
- The number of divisions of the image is not limited to five per direction. Increasing the number of divisions allows finer shading correction; conversely, decreasing it reduces the number of times the subject SP is imaged and the amount of computation in the shading component calculation and image correction processing, shortening the total time required for shading correction.
- In Embodiment 1 above, the shading component calculation unit 121 calculates the normalized and non-normalized shading components for each of the horizontal and vertical directions, and the image correction unit 122 calculates the texture component using whichever of equations (2-1), (2-2), (5), and (6) corresponds to the block being corrected. However, the shading component calculation unit 121 may instead create, and store in the storage unit 13, a map of shading components S(X, Y) that can be applied with a single texture-component formula regardless of the block being corrected.
- For a block for which a normalized shading component has been obtained, that normalized shading component is stored in the map as the shading component S(X, Y).
- For the other blocks, the value Sh(X,Y) × Sv(X0,Y) or Sv(X,Y) × Sh(X,Y0), calculated from a non-normalized shading component and the corresponding normalized shading component, is stored in the map as the shading component S(X, Y).
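- A sketch of this map-based variant (hypothetical code, following the sketches above): the map S(X, Y) is assembled once, after which every block is corrected with the single formula T = H / S:

```python
import numpy as np

Bw = Bh = 5
NX = NY = 5
X0 = Y0 = 3

rng = np.random.default_rng(4)
img = rng.uniform(0.1, 1.0, size=(NY * Bh, NX * Bw))
Sh = rng.uniform(0.5, 1.0, size=img.shape)
Sv = rng.uniform(0.5, 1.0, size=img.shape)

def blk(a, X, Y):
    return a[(Y - 1) * Bh : Y * Bh, (X - 1) * Bw : X * Bw]

S = np.empty_like(img)
for X in range(1, NX + 1):
    for Y in range(1, NY + 1):
        if Y == Y0:
            blk(S, X, Y)[:] = blk(Sh, X, Y)                   # normalized Sh
        elif X == X0:
            blk(S, X, Y)[:] = blk(Sv, X, Y)                   # normalized Sv
        else:
            blk(S, X, Y)[:] = blk(Sh, X, Y) * blk(Sv, X0, Y)  # Sh(X,Y)*Sv(X0,Y)

T = img / S   # one formula for every block, regardless of its position
```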
- FIG. 14 is a schematic diagram for explaining the image acquisition method used for calculating the shading components in Embodiment 2; it shows, in time series (t), the relative positions P0, P1, ... of the subject SP with respect to the imaging field of view V of the optical system 30 in the horizontal direction.
- In FIG. 14, the regions obtained by dividing the subject SP by the length Bw in the horizontal direction are denoted subject regions SP(1) to SP(5).
- In Embodiment 1, five images were acquired by performing imaging while moving the imaging field of view V of the optical system 30 relative to the subject SP by the length Bw (or the length Bh) at a time in each of the horizontal and vertical directions.
- In Embodiment 2, by contrast, nine images per direction are acquired by capturing images nine times while relatively moving the imaging field of view V by the same length Bw (or length Bh) at a time.
- FIG. 15 is a schematic diagram for explaining the shading component calculation method according to Embodiment 2. The images M(0), M(1), ... shown in FIG. 15A are captured when the subject SP shown in FIG. 14 is located at the relative positions P0, P1, ..., respectively. FIG. 15B is a schematic diagram showing the horizontal shading component Sh stored in the storage unit 13.
- The first direction shading component calculation unit 121a cumulatively adds, for each column, the luminances of the columns in which the same subject region is captured, beginning, for example, with the column X1 of the image M(0) in which the subject region SP(1) is captured, and calculates the horizontal shading component of each column from the ratio of the cumulative sums, as in equations (7-1) to (7-5).
- Since the second and subsequent terms of the numerator and denominator correspond to common regions having different texture components, equations (7-1) to (7-5) can be regarded as averaging a plurality of shading components calculated based on common regions with different texture components.
- the second direction shading component calculation unit 121b can also calculate a shading component in the vertical direction by executing the same processing as the first direction shading component calculation unit 121a.
- As described above, according to Embodiment 2, a highly robust shading component can be calculated, so that accurate correction can be performed stably without depending on the characteristics of the texture components in the image to be corrected. Furthermore, since the shading components are calculated after the luminance has been cumulatively added for each column or row, no additional memory is required and the arithmetic processing is simple.
- In Embodiment 2 above, the imaging field of view V is moved eight times in each direction and the shading components are calculated based on the nine images thus acquired; however, more images may be used by repeating the imaging further, which further improves the robustness of the shading components.
- Alternatively, instead of cumulative addition, a plurality of shading components may be calculated individually from the combinations of common regions relating to each column, and the horizontal shading component of the column may be obtained by averaging these shading components.
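- A sketch of the cumulative-addition calculation (hypothetical code; the exact pairing of columns in equations (7-1) to (7-5) is an assumption based on the common-region geometry): for each block column, the luminances of all column pairs showing the same subject are cumulatively added before the ratio is taken:

```python
import numpy as np

Bw = Bh = 5
NX = NY = 5
X0 = 3                      # reference (flat-region) block column
N_IMG = 9                   # nine images per direction in Embodiment 2
H, W = NY * Bh, NX * Bw

rng = np.random.default_rng(5)
strip = rng.uniform(0.2, 1.0, size=(H, W + (N_IMG - 1) * Bw))  # wide subject
shading = (1.0 - 0.3 * np.linspace(-1, 1, W)**2)[None, :]      # toy shading

# M[k]: the field of view shifted by k blocks; shading is fixed to the image.
M = [strip[:, k * Bw : k * Bw + W] * shading for k in range(N_IMG)]

def col(img, X):
    return img[:, (X - 1) * Bw : X * Bw]

Sh = np.ones((H, W))
for X in range(1, NX + 1):
    num = np.zeros((H, Bw))
    den = np.zeros((H, Bw))
    for a in range(N_IMG):
        b = a + (X - X0)          # image whose column X0 shows M[a]'s column X
        if 0 <= b < N_IMG:
            num += col(M[a], X)   # cumulative sum over the target column
            den += col(M[b], X0)  # cumulative sum over the reference column
    Sh[:, (X - 1) * Bw : X * Bw] = num / den   # ratio of cumulative sums

# The texture terms cancel pair by pair, so Sh(col X) = S(col X) / S(col X0).
assert np.allclose(Sh[:, 0:Bw], shading[:, 0:Bw] / shading[:, 2*Bw : 3*Bw])
```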
- The configuration of the image processing apparatus according to Embodiment 3 is the same overall as in Embodiment 1 (see FIG. 1); the details of the shading component calculation processing executed by the first direction shading component calculation unit 121a and the second direction shading component calculation unit 121b differ from those of Embodiment 1.
- FIGS. 16(a) to 16(c) are schematic diagrams for explaining the shading component calculation method according to Embodiment 3; they show three images acquired by performing imaging three times while moving the imaging field of view V by the length Bw in the horizontal direction, as in FIG. 6.
- The first direction shading component calculation unit 121a exploits the left-right symmetry of the common regions among the three images M0 to M2 and calculates the horizontal shading components according to equations (8-1) to (8-5).
- The second direction shading component calculation unit 121b can likewise calculate the vertical shading components from three images by executing the same processing as the first direction shading component calculation unit 121a.
- The image correction method using the horizontal and vertical normalized and non-normalized shading components calculated in this way is the same as in Embodiment 1 (see FIGS. 11 to 13).
- Alternatively, a map storing the shading components may be created and the image corrected using this map.
- According to Embodiment 3, the number of movements of the imaging field of view V and the number of imaging operations can be reduced, so that shading correction can be performed in an even shorter time.
- FIG. 17 is a schematic diagram for explaining a shading component calculation method according to the fourth embodiment.
- In Embodiment 4, as shown in FIG. 17, the imaging field of view V is first aligned with a certain region of the subject SP to capture an image M0, and the imaging field of view V is then shifted by the length Bw in the horizontal direction and imaging is performed again to acquire an image M1.
- Here, the length Bw is set so that at least the central portion of the image, which is the flat region, is contained in the common region of the two images M0 and M1.
- In Embodiment 4, unlike Embodiments 1 to 3, there is no need to shift the imaging field of view V two or more times in specified block units. Therefore, provided the above condition on the length Bw is satisfied, the imaging field of view V may be shifted by the user arbitrarily moving the stage on which the subject SP is placed in the horizontal direction; in that case, the arbitrary stage movement amount becomes the length Bw of one block.
- Alternatively, the shift amount between a pair of images selected from a group of images acquired continuously while moving the stage in the horizontal direction may be taken as the length Bw.
- In this case, the number of block divisions in the horizontal direction is determined by dividing the horizontal length w of the image by the length Bw of one block.
- The horizontal shading components are then calculated from the luminances of the pixels at corresponding positions in the common region of the images M0 and M1, such as the luminance H1(X2) of the column X2 of the image M1 in which the same subject as a column of the image M0 is captured.
- Similarly, the second direction shading component calculation unit 121b calculates shading components from an image obtained by performing imaging with the imaging field of view V aligned with a certain region of the subject SP, and an image obtained by shifting the imaging field of view V vertically relative to the first image by a predetermined distance, for example the length Bh corresponding to one block (see FIGS. 5 and 7). In this case as well, the length Bh is set so that at least the central portion of the image, which is the flat region, is contained in the common region, thereby securing a sufficient common region between the two images.
- As in the horizontal case, the imaging field of view V may be shifted by the user arbitrarily moving the stage on which the subject SP is placed in the vertical direction;
- in that case, the arbitrary stage movement amount becomes the length Bh of one block.
- Alternatively, the shift amount between a pair of images selected from a group of images acquired continuously while moving the stage in the vertical direction may be taken as the length Bh.
- In this case, the number of block divisions in the vertical direction is determined by dividing the vertical length h of the image by the length Bh of one block.
- One of the images captured before and after shifting the imaging field of view V may be shared with one of the images M0 and M1 used by the first direction shading component calculation unit 121a; that is, it suffices to newly acquire substantially only one image, obtained by shifting the imaging field of view V in the vertical direction relative to the image M0 or M1.
- The image correction method using the horizontal and vertical normalized and non-normalized shading components calculated in this way is the same as in Embodiment 1 (see FIGS. 11 to 13).
- Alternatively, a map storing the shading components may be created and the image corrected using this map.
- As described above, according to Embodiment 4, the shading components for the entire image can be calculated from two pairs of images each having a sufficient common region in the horizontal or vertical direction; and since one image of each pair can double as an image of the other pair, the shading components for the entire image can be calculated from as few as three images.
- In Embodiment 4 above, the horizontal shading components are calculated from a single pair of images obtained by shifting the field of view in the horizontal direction.
- Instead, a plurality of horizontal shading components at the same pixel position may be calculated from a plurality of pairs of images obtained by shifting the imaging field of view V in the horizontal direction, and the final horizontal shading component Sh may be obtained by averaging these horizontal shading components.
- In this case, the horizontal shift amounts of the pairs of images may be arbitrary. This suppresses the loss of accuracy of the shading component caused by image degradation such as random noise, overexposure, and blocked-up shadows.
- Similarly, a plurality of vertical shading components at the same pixel position may be calculated from a plurality of pairs of images obtained by shifting the imaging field of view V in the vertical direction, and the final vertical shading component Sv may be obtained by averaging these vertical shading components.
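- A sketch of this pairwise, averaged estimation (hypothetical code; chaining adjacent column ratios toward the reference column is an illustrative way to obtain a full-width shading component from a single image pair, not a procedure stated in the text):

```python
import numpy as np

Bw = 5
NX = 5
X0 = 3                         # reference (flat-region) block column
H, W = 25, NX * Bw
shading = (1.0 - 0.3 * np.linspace(-1, 1, W)**2)[None, :]
noise = np.random.default_rng(6)

def make_pair(seed):
    """One pair of images whose fields of view differ by one block in the
    horizontal direction (the text allows an arbitrary shift per pair;
    one block is used here for simplicity), with a little sensor noise."""
    strip = np.random.default_rng(seed).uniform(0.2, 1.0, size=(H, W + Bw))
    A = strip[:, Bw : Bw + W] * shading + noise.normal(0, 0.005, (H, W))
    B = strip[:, 0 : W] * shading + noise.normal(0, 0.005, (H, W))
    return A, B

def col(img, X):
    return img[:, (X - 1) * Bw : X * Bw]

def sh_from_pair(A, B):
    """Chain block-column ratios toward the reference column X0: column X
    of A and column X+1 of B show the same subject, so their pixel-wise
    ratio is S(X) / S(X+1)."""
    ratio = [col(A, X) / col(B, X + 1) for X in range(1, NX)]
    Sh = np.ones((H, W))
    for X in range(X0 - 1, 0, -1):      # columns left of the reference
        Sh[:, (X - 1) * Bw : X * Bw] = Sh[:, X * Bw : (X + 1) * Bw] * ratio[X - 1]
    for X in range(X0 + 1, NX + 1):     # columns right of the reference
        Sh[:, (X - 1) * Bw : X * Bw] = Sh[:, (X - 2) * Bw : (X - 1) * Bw] / ratio[X - 2]
    return Sh

# Average the per-pixel estimates from several pairs to suppress noise.
Sh = np.mean([sh_from_pair(*make_pair(s)) for s in range(5)], axis=0)
```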
- In Embodiment 1 above, the texture component T(X, Y) of an arbitrary pixel in a block for which no normalized shading component is obtained is calculated using one of equations (5) and (6).
- In Embodiment 5, instead, the shading component calculation unit 121 may calculate a combined shading component by weighting and combining the shading components used in equations (5) and (6).
- Specifically, the shading component consisting of the non-normalized shading component Sh(X,Y), calculated with the block of the horizontal common region as reference, and the normalized shading component Sv(X0,Y), calculated with the block (3,3) of the flat region, which is the vertical common region of that block, as reference, is defined as the shading component Shv1(X,Y), as in equation (11):
  Shv1(X,Y) = Sh(X,Y) × Sv(X0,Y)   (11)
- Similarly, a shading component composed of the non-normalized shading component Sv(X, Y), calculated with reference to a block in the common region in the vertical direction, and the normalized shading component Sh(X, Y0), calculated with reference to the block (3, 3) in the flat region serving as the common region in the horizontal direction, is defined as the shading component Shv2(X, Y) (see Expression (12)).
- Shv2(X, Y) = Sv(X, Y) × Sh(X, Y0) …(12)
- The combined shading component S(X, Y), obtained by weighting and combining these shading components Shv1(X, Y) and Shv2(X, Y), is given by the following Expression (13).
- Here, w(X, Y) is the weight used for combining the shading components.
- The weight w(X, Y) can be determined, for example, on the basis of the ratio of the sums of edge amounts, as shown in the following Expression (14).
- The parameter appearing in Expression (14) is a normalization coefficient.
- Edgeh[ ] represents the sum of the horizontal edge amounts of the horizontal shading component distribution within the target region (block (X, Y) or (X, Y0)).
- Edgev[ ] represents the sum of the vertical edge amounts of the vertical shading component distribution within the target region (block (X0, Y) or (X, Y)).
- When the sum of the edge amounts in the blocks (X, Y) and (X0, Y) used to calculate the shading component Shv1(X, Y) is smaller than that in the blocks used to calculate the shading component Shv2(X, Y), the value of the weight w(X, Y) is also small, so the contribution of the shading component Shv1 in Expression (13) increases.
- The second image correction unit 122b then calculates the texture component T(X, Y) for each block (X, Y) for which the normalized shading component has not been obtained, using the following Expression (15).
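The bodies of Expressions (13) to (15) are not reproduced above, so the following sketch assumes their natural reading: the combined component is a weighted sum S = (1 − w)·Shv1 + w·Shv2, the weight w follows the edge-amount ratio of Expression (14), and Expression (15) recovers the texture as T = I / S. The edge sums are approximated globally here for brevity:

```python
import numpy as np

def combined_shading(Sh, Sv, X0, Y0, lam=1.0, eps=1e-6):
    """Combine the shading components of Expressions (11) and (12).

    Sh, Sv: block-wise shading components indexed as [Y, X];
            row Sh[Y0, :] and column Sv[:, X0] are the normalized parts.
    """
    Shv1 = Sh * Sv[:, X0:X0 + 1]   # Expression (11): Sh(X, Y) * Sv(X0, Y)
    Shv2 = Sv * Sh[Y0:Y0 + 1, :]   # Expression (12): Sv(X, Y) * Sh(X, Y0)

    # Edge sums (assumed form): summed absolute finite differences of the
    # two shading distributions, used in the weight of Expression (14).
    edge_h = np.abs(np.diff(Sh, axis=1)).sum()
    edge_v = np.abs(np.diff(Sv, axis=0)).sum()
    w = np.clip(lam * edge_h / (edge_h + edge_v + eps), 0.0, 1.0)

    # Assumed Expression (13): a small w increases the contribution of Shv1.
    return (1.0 - w) * Shv1 + w * Shv2

def texture(block_luminance, S, eps=1e-6):
    """Assumed Expression (15): recover texture by dividing by the shading."""
    return block_luminance / np.maximum(S, eps)
```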
- As described above, according to the fifth embodiment, robust shading correction can be performed regardless of the direction (horizontal or vertical) in which the shading component is calculated.
- In the fifth embodiment, a smooth combined shading component S is calculated by setting the weight w(X, Y) according to Expression (14).
- An even smoother combined shading component S may be generated by additionally applying filter processing.
- Embodiments 1 to 5 described above can also be implemented in combination with each other.
- For example, from a combination of three consecutive images among the five images, shading components can be calculated in the same manner as in the third embodiment. Using the shading components calculated from the three such combinations, three combined shading components S(X, Y) are calculated in the same manner as in the fifth embodiment. As a result, one combined shading component based on the five images and three combined shading components S(X, Y) based on the combinations of three images are obtained for each block. Averaging these four combined shading components S(X, Y) yields a more robust combined shading component.
- Alternatively, a shading component may be calculated from nine images as in the second embodiment, and a combined shading component S(X, Y) may then be calculated as in the fifth embodiment.
- Furthermore, a shading component may be calculated from each combination of three consecutive images among the nine images in the same manner as in the third embodiment, and a combined shading component S(X, Y) may then be calculated as in the fifth embodiment.
- A more robust shading component can also be obtained by averaging these combined shading components S(X, Y).
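A sketch of this final averaging step (assuming the individual combined shading components have already been computed as same-shaped arrays):

```python
import numpy as np

def robust_shading(shading_maps):
    """Average combined shading components S(X, Y) from different image
    subsets (e.g. one from all five images, three from triples of
    consecutive images) into a single, more robust estimate."""
    return np.stack(shading_maps, axis=0).mean(axis=0)
```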
- FIG. 18 is a block diagram illustrating a configuration example of the image processing apparatus according to the seventh embodiment.
- As shown in FIG. 18, the image processing apparatus 2 according to the seventh embodiment includes, in place of the image processing unit 12 shown in FIG. 1, an image processing unit 21 in which a flat area search unit 211 is added to the image processing unit 12.
- In the first embodiment, the central area of the image is regarded as a flat region where the shading component is uniform, and the shading components in the blocks other than the central block are calculated with reference to it.
- In contrast, in the seventh embodiment, the shading component is calculated after searching for a flat region in the image.
- Prior to the search for the flat region, the image processing unit 21 fetches from the image acquisition unit 11 a group of images obtained by performing imaging while moving the imaging field of view by a predetermined amount in each of the horizontal and vertical directions, as in the first embodiment (for example, in the case of the horizontal direction, the images M0 to M4 shown in FIG. 8).
- The flat area search unit 211 obtains the vertical-direction image Fv shown in FIG. 20 from the luminances of corresponding pixels included in the blocks having the same texture component among the images acquired while moving the imaging field of view V in the vertical direction, and stores it in the storage unit 13.
- FIG. 21 is a schematic diagram showing the horizontal-direction image Fh shown in FIG. 19 in units of pixels, and FIG. 22 is a schematic diagram showing the vertical-direction image Fv shown in FIG. 20 in units of pixels. In FIGS. 21 and 22, each block (X, Y) is assumed to consist of 5 × 5 pixels for simplicity.
- Pixel coordinates are denoted by (x, y), and block coordinates by (X, Y).
- The block (X, Y) for which the gradient is to be calculated is referred to as ROI(X, Y).
- The flat area search unit 211 calculates, for each block (X, Y), the gradient N(x, y) of the shading component given by the following Expression (17) from the horizontal-direction image Fh and the vertical-direction image Fv.
- In Expression (17), the size (length) of each block in the horizontal direction is expressed in pixels, and the symbol Bh represents the size (length) of each block in the vertical direction, also expressed in pixels (see FIG. 5).
- The gradient of the shading component of each block is not limited to the gradient at the pixel located at the center of the ROI; it may instead be a statistical value (sum, average, maximum, mode, median, or the like) of the gradients of the shading component at all pixels in the ROI, or a statistical value (as above) of the gradients at some of the pixels in the ROI.
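Since Expression (17) itself is not reproduced above, the following sketch assumes N is a per-block statistic of the gradient magnitudes taken from the horizontal and vertical gradient images; the block with the smallest statistic is returned as the flat-region candidate:

```python
import numpy as np

def flattest_block(Fh, Fv, Bw, Bh):
    """Search for the flat region: the block with the smallest shading gradient.

    Fh, Fv: horizontal/vertical gradient images of the shading component
    Bw, Bh: block size in pixels (horizontal, vertical)
    """
    H, W = Fh.shape
    best, best_score = None, np.inf
    for Y in range(H // Bh):
        for X in range(W // Bw):
            roi_h = Fh[Y * Bh:(Y + 1) * Bh, X * Bw:(X + 1) * Bw]
            roi_v = Fv[Y * Bh:(Y + 1) * Bh, X * Bw:(X + 1) * Bw]
            # Statistic over the ROI: here the mean absolute gradient
            # (the text also allows sum, max, mode, or median).
            score = np.abs(roi_h).mean() + np.abs(roi_v).mean()
            if score < best_score:
                best, best_score = (X, Y), score
    return best
```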
- After the flat region has been found, the first-direction shading component calculation unit 121a and the second-direction shading component calculation unit 121b calculate the shading component in each direction with reference to the searched flat region (that is, assuming that its shading component is 1.0).
- FIG. 23 is a diagram schematically illustrating the horizontal shading component Sh stored in the storage unit 13, and FIG. 24 is a diagram schematically illustrating the vertical shading component Sv stored in the storage unit 13. In FIGS. 23 and 24, hatching is added to the columns and rows that include the flat region.
- The second image correction unit 122b calculates the texture component T(X, Y) in each block for which the normalized shading component has not been obtained, using the following Expression (20) or (21).
- In the seventh embodiment, the shading component in each block is calculated by taking the shading component in the searched flat region to be 1.0.
- Strictly speaking, however, the shading component in the flat region is not necessarily 1.0.
- In this case, the image correction unit 122 may normalize the texture components calculated by the shading correction with a uniform gain over the entire image.
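A sketch of such a uniform-gain normalization (the choice of anchoring the gain to the mean of the flat region is an assumption, not specified in the text):

```python
import numpy as np

def normalize_gain(texture, flat_region_mask=None):
    """Apply one uniform gain to the whole corrected image.

    If the shading component in the searched flat region was not truly 1.0,
    every texture value is off by the same constant factor; rescaling by a
    single gain removes that global offset without changing relative values.
    """
    if flat_region_mask is not None:
        ref = texture[flat_region_mask].mean()
    else:
        ref = texture.mean()
    return texture / ref
```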
- In the above description, the block including the pixel having the smallest gradient N(x, y) is taken as the flat region.
- Alternatively, a threshold may be set for the gradient N(x, y), and all blocks in which N(x, y) is equal to or less than the threshold may be treated as flat regions.
- In that case, the final shading component may be calculated by weighting and combining the shading components calculated with reference to the respective flat regions, based on the values of the gradient N(x, y).
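A sketch of this gradient-weighted combination; the inverse-gradient weighting is an assumed form, since the text only states that the weights are based on the values of N(x, y):

```python
import numpy as np

def weighted_flat_combination(shading_maps, gradients, eps=1e-6):
    """Combine shading components referenced to several flat regions.

    shading_maps: list of shading components, one per flat-region candidate
                  (blocks whose gradient N is at or below the threshold)
    gradients:    representative gradient value N for each candidate;
                  flatter regions (smaller N) receive larger weights
    """
    w = 1.0 / (np.asarray(gradients, dtype=np.float64) + eps)
    w /= w.sum()
    stack = np.stack(shading_maps, axis=0)
    return np.tensordot(w, stack, axes=1)
```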
- FIG. 25 is a diagram illustrating a configuration example of a microscope system according to the eighth embodiment.
- the microscope system 6 according to the eighth embodiment includes a microscope device 3, an image processing device 4, and a display device 5.
- The microscope apparatus 3 includes a substantially C-shaped arm 300 provided with an epi-illumination unit 301 and a transmitted illumination unit 302; a sample stage 303 attached to the arm 300, on which the subject SP to be observed is placed; an objective lens 304 provided, via a trinocular tube unit 308, on one end of a lens barrel 305 so as to face the sample stage 303; an imaging unit 306 provided on the other end of the lens barrel 305; and a stage position changing unit 307 that moves the sample stage 303.
- The trinocular tube unit 308 splits the observation light of the subject SP incident from the objective lens 304 toward the imaging unit 306 and an eyepiece unit 309 described later.
- the eyepiece unit 309 is for the user to directly observe the subject SP.
- the epi-illumination unit 301 includes an epi-illumination light source 301a and an epi-illumination optical system 301b, and irradiates the subject SP with epi-illumination light.
- the epi-illumination optical system 301b includes various optical members (filter unit, shutter, field stop, aperture stop, etc.) that collect the illumination light emitted from the epi-illumination light source 301a and guide it in the direction of the observation optical path L.
- the transmitted illumination unit 302 includes a transmitted illumination light source 302a and a transmitted illumination optical system 302b, and irradiates the subject SP with transmitted illumination light.
- the transmission illumination optical system 302b includes various optical members (filter unit, shutter, field stop, aperture stop, etc.) that collect the illumination light emitted from the transmission illumination light source 302a and guide it in the direction of the observation optical path L.
- The objective lens 304 is attached to a revolver 310 capable of holding a plurality of objective lenses of different magnifications (for example, the objective lenses 304 and 304′). The imaging magnification can be changed by rotating the revolver 310 to switch the objective lens 304 or 304′ facing the sample stage 303.
- The lens barrel 305 contains a zoom unit that includes a plurality of zoom lenses and a drive unit (neither is shown) for changing the positions of these zoom lenses. The zoom unit enlarges or reduces the subject image within the imaging field of view by adjusting the position of each zoom lens.
- An encoder may further be provided in the drive unit inside the lens barrel 305. In this case, the output value of the encoder may be supplied to the image processing device 4, which can then detect the position of the zoom lens from the encoder output and automatically calculate the imaging magnification.
- The imaging unit 306 is a camera that includes an image sensor such as a CCD or CMOS sensor and can capture a color image having pixel levels (pixel values) in each of the R (red), G (green), and B (blue) bands at each pixel of the sensor; it operates at predetermined timings under the control of the imaging control unit 111 of the image processing apparatus 4.
- the imaging unit 306 receives light (observation light) incident from the objective lens 304 via the optical system in the lens barrel 305, generates image data corresponding to the observation light, and outputs the image data to the image processing device 4.
- the imaging unit 306 may convert the pixel value represented in the RGB color space into a pixel value represented in the YCbCr color space and output the pixel value to the image processing device 4.
- the stage position changing unit 307 includes, for example, a ball screw (not shown) and a stepping motor 307a, and changes the imaging field of view by moving the position of the sample stage 303 in the XY plane. Further, the stage position changing unit 307 focuses the objective lens 304 on the subject SP by moving the sample stage 303 along the Z axis. Note that the configuration of the stage position changing unit 307 is not limited to the configuration described above, and for example, an ultrasonic motor or the like may be used.
- In the eighth embodiment, the position of the optical system including the objective lens 304 is fixed, and the imaging field of view on the subject SP is changed by moving the sample stage 303.
- Alternatively, a moving mechanism that moves the objective lens 304 within a plane orthogonal to the optical axis may be provided, so that the specimen stage 303 is fixed and the objective lens 304 side is moved to change the imaging field of view.
- Alternatively, both the specimen stage 303 and the objective lens 304 may be moved relative to each other.
- the image processing apparatus 4 includes an image acquisition unit 11, an image processing unit 41, and a storage unit 13. Among these, the configurations and operations of the image acquisition unit 11 and the storage unit 13 are the same as those in the first embodiment (see FIG. 1).
- The drive control unit 112 controls the position of the sample stage 303 by instructing its drive coordinates at a predetermined pitch, based on the value of a scale mounted on the sample stage 303. Alternatively, the position of the specimen stage 303 may be controlled based on the result of image matching, such as template matching, performed on images acquired by the microscope apparatus 3.
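A generic OpenCV sketch of such template matching (not the patent's own implementation; the ROI choice, grayscale input, and coordinate conversion are assumptions):

```python
import cv2

def estimate_stage_offset(prev_image, curr_image, roi):
    """Estimate the field-of-view shift between two frames by template matching.

    prev_image, curr_image: grayscale frames (same dtype, e.g. uint8)
    roi: (x, y, w, h) patch taken from prev_image and searched in curr_image
    """
    x, y, w, h = roi
    template = prev_image[y:y + h, x:x + w]
    result = cv2.matchTemplate(curr_image, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    # Shift of the patch in pixels; convert to stage coordinates as needed.
    dx, dy = max_loc[0] - x, max_loc[1] - y
    return dx, dy
```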
- Since the imaging field of view V is simply moved in the horizontal direction within the plane of the subject SP and then moved in the vertical direction, control of the sample stage 303 is very easy.
- The image processing unit 41 further includes a VS image creation unit 411 in addition to the components of the image processing unit 12 shown in FIG. 1.
- the VS image creation unit 411 creates a virtual slide (VS) image based on the plurality of images subjected to the shading correction by the image correction unit 122.
- A virtual slide image is a wide-field image created by stitching together a plurality of images having different imaging fields of view; the microscope system 6 thus has a virtual slide image creation function.
- If the images captured by the microscope apparatus 3 were stitched together as they are, unnatural boundaries would appear at the seams due to the shading that arises according to the characteristics of the optical system. Therefore, in the eighth embodiment, the images are stitched together after the shading correction by the image correction unit 122.
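A minimal sketch of the stitching step, assuming the tile positions on the canvas are already known from the stage coordinates or image matching; overlaps are simply overwritten here, whereas a real virtual-slide pipeline would typically blend the seams:

```python
import numpy as np

def stitch_tiles(tiles, positions, canvas_shape):
    """Paste shading-corrected tiles onto a common canvas.

    tiles:     list of 2-D corrected images
    positions: list of (x, y) top-left canvas coordinates for each tile
    """
    canvas = np.zeros(canvas_shape, dtype=np.float64)
    for tile, (x, y) in zip(tiles, positions):
        h, w = tile.shape
        canvas[y:y + h, x:x + w] = tile
    return canvas
```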
- the configurations and operations of the shading component calculation unit 121 and the image correction unit 122 are the same as those in the first embodiment.
- the shading component calculation unit 121 and the image correction unit 122 may be operated as in any of the second to fifth embodiments.
- As in the seventh embodiment, a flat area search unit 211 may be further provided.
- the image acquisition unit 11 and the image processing unit 12 may be configured by dedicated hardware, or may be configured by reading a predetermined program into hardware such as a CPU.
- In the latter case, a control program that causes the image acquisition unit 11 to control the imaging operation of the microscope apparatus 3, an image processing program that causes the image processing unit 12 to perform image processing including the shading correction, and the like may be stored in the storage unit 13.
- the display device 5 is configured by a display device such as an LCD, an EL display, or a CRT display, for example, and displays an image output from the image processing device 4 and related information.
- FIG. 26 is a schematic diagram for explaining an operation of acquiring a plurality of images according to the eighth embodiment.
- As shown in FIG. 26, when acquiring images for calculating the shading component, imaging is performed while moving the imaging field of view at a narrower pitch (for example, a pitch set to the length Bh) than when acquiring images for creating the virtual slide image.
- In the eighth embodiment, the direction in which the imaging field of view is moved when acquiring images for creating the virtual slide image is the same as the direction in which it is moved when acquiring images for calculating the shading component. These images can therefore be acquired efficiently, without wasteful movement of the specimen stage 303. In addition, since some of the images for creating the virtual slide image can also be used for calculating the shading component, the number of imaging operations can be reduced.
- The position at which the images for shading component calculation are acquired can be set arbitrarily.
- Images for shading component calculation may also be acquired at a plurality of locations along the image acquisition path for creating the virtual slide image, and the shading components calculated at the respective locations may be combined. In this case, using the combined shading component improves the robustness of the shading correction.
- The images for virtual slide image creation may be aligned using the value of the scale provided on the stage, the number of pulses of the stepping motor 307a, image matching, or a combination thereof.
- Since the imaging field of view V is simply moved in two directions within the plane of the subject SP, such alignment can be performed easily.
- The present invention is not limited to the first to eighth embodiments described above; various inventions can be formed by appropriately combining the plurality of components disclosed in those embodiments. For example, some components may be removed from all the components shown in the first to eighth embodiments, or components shown in different embodiments may be combined as appropriate.
Description
FIG. 1 is a block diagram showing a configuration example of the image processing apparatus according to the first embodiment of the present invention. As shown in FIG. 1, the image processing apparatus 1 according to the first embodiment includes an image acquisition unit 11 that acquires an image of the subject to be observed, an image processing unit 12 that performs image processing on the image, and a storage unit 13.
Note that the order in which steps S31 and S32 are executed is not limited to the order described above; step S32 may be executed first, or steps S31 and S32 may be executed in parallel.
Modification 1 of the first embodiment of the present invention will be described.
In the first embodiment described above, the shading component calculation unit 121 calculates the normalized and non-normalized shading components in each of the horizontal and vertical directions, and the image correction unit 122 calculates the texture component using Expression (2-1), (2-2), (5), or (6) according to the block to be corrected. However, the shading component calculation unit 121 may instead create a map storing shading components S(X, Y) that can be applied to the same texture-component calculation expression regardless of the block to be corrected, and store the map in the storage unit 13.
Next, a second embodiment of the present invention will be described.
The configuration of the image processing apparatus according to the second embodiment is generally the same as that of the first embodiment (see FIG. 1); the details of the shading component calculation processing executed by the first-direction shading component calculation unit 121a and the second-direction shading component calculation unit 121b differ from those of the first embodiment.
Modification 2 of the second embodiment of the present invention will be described.
The shading component of a column in the horizontal direction may be obtained by individually calculating a plurality of shading components from combinations of the plurality of common regions for that column as described above, and averaging these shading components. For example, for column X = 1, five shading components are calculated from five combinations of common regions: column X = 1 of image M(0) and column X = 3 of image M(2); column X = 1 of image M(1) and column X = 3 of image M(3); column X = 1 of image M(2) and column X = 3 of image M(4); column X = 1 of image M(3) and column X = 3 of image M(5); and column X = 1 of image M(4) and column X = 3 of image M(6); these are then averaged. The same applies to the shading component in the vertical direction.
Next, a third embodiment of the present invention will be described.
The configuration of the image processing apparatus according to the third embodiment is generally the same as that of the first embodiment (see FIG. 1); the details of the shading component calculation processing executed by the first-direction shading component calculation unit 121a and the second-direction shading component calculation unit 121b differ from those of the first embodiment.
Next, a fourth embodiment of the present invention will be described.
The configuration of the image processing apparatus according to the fourth embodiment is generally the same as that of the first embodiment (see FIG. 1); the details of the shading component calculation processing executed by the first-direction shading component calculation unit 121a and the second-direction shading component calculation unit 121b differ from those of the first embodiment. FIG. 17 is a schematic diagram for explaining the shading component calculation method according to the fourth embodiment.
Next, a fifth embodiment of the present invention will be described.
The configuration of the image processing apparatus according to the fifth embodiment is generally the same as that of the first embodiment (see FIG. 1); the details of the shading component calculation processing executed by the shading component calculation unit 121 differ from those of the first embodiment.
Shv1(X,Y)=Sh(X,Y)×Sv(X0,Y) …(11)
Shv2(X,Y)=Sv(X,Y)×Sh(X,Y0) …(12)
Next, a sixth embodiment of the present invention will be described. The first to fifth embodiments described above can also be implemented in combination with one another.
For example, when combining the first, third, and fifth embodiments, first, as in the first embodiment, five images (for example, the images M0 to M4) are acquired by performing imaging five times while moving the imaging field of view V in each of the horizontal and vertical directions. Then, based on these five images, the shading components in the horizontal and vertical directions are calculated, and, as in the fifth embodiment, the combined shading component S(X, Y) is calculated for the blocks for which the normalized shading component has not been obtained.
Next, a seventh embodiment of the present invention will be described.
FIG. 18 is a block diagram showing a configuration example of the image processing apparatus according to the seventh embodiment. As shown in FIG. 18, the image processing apparatus 2 according to the seventh embodiment includes, in place of the image processing unit 12 shown in FIG. 1, an image processing unit 21 in which a flat area search unit 211 is added to the image processing unit 12.
Next, an eighth embodiment of the present invention will be described.
FIG. 25 is a diagram showing a configuration example of the microscope system according to the eighth embodiment. As shown in FIG. 25, the microscope system 6 according to the eighth embodiment includes a microscope apparatus 3, an image processing apparatus 4, and a display device 5.
3 Microscope apparatus
5 Display device
6 Microscope system
11 Image acquisition unit
12, 21, 41 Image processing unit
13 Storage unit
30 Optical system
111 Imaging control unit
112 Drive control unit
121 Shading component calculation unit
121a First-direction shading component calculation unit
121b Second-direction shading component calculation unit
122 Image correction unit
122a First image correction unit
122b Second image correction unit
211 Flat area search unit
300 Arm
301 Epi-illumination unit
301a Epi-illumination light source
301b Epi-illumination optical system
302 Transmitted illumination unit
302a Transmitted illumination light source
302b Transmitted illumination optical system
303 Specimen stage
304, 304′ Objective lens
305 Lens barrel
306 Imaging unit
307 Stage position changing unit
307a Stepping motor
308 Trinocular tube unit
309 Eyepiece unit
310 Revolver
411 VS image creation unit
Claims (17)
- An image processing apparatus comprising: an image acquisition unit that acquires first and second image groups in first and second directions different from each other, each image group including a plurality of images in which a part of the subject is shared with at least one other image; a shading component calculation unit that calculates, for each of the first and second image groups, as a shading component, the ratio of the luminance in a region of another image in which the same subject appears to the luminance in a region of one image that includes a flat region in which the shading component is constant; and an image correction unit that performs shading correction on regions in the image using the shading component, wherein the shading component includes a normalized shading component referenced to the luminance in the flat region and a non-normalized shading component referenced to the luminance in a region other than the flat region, and the image correction unit performs the shading correction based on the normalized shading component and the non-normalized shading component.
- The image processing apparatus according to claim 1, wherein the shading component calculation unit comprises: a first shading component calculation unit that calculates, based on the first image group, the shading component based on the luminance in regions of the one image that include the flat region and are aligned with the flat region in the second direction; and a second shading component calculation unit that calculates, based on the second image group, the shading component based on the luminance in regions of the one image that include the flat region and are aligned with the flat region in the first direction.
- The image processing apparatus according to claim 1 or 2, wherein the image correction unit comprises: a first image correction unit that performs shading correction, using the normalized shading component, on a first region of the image for which the normalized shading component has been calculated; and a second image correction unit that performs the shading correction on a second region of the image for which the normalized shading component has not been calculated, using the non-normalized shading component calculated for the second region and the normalized shading component calculated for the region used as the reference when the non-normalized shading component was calculated.
- The image processing apparatus according to claim 3, wherein the second image correction unit performs the shading correction using the non-normalized shading component calculated from one of the first and second image groups and the normalized shading component calculated from the other of the first and second image groups.
- The image processing apparatus according to any one of claims 1 to 4, wherein, when the shading component of a partial region in the one image is known, the shading component calculation unit calculates the shading component of a region in the other image using the luminance in the partial region, the luminance in the region of the other image in which the same subject as the partial region appears, and the known shading component.
- The image processing apparatus according to claim 5, wherein, for each of the first and second image groups, the shading component calculation unit calculates shading components for regions in a plurality of the other images based on a plurality of combinations of the one image and the other images, and averages the plurality of calculated shading components.
- The image processing apparatus according to any one of claims 1 to 5, wherein, for each of the first and second image groups, the shading component calculation unit calculates the shading component based on a plurality of combinations of the one image and the other image whose texture components differ from one another in the common region, that is, the region where the same subject appears in both the one image and the other image.
- The image processing apparatus according to claim 7, wherein, for each of the first and second image groups, the shading component calculation unit cumulatively adds the luminances in corresponding regions among the plurality of images, and calculates the shading component using the cumulatively added values.
- The image processing apparatus according to claim 7, wherein the shading component calculation unit calculates a plurality of shading components based on the plurality of combinations, and averages the plurality of shading components.
- The image processing apparatus according to any one of claims 1 to 9, wherein the shading component calculation unit calculates the shading component of a region in the image using two non-normalized shading components calculated for the region based on the first and second image groups, respectively, and two normalized shading components calculated for the two regions used as references when the two non-normalized shading components were calculated.
- The image processing apparatus according to any one of claims 1 to 10, further comprising a flat region search unit that searches for the flat region based on the luminance gradients of pixels included in the region where the same subject appears in both the one image and the other image.
- The image processing apparatus according to any one of claims 1 to 11, wherein the first and second directions are orthogonal to each other.
- An imaging apparatus comprising: the image processing apparatus according to any one of claims 1 to 12; an optical system that forms an image of the subject; a moving unit that moves the field of view of the optical system with respect to the subject by moving at least one of the subject and the optical system; and an imaging unit that images the subject, wherein the image acquisition unit acquires the first and second image groups by controlling the imaging unit to perform imaging while causing the moving unit to move the field of view in the first and second directions, respectively.
- A microscope system comprising: the imaging apparatus according to claim 13; and a stage on which the subject is placed, wherein the moving unit moves at least one of the stage and the optical system.
- The microscope system according to claim 14, wherein the image processing apparatus further comprises a virtual slide creation unit that creates a virtual slide image by stitching together images, included in the first and second image groups, whose fields of view are adjacent to each other.
- An image processing method comprising: an image acquisition step of acquiring first and second image groups in first and second directions different from each other, each image group including a plurality of images in which a part of the subject is shared with at least one other image; a shading component calculation step of calculating, for each of the first and second image groups, as a shading component, the ratio of the luminance in a region of another image in which the same subject appears to the luminance in a region of one image that includes a flat region in which the shading component is constant; and an image correction step of performing shading correction on regions in the image using the shading component, wherein the shading component includes a normalized shading component referenced to the luminance in the flat region and a non-normalized shading component referenced to the luminance in a region other than the flat region, and the image correction step performs the shading correction based on the normalized shading component and the non-normalized shading component.
- An image processing program that causes a computer to execute: an image acquisition step of acquiring first and second image groups in first and second directions different from each other, each image group including a plurality of images in which a part of the subject is shared with at least one other image; a shading component calculation step of calculating, for each of the first and second image groups, as a shading component, the ratio of the luminance in a region of another image in which the same subject appears to the luminance in a region of one image that includes a flat region in which the shading component is constant; and an image correction step of performing shading correction on regions in the image using the shading component, wherein the shading component includes a normalized shading component referenced to the luminance in the flat region and a non-normalized shading component referenced to the luminance in a region other than the flat region, and the image correction step performs the shading correction based on the normalized shading component and the non-normalized shading component.
Priority Applications (4)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
DE112014006672.9T (DE112014006672T5) | 2014-06-10 | 2014-12-01 | Bildverarbeitungsvorrichtung, Bildgebungsvorrichtung, Mikroskopsystem, Bildverarbeitungsverfahren und ein Bildverarbeitungsprogramm
CN201480079674.3A (CN106461928B) | 2014-06-10 | 2014-12-01 | 图像处理装置、摄像装置、显微镜系统以及图像处理方法
JP2016527610A (JP6422967B2) | 2014-06-10 | 2014-12-01 | 画像処理装置、撮像装置、顕微鏡システム、画像処理方法及び画像処理プログラム
US15/343,692 (US9990752B2) | 2014-06-10 | 2016-11-04 | Image processing device, imaging device, microscope system, image processing method, and computer-readable recording medium
Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date
---|---|---
JP2014119868 | 2014-06-10 |
JP2014-119868 | 2014-06-10 |
Related Child Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
US15/343,692 (Continuation, US9990752B2) | | 2014-06-10 | 2016-11-04
Publications (1)

Publication Number | Publication Date
---|---
WO2015190013A1 (ja) | 2015-12-17
Family
ID=54833133
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/JP2014/081787 (WO2015190013A1) | 画像処理装置、撮像装置、顕微鏡システム、画像処理方法及び画像処理プログラム | 2014-06-10 | 2014-12-01
Country Status (5)

Country | Link
---|---
US (1) | US9990752B2 (en)
JP (1) | JP6422967B2 (ja)
CN (1) | CN106461928B (zh)
DE (1) | DE112014006672T5 (de)
WO (1) | WO2015190013A1 (ja)
Families Citing this family (6)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
EP3367153B1 (en) | 2017-02-27 | 2022-02-09 | Leica Instruments (Singapore) Pte. Ltd. | Coded zoom knob, microscope having the same and method for retrofitting a coded zoom knob
WO2018163572A1 (ja) | 2017-03-10 | 2018-09-13 | 富士フイルム株式会社 | 画像処理システム、画像処理装置、画像処理方法及び画像処理プログラム
CN111757029A (zh) | 2018-03-26 | 2020-10-09 | 上海小蚁科技有限公司 | 阴影校正检测参数确定、校正检测方法及装置、存储介质、鱼眼相机
CN110794569B (zh) | 2019-11-14 | 2021-01-26 | 武汉兰丁智能医学股份有限公司 | 细胞微型显微图像采集装置及图像识别方法
DE102021203872A1 (de) | 2021-04-19 | 2022-10-20 | Carl Zeiss Microscopy Gmbh | Verfahren zur Erzeugung eines Helligkeitskorrekturbilds und Vorrichtung
CN115914601A (zh) | 2022-11-04 | 2023-04-04 | 展讯半导体(南京)有限公司 | 图像处理方法、系统、电子设备、可读存储介质及芯片
Citations (3)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
JPH1155558A (ja) | 1997-08-06 | 1999-02-26 | Minolta Co Ltd | デジタルカメラ
JP2005039339A (ja) | 2003-07-15 | 2005-02-10 | Fuji Photo Film Co Ltd | 撮影装置
JP2013257422A (ja) | 2012-06-12 | 2013-12-26 | Olympus Corp | 顕微鏡システム
Family Cites Families (13)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
EP1985995A3 (en) | 1996-08-16 | 2009-09-16 | GE Healthcare Niagara Inc. | A digital imaging system for assays in well plates, gels and blots
US6571022B2 (en) | 1997-08-06 | 2003-05-27 | Minolta Co., Ltd. | Image processing apparatus for generating a single image of an object using overlapping partial images
JP2006171213A (ja) | 2004-12-14 | 2006-06-29 | Nikon Corp | 顕微鏡システム
JP2008051773A (ja) | 2006-08-28 | 2008-03-06 | Hamamatsu Photonics Kk | 蛍光画像取得装置、及び蛍光画像取得方法
CN201203699Y (zh) | 2008-01-27 | 2009-03-04 | 麦克奥迪实业集团有限公司 | 体视显微镜上光源照明装置
US8339475B2 (en) | 2008-12-19 | 2012-12-25 | Qualcomm Incorporated | High dynamic range image combining
JP5830328B2 (ja) | 2011-09-26 | 2015-12-09 | オリンパス株式会社 | 内視鏡用画像処理装置、内視鏡装置及び画像処理方法
US9881373B2 (en) | 2012-12-07 | 2018-01-30 | Canon Kabushiki Kaisha | Image generating apparatus and image generating method
WO2015072306A1 (ja) | 2013-11-14 | 2015-05-21 | 日本電気株式会社 | 画像処理システム
JP6381215B2 (ja) | 2014-01-29 | 2018-08-29 | キヤノン株式会社 | 画像処理装置、画像処理方法、表示装置、表示装置の制御方法、及び、プログラム
JP6520919B2 (ja) | 2014-03-28 | 2019-05-29 | 日本電気株式会社 | 画像補正装置、画像補正方法およびプログラム
JP6075315B2 (ja) | 2014-03-28 | 2017-02-08 | 株式会社Jvcケンウッド | 合成画像作成システム、合成画像作成方法、及び合成画像作成プログラム
JP6443857B2 (ja) | 2014-06-05 | 2018-12-26 | キヤノン株式会社 | 画像処理装置、画像処理方法、及び、プログラム
Cited By (3)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
JP2019139247A (ja) | 2015-04-30 | 2019-08-22 | Carl Zeiss Microscopy Gmbh | 反射を低減したコントラスト画像を生成するための方法およびこれに関連する装置
CN113014805A (zh) | 2021-02-08 | 2021-06-22 | 北京大学 | 一种仿视网膜中央凹与外周的联合采样方法及装置
WO2022165873A1 (zh) | 2021-02-08 | 2022-08-11 | 北京大学 | 一种仿视网膜中央凹与外周的联合采样方法及装置
Also Published As

Publication number | Publication date
---|---
JPWO2015190013A1 (ja) | 2017-04-20
US9990752B2 (en) | 2018-06-05
JP6422967B2 (ja) | 2018-11-14
CN106461928A (zh) | 2017-02-22
CN106461928B (zh) | 2019-09-17
DE112014006672T5 (de) | 2017-02-23
US20170076481A1 (en) | 2017-03-16
Similar Documents

Publication | Publication Date | Title
---|---|---
JP6422967B2 (ja) | | 画像処理装置、撮像装置、顕微鏡システム、画像処理方法及び画像処理プログラム
JP6196832B2 (ja) | | 画像処理装置、画像処理方法、顕微鏡システム及び画像処理プログラム
JP5940383B2 (ja) | | 顕微鏡システム
EP2796916B1 (en) | | Image processing device, imaging device, microscope system, image processing method, and image processing program
US20120105612A1 (en) | | Imaging apparatus, endoscope apparatus, and image generation method
JP2016024052A (ja) | | 3次元計測システム、3次元計測方法及びプログラム
US10656406B2 (en) | | Image processing device, imaging device, microscope system, image processing method, and computer-readable recording medium
WO2013161348A1 (ja) | | 画像処理プログラム及び画像処理装置
JP2015119344A (ja) | | 撮像素子の感度分布の測定装置及びその制御方法、画像表示装置のキャリブレーション装置及びその制御方法
JP6487938B2 (ja) | | 画像処理装置、撮像装置、顕微鏡システム、画像処理方法及び画像処理プログラム
JP2019016975A (ja) | | 画像処理装置および画像処理方法、撮像装置、プログラム
JP5996462B2 (ja) | | 画像処理装置、顕微鏡システム及び画像処理方法
JP6422761B2 (ja) | | 顕微鏡システム、及び、z位置と補正装置の設定値との関係算出方法
JP6800648B2 (ja) | | 画像処理装置及びその制御方法、プログラム並びに撮像装置
JP2016201600A (ja) | | 画像処理装置、撮像装置、画像処理方法、画像処理プログラム、および、記憶媒体
JP7319840B2 (ja) | | 画像処理装置、撮像装置、画像処理方法、及びプログラム
JP6838608B2 (ja) | | 画像処理装置及び画像処理方法
WO2017068655A1 (ja) | | 画像処理装置、画像処理システム、顕微鏡システム、画像処理方法、及び画像処理プログラム
CN113947534A (zh) | | 影像校正方法及装置
JP6604737B2 (ja) | | 画像処理装置、撮像装置、画像処理方法、画像処理プログラム、および、記憶媒体
JP5982245B2 (ja) | | 画像処理装置及び画像処理方法
JP2015222310A (ja) | | 顕微鏡システム
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14894373; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2016527610; Country of ref document: JP; Kind code of ref document: A
| WWE | Wipo information: entry into national phase | Ref document number: 112014006672; Country of ref document: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 14894373; Country of ref document: EP; Kind code of ref document: A1