WO2016056205A1 - Image acquisition device, image acquisition method, and program - Google Patents

Image acquisition device, image acquisition method, and program

Info

Publication number
WO2016056205A1
WO2016056205A1 (PCT/JP2015/004988, JP2015004988W)
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
range
image acquisition
optical axis
axis direction
Prior art date
Application number
PCT/JP2015/004988
Other languages
French (fr)
Inventor
Yui Sakuma
Original Assignee
Canon Kabushiki Kaisha
Priority date
Filing date
Publication date
Application filed by Canon Kabushiki Kaisha filed Critical Canon Kabushiki Kaisha
Publication of WO2016056205A1 publication Critical patent/WO2016056205A1/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/24Base structure
    • G02B21/241Devices for focusing
    • G02B21/244Devices for focusing using image analysis techniques
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32Means for focusing
    • G03B13/34Power focusing
    • G03B13/36Autofocus systems

Definitions

  • the present invention relates to an image acquisition device such as a microscope, an image acquisition method, and a program.
  • a virtual slide system which allows an improvement in efficiency of data management and remote diagnosis by acquiring a microscope image of a pathological sample such as a tissue slice extracted from a human body as a digital image, attracts attention.
  • the pathological sample as a subject of the system is a slide produced by sandwiching a tissue slice, thinly sliced to a thickness of several μm to several tens of μm, between a slide glass and a cover glass, and fixing the tissue slice using an encapsulant.
  • the depth of field of an objective lens commonly used in the pathological observation microscope is about 0.5 to 1 μm, which is small compared with the thickness of the tissue slice. Accordingly, in the case where an image of the entire thickness range of the tissue slice is acquired, a cross-sectional image is acquired by repeating imaging while changing a relative position between the focus of the objective lens and the subject in an optical axis direction.
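As a rough worked example of the figures above, covering the whole tissue thickness requires on the order of thickness divided by depth of field cross-sectional images. The following sketch uses illustrative values and names, not ones fixed by the patent:

```python
import math

def num_z_slices(tissue_thickness_um: float, depth_of_field_um: float) -> int:
    """Number of cross-sectional images needed to cover the full tissue
    thickness when the Z step equals the depth of field."""
    return math.ceil(tissue_thickness_um / depth_of_field_um)

# An 8 um slice imaged with a 0.5 um depth of field needs 16 slices.
print(num_z_slices(8.0, 0.5))
```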
  • a microscope described in PTL 1 includes a light detection portion that estimates the surface position of a sample and detects light from the sample over a predetermined range with the surface position used as a reference. With this, it is possible to cope with the unevenness and the undulation of the sample and acquire the cross-sectional image of the sample.
  • a microscope described in PTL 2 includes a movement mechanism that moves an imaging element such that the imaging surface of the imaging element approaches a focal plane in accordance with an inclination of a focal curved plane of an image of a sample. With this, it is possible to cope with the unevenness and the undulation of the sample and acquire a focused image of the sample.
  • the present invention has been achieved in view of the circumstances described above, and an object thereof is to provide an image acquisition device capable of acquiring the cross-sectional image of the subject having the unevenness and the undulation in the optical axis direction reliably at high speed.
  • the present invention is an image acquisition device acquiring images of a plurality of layers in a subject by dividing the subject into a plurality of sections and imaging the subject a plurality of times using an imaging unit while changing a focal position in an optical axis direction in an imaging range in the optical axis direction in each section, the image acquisition device comprising: a detection unit that detects an in-focus position in the optical axis direction at each of a plurality of detection positions of the subject; a creation unit that creates a focus map from the in-focus position at each of the detection positions detected by the detection unit; a calculation unit that calculates a range in the optical axis direction in which the in-focus position is positioned in a target section from the focus map created by the creation unit; a setting unit that sets a reference imaging range serving as a reference of the imaging range in the optical axis direction; and a determination unit that determines the imaging range in the optical axis direction for each section based on the reference imaging range and the range calculated by the calculation unit.
  • According to the present invention, it is possible to provide an image acquisition device capable of acquiring the cross-sectional image of the subject having the unevenness and the undulation in the optical axis direction reliably at high speed. Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
  • FIG. 1 is a schematic view showing a system configuration of an image acquisition device of a first embodiment.
  • FIG. 2 is a flowchart showing an image acquisition process in the first embodiment.
  • FIG. 3 is a schematic view showing a calculation method of an XY presence range in the first embodiment.
  • FIGS. 4A and 4B are schematic views each showing a calculation method of an XY position at which an in-focus position is detected in the first embodiment.
  • FIG. 5 is a schematic view showing a detection method of the in-focus position in the first embodiment.
  • FIG. 6 is a schematic view showing a calculation method of the in-focus position in the first embodiment.
  • FIGS. 7A to 7C are schematic views each showing a calculation method of a focus map in the first embodiment.
  • FIG. 8 is a schematic view showing a calculation method of a Z-stack range in the first embodiment.
  • FIGS. 9A and 9B are schematic views each showing a change method of the Z-stack range in the first embodiment.
  • FIG. 10 is a schematic view showing a system configuration of an image acquisition device of a second embodiment.
  • FIG. 11 is a flowchart showing an image acquisition process in the second embodiment.
  • FIG. 12 is a schematic view showing a calculation method of a thickness of a tissue slice in the second embodiment.
  • FIG. 13 is a flowchart showing an image acquisition process in a third embodiment.
  • FIG. 1 is a schematic view showing a system configuration of an image acquisition device 1 of the present embodiment.
  • the image acquisition device 1 of the present embodiment is a device that acquires images of a plurality of layers in a subject by dividing the subject into a plurality of sections and imaging the subject a plurality of times using imaging means while changing a focal position in an optical axis direction in an imaging range in the optical axis direction in each section.
  • the image acquisition device 1 has an imaging device 100, a wide-area imaging device 200, a control portion 300, an imaging range calculation portion 400, an image processing portion 5, an image storage portion 6, and an image display portion 7.
  • the imaging device 100 is a device for imaging an enlarged image of a slide (hereinafter referred to as a specimen) 8 in which a cell or a tissue slice is encapsulated, and an example of the imaging device 100 includes what is called a digital microscope or a virtual slide scanner.
  • the specimen 8 includes a tissue slice 801 as a subject serving as an imaging target, a slide glass 802, a cover glass 803, and an encapsulant 804 for fixing the tissue slice 801.
  • the wide-area imaging device 200 is a device for imaging the entire image of the specimen 8, and the acquired image is used for production of a thumbnail image and position calculation of a small section 901 described later.
  • the control portion 300 performs control of various processes in the image acquisition device 1.
  • the control portion 300 is configured by a computer including a CPU, a memory, and a storage device, and the CPU executes and processes a stored program and the image acquisition device 1 is thereby controlled.
  • the imaging range calculation portion 400 performs a process of determining an imaging position and the number of images based on image information on the specimen 8 imaged by the imaging device 100 and the wide-area imaging device 200.
  • the image processing portion 5 performs digital correction and image synthesis of the image imaged by the imaging device 100. With regard to the digital correction, it is conceivable to use methods such as color tone correction and gamma correction based on an acquired spectral transmittance, and noise processing.
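As one conceivable form of the digital correction mentioned above, gamma correction can be sketched as a lookup-table operation on an 8-bit image. The function name and the gamma value below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def gamma_correct(img: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Gamma-correct an 8-bit image via a precomputed 256-entry lookup
    table (gamma value is an illustrative assumption)."""
    lut = (np.linspace(0.0, 1.0, 256) ** (1.0 / gamma) * 255).astype(np.uint8)
    return lut[img]
```

With gamma greater than 1 this brightens mid-tones while leaving 0 and 255 fixed.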
  • the image storage portion 6 performs storage and accumulation of image data imaged by the imaging device 100, and a storage device such as a hard disk can be used as the image storage portion 6.
  • the image display portion 7 is a portion for outputting, displaying, and viewing the image stored in the image storage portion 6 and, in the present embodiment, a display is shown as an example of the image display portion 7.
  • the imaging device 100 has a stage 101, an illumination portion 102, an objective lens 103, and an imaging portion 104. Note that, in the present embodiment, a description will be given on the assumption that a direction orthogonal to the optical axis direction of the objective lens 103 is an XY direction and the optical axis direction of the objective lens 103 is a Z direction, as shown in FIG. 1.
  • the imaging device 100 images a cross-sectional image of the specimen 8 in the entire presence range of the specimen 8 using the imaging portion 104 while moving the stage 101 by step movement in the Z direction. In the present embodiment, this imaging method is referred to as Z-stack.
  • the stage 101 has a mechanism for holding the specimen 8 and performing position control of the specimen 8, and is capable of moving the specimen 8 in the XY direction and the Z direction.
  • a drive mechanism that uses a ball screw and a piezoelectric element is used as the stage 101.
  • the illumination portion 102 has a light source for illuminating the specimen 8 and an optical system for concentrating light onto the specimen 8.
  • a halogen lamp or an LED (Light Emitting Diode) is used as the light source of the illumination portion 102.
  • the objective lens 103 forms an image of light emitted by the illumination portion 102 and having passed through the specimen 8 on a light receiving surface of an imaging element included in the imaging portion 104.
  • a lens having a field of view on an object side of not less than 1 mm and having a depth of field corresponding to 0.5 μm is used as the objective lens 103.
  • the imaging portion 104 has the imaging element that performs photoelectric conversion and a transmission cable that performs input and output of an electrical signal.
  • the imaging element performs the photoelectric conversion of light received via the objective lens 103 and thereby outputs data on the image of the specimen 8 as an electrical signal.
  • an image sensor as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor) is used as the imaging element.
  • the wide-area imaging device 200 has a wide-area imaging portion 201 and a specimen placement portion 202.
  • the wide-area imaging portion 201 images the entire image of the specimen 8.
  • the specimen placement portion 202 is a stand for placing the specimen 8.
  • the wide-area imaging portion 201 has a camera consisting of an imaging element and a lens, and an illumination portion for illuminating the placed specimen 8. Note that, as an illumination method of the specimen 8, dark field illumination that uses ring illumination provided around the lens is performed.
  • the positions of the wide-area imaging portion 201 and the specimen placement portion 202 are adjusted such that the entire specimen 8 falls within the imaging range of the wide-area imaging portion 201.
  • the control portion 300 has a control command portion 301, a wide-area imaging control portion 302, a stage control portion 303, an illumination control portion 304, and an imaging control portion 305.
  • the control command portion 301 is a portion that collectively controls the individual control portions in the entire image acquisition device 1, and the drive timing of each control portion is controlled by the control command portion 301.
  • the wide-area imaging control portion 302 performs ON/OFF control related to the illumination portion included in the wide-area imaging portion 201 and the exposure of the imaging element. In addition, the wide-area imaging control portion 302 transmits the image information on the specimen 8 imaged by the wide-area imaging portion 201 to an XY presence range calculation portion 410 described later.
  • the stage control portion 303 performs the position control of the specimen 8 by moving the stage 101 in the XY direction and the Z direction based on a target movement amount of the stage 101 and position information thereon inputted from the control command portion 301.
  • the illumination control portion 304 performs control of the entire illumination portion 102 such as the ON/OFF control of the light source of the illumination portion 102, diaphragm adjustment, and replacement of a color filter.
  • the imaging control portion 305 performs the ON/OFF control related to the exposure of the imaging element included in the imaging portion 104, and transmits the image information on the specimen 8 imaged by the imaging portion 104 to the image storage portion 6. Note that the stage control portion 303, the illumination control portion 304, and the imaging control portion 305 are controlled by the control command portion 301 such that the step movement and the imaging timing are synchronized.
  • the imaging range calculation portion 400 has an XY presence range calculation portion 410 and a Z imaging range calculation portion 420.
  • the Z imaging range calculation portion 420 has an in-focus position calculation portion 421, a focus map calculation portion 422, and a Z-stack range calculation portion 423.
  • the XY presence range calculation portion 410 calculates the presence range of the specimen 8 in the XY direction based on the image information on the specimen 8 transmitted from the wide-area imaging control portion 302. The range of presence of the tissue slice 801 in the specimen 8 in the XY direction is estimated and the range is calculated as the presence range, and the detail thereof will be described later.
  • the XY presence range calculation portion 410 performs a process of dividing the calculated presence range of the tissue slice 801 in the XY direction into a plurality of the small sections 901.
  • One small section 901 corresponds to an imaging range 903 of the imaging device 100 in the XY direction.
  • the tissue slice 801 occupies a large range compared with the imaging range 903 of the imaging device 100 in the XY direction; hence, in order to image the entire tissue slice 801, the presence range of the tissue slice 801 is divided into a plurality of the small sections 901, the small sections 901 are imaged, and the images are synthesized.
  • XY position information on each small section 901 is transmitted to the control command portion 301 from the XY presence range calculation portion 410.
  • a drive step amount and movement coordinates of the stage 101 are calculated based on the XY position information on each small section 901, and are outputted to the stage control portion 303.
  • the in-focus position calculation portion 421 calculates a Z position of the tissue slice 801, i.e., the in-focus position of the tissue slice 801 by associating the image information on the specimen 8 accumulated in the image storage portion 6 with the position information on the stage 101 in the XY and Z directions and performing an operation thereon.
  • the calculated in-focus position is associated with the XY position information on the stage 101 and is transmitted to the focus map calculation portion 422, and the detail thereof will be described later.
  • the focus map calculation portion 422 calculates and creates a focus map 904 in which in-focus position information on the tissue slice 801 is plotted three-dimensionally based on the XY position of the specimen 8 and the in-focus position information thereon transmitted from the in-focus position calculation portion 421.
  • the focus map calculation portion 422 calculates a change amount of the Z position of the focus map 904 in each small section 901 (the range in the optical axis direction where the in-focus position is positioned in a target section) based on the position information of the calculated focus map 904.
  • the details of the focus map 904 and a calculation method of the change amount of the Z position of the focus map 904 will be described later.
  • Z position information based on the calculated focus map 904 and the change amount of the Z position of the focus map 904 are transmitted to the Z-stack range calculation portion 423 together with the XY position information on the corresponding small section 901.
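A minimal sketch of the per-section computation described above, assuming in-focus Z values are available at the corners of each small section (the FIG. 4B lattice case); the function and variable names are illustrative:

```python
def focus_map_change(corner_z_um: list[float]) -> tuple[float, float]:
    """For one small section, return (center Z of the focus map, change
    amount of the Z position) from the in-focus positions detected at
    its corners. The change amount is the spread of Z in the section."""
    z_min, z_max = min(corner_z_um), max(corner_z_um)
    return (z_min + z_max) / 2.0, z_max - z_min

center_um, change_um = focus_map_change([10.0, 10.5, 11.2, 10.8])
```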
  • the Z-stack range calculation portion 423 calculates a Z-stack range (the imaging range in the optical axis direction) in each small section (the target section) 901 based on information on a Z-stack range 905 serving as an inputted and set reference and information inputted from the focus map calculation portion 422.
  • thickness information of the tissue slice 801 of the specimen 8 is used in the present embodiment. Note that, as means for inputting the thickness information of the tissue slice 801, the following two methods can be shown as examples.
  • FIG. 2 is a flowchart showing an image acquisition process by the image acquisition device 1 of the present embodiment.
  • In Step S101 in FIG. 2, wide-area imaging of the specimen 8 is performed.
  • the specimen 8 is placed on the specimen placement portion 202 by a user.
  • a pushing portion is provided such that the XY position of the specimen 8 can be positioned relative to the specimen placement portion 202 when the specimen 8 is placed. It is conceivable to use a method in which the specimen 8 is transferred from a cassette in which the specimen 8 is accommodated using an automatic hand or the like and is placed instead of the manual operation by the user.
  • the wide-area imaging may also be performed in a state in which the movable range of the stage 101 is increased and the specimen 8 is placed on the stage 101. After the placement of the specimen 8, the entire specimen 8 is imaged by the wide-area imaging portion 201.
  • the specimen 8 of which the imaging is completed is collected from the specimen placement portion 202, and is then placed on and fixed to the stage 101. Similarly to the case where the specimen 8 is placed on the specimen placement portion 202, the specimen 8 is positioned on the stage 101. In addition, in order to prevent a position displacement of the specimen 8 during the movement of the stage 101, the specimen 8 is fixed using a mechanical system or a vacuum. During a time period from the placement of the specimen 8 on the stage 101 to start of the imaging by the imaging portion 104, Steps S102 and S103 in FIG. 2 are executed in parallel.
  • In Step S102 in FIG. 2, the presence range of the tissue slice 801 in the XY direction is calculated by the XY presence range calculation portion 410 based on the entire image information on the specimen 8 imaged in Step S101.
  • a process of determining the calculated presence range of the tissue slice 801 as the imaging range and dividing the presence range of the tissue slice 801 into a plurality of the small sections 901 is executed in this Step. The detail thereof will be described later.
  • In Step S103 in FIG. 2, an XY position (detection position) 902 used when the in-focus position of the tissue slice 801 is detected is calculated by the XY presence range calculation portion 410 based on the position information on the small sections 901 calculated in Step S102.
  • the XY position used when the thickness of the tissue slice 801 is detected is calculated in Step S103.
  • In Step S104 in FIG. 2, the stage 101 is controlled such that the XY position 902 for in-focus detection calculated in Step S103 matches the imaging range 903 of the imaging device 100 in the XY direction.
  • In Step S105 in FIG. 2, the image of the specimen 8 is captured by the imaging portion 104.
  • as a detection method of the in-focus position of the tissue slice 801, a method is used in which images of a plurality of different Z positions at a given XY position are acquired and the in-focus position is estimated from the image information.
  • the stage 101 is controlled by the stage control portion 303 so as to be moved by a specific amount at each step by the step movement, and the acquisition of the image is performed by repeating the movement and the imaging, and the detail thereof will be described later.
  • the thickness of the encapsulant 804 used to encapsulate the tissue slice 801 in the specimen 8 is several tens to several hundreds of μm.
  • the thickness of the tissue slice 801 is about 4 to 8 μm, the range of the thickness thereof is extremely narrow, and it takes time to detect the Z position of the tissue slice 801.
  • the thickness error of the slide glass 802 in the specimen 8 is about 0.3 mm; this error is extremely large relative to the tissue slice 801, so the Z position of the tissue slice 801 differs significantly from one specimen to another and it also takes time to detect the Z position.
  • the Z position of the slide glass 802 or the cover glass 803 included in the specimen 8 is separately measured using a laser displacement gauge or the like.
  • the Z position of the stage 101 is controlled based on the measurement result, and the imaging range of the imaging portion 104 in the Z direction is set at the Z position of the surface of the slide glass 802 or the back surface of the cover glass 803. With this, it is possible to reduce time required to detect the Z position of the tissue slice 801.
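The time-saving step above can be sketched as seeding the Z search from the separately measured glass-surface position instead of scanning the full slide-thickness uncertainty. The margin and expected-thickness values, and all names, are illustrative assumptions:

```python
def seed_search_range(glass_surface_z_um: float,
                      expected_tissue_um: float = 8.0,
                      margin_um: float = 5.0) -> tuple[float, float]:
    """Z range to scan for the tissue, anchored at the measured
    slide-glass surface position rather than at an unknown offset of
    hundreds of um."""
    return (glass_surface_z_um - margin_um,
            glass_surface_z_um + expected_tissue_um + margin_um)

lo_um, hi_um = seed_search_range(100.0)   # scan only 95-113 um
```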
  • In Step S106 in FIG. 2, the Z position of the tissue slice 801 is calculated as the in-focus position by the in-focus position calculation portion 421 based on the image information captured in Step S105.
  • the in-focus position is calculated from evaluation values of contrasts of a plurality of imaged images, and the detail of a calculation method of the in-focus position will be described later.
  • In Step S107 in FIG. 2, it is determined whether or not movement to the next XY position 902 at which the in-focus position of the tissue slice 801 is detected is performed. In the case where the next XY position 902 is present, the process returns to Step S104 in FIG. 2, and the position control of the stage 101 is performed.
  • In the case where no next XY position 902 is present, the process moves to Step S108 in FIG. 2.
  • the position control of the stage 101 in Step S110 may be performed in parallel with the process in each of Step S108 and Step S109.
  • the method in which the stage 101 is moved in the Z direction by the step movement and the imaging is performed has been described as the detection method of the Z position of the tissue slice 801, but the detection method is not limited thereto.
  • in the case where the imaging device 100 in FIG. 1 is configured to be capable of simultaneously imaging a plurality of different Z positions at a given XY position, the step movement of the stage 101 in the Z direction is not necessary.
  • as the method, for example, it is conceivable to divide the optical path of the objective lens 103 into a plurality of optical paths using a half mirror or a beam splitter, and dispose a plurality of the imaging portions 104 such that images of different Z positions are formed in the respective optical paths.
  • In Step S108 in FIG. 2, the focus map 904 is calculated by the focus map calculation portion 422 based on the in-focus position information on the tissue slice 801 calculated in Step S106.
  • the in-focus positions are calculated for the individual small sections 901 at a plurality of positions calculated in Step S102 in FIG. 2 and the focus map 904 of the entire tissue slice 801 is calculated by mapping the in-focus positions, and the detail thereof will be described later.
  • the change amount of the Z position of the focus map 904 in each small section 901 is also calculated. This will be described in detail later.
  • In Step S109 in FIG. 2, the Z-stack range in each small section 901 is calculated by the Z-stack range calculation portion 423.
  • an input value by the user or the thickness information of the tissue slice 801 is inputted to the Z-stack range calculation portion 423 via the control command portion 301 as the Z-stack range 905 serving as the reference and is set, and the detail thereof will be described later.
  • the thickness information of the tissue slice 801 calculated in the in-focus position calculation portion 421 is inputted as the Z-stack range 905 serving as the reference.
  • the Z-stack range in each small section 901 is calculated based on the Z-stack range 905 serving as the reference and the change amount of the Z position of the focus map 904 transmitted from the focus map calculation portion 422.
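One plausible reading of how the per-section Z-stack range could combine the reference range 905 with the focus-map change amount (in the spirit of FIGS. 8, 9A and 9B) is sketched below. This is a sketch under that assumption, not the patent's exact rule; all names are illustrative:

```python
def section_z_stack_range(focus_center_um: float,
                          reference_range_um: float,
                          z_change_um: float) -> tuple[float, float]:
    """Imaging range in Z for one small section: the reference Z-stack
    range, widened by the in-section variation of the focus map, and
    centered on the section's focus-map Z position."""
    half = (reference_range_um + z_change_um) / 2.0
    return focus_center_um - half, focus_center_um + half

# A section whose focus map sits at 10 um and varies by 2 um, with an
# 8 um reference range, is imaged over a 10 um range.
z_lo, z_hi = section_z_stack_range(10.0, 8.0, 2.0)
```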
  • In Step S110 in FIG. 2, the position control of the stage 101 is performed based on the XY position information on the small section 901 serving as the imaging target and the information of the focus map 904.
  • the position control of the stage 101 is performed by the stage control portion 303 such that the XY position of the small section 901 serving as the imaging target and the in-focus position match the imaging range 903 of the imaging device 100.
  • In Step S111 in FIG. 2, the imaging by the imaging portion 104 is performed while the stage 101 is moved in the Z direction by the specific amount at each step by the step movement, based on the Z-stack range information on the small section 901 serving as the imaging target.
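The step movement within one section's Z-stack range can be sketched as generating the list of focal positions to visit; the names and the step value below are illustrative:

```python
def z_positions(z_start_um: float, z_end_um: float, step_um: float) -> list[float]:
    """Focal positions visited by the step movement within one small
    section's Z-stack range, inclusive of both ends when they align."""
    n = int((z_end_um - z_start_um) / step_um) + 1
    return [z_start_um + i * step_um for i in range(n)]

# A 5-15 um range stepped at 2.5 um gives five exposures.
positions = z_positions(5.0, 15.0, 2.5)
```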
  • In Step S112 in FIG. 2, it is determined whether or not movement to the next small section 901 is performed. In the case where the next small section 901 is present, the process returns to Step S110 in FIG. 2, and the position control of the stage 101 is performed for the next small section 901. In the case where the imaging in all of the small sections 901 is ended, the imaging of the specimen 8 is ended; in the case where the next specimen 8 is present, the specimen 8 is replaced and the process for the next specimen 8 is started.
  • FIG. 3 is a schematic view showing a calculation method of the XY presence range in the image acquisition device 1 of the present embodiment.
  • FIG. 3 shows a state in which the tissue slice 801 of the specimen 8 is divided into a plurality of the small sections 901. First, the entire image of the specimen 8 is imaged by the wide-area imaging portion 201. Based on the imaged image, the presence range of the tissue slice 801 in the XY direction is calculated by the XY presence range calculation portion 410.
  • as the calculation method of the presence range of the tissue slice 801 in the XY direction in the present embodiment, the following method is adopted.
  • the method is based on the evaluation value of the contrast of the image information on the specimen 8, and the range where the contrast is high is determined as the presence range of the tissue slice 801.
  • the high contrast range is determined by performing binarization on the image information on the specimen 8.
  • the cell or the tissue slice in the specimen 8 is usually dyed so as to be distinguished from the surrounding encapsulant 804. Accordingly, it is possible to determine the high contrast range as the presence range of the tissue slice 801.
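A minimal sketch of the binarization-based presence-range detection described above, assuming (bright-field style) that the dyed tissue appears darker than the surrounding encapsulant; under the dark-field ring illumination described for the wide-area imaging portion, the polarity would be reversed. The threshold and names are illustrative:

```python
import numpy as np

def presence_mask(gray: np.ndarray, threshold: int = 200) -> np.ndarray:
    """Binarize a wide-area grayscale image: low-brightness pixels are
    taken as the dyed tissue (illustrative threshold)."""
    return gray < threshold

def bounding_box(mask: np.ndarray) -> tuple[int, int, int, int]:
    """Axis-aligned XY presence range (x0, y0, x1, y1) of the mask."""
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()
```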
  • a foreign matter or dust adhering to the specimen 8 may be determined as part of the presence range of the tissue slice; in such cases, the edge portion of the foreign matter or dust may be extracted from the image information on the entire image of the specimen 8, and a program that excludes the edge portion from the imaging target may be executed.
  • the presence range of the tissue slice 801 is divided into a plurality of the small sections 901 by the XY presence range calculation portion 410.
  • in the process of dividing the presence range of the tissue slice 801 into a plurality of the small sections 901, in consideration of the positioning accuracy of the stage 101 and image synthesis by stitching adjacent images, it is desirable to provide an overlapping area between adjacent small sections.
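Dividing the presence range into overlapping small sections along one axis can be sketched as follows; tile and overlap sizes are illustrative assumptions:

```python
def tile_positions(extent_px: int, tile_px: int, overlap_px: int) -> list[int]:
    """Start coordinates of small sections along one axis, with an
    overlap between adjacent tiles for stitching. A final tile is added
    if the regular stride leaves the edge uncovered."""
    step = tile_px - overlap_px
    starts = list(range(0, max(extent_px - tile_px, 0) + 1, step))
    if starts[-1] + tile_px < extent_px:
        starts.append(extent_px - tile_px)
    return starts
```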
  • FIGS. 4A and 4B are schematic views each showing a calculation method of the XY position at which the in-focus position is detected in the image acquisition device 1 of the present embodiment.
  • Each of FIGS. 4A and 4B shows the XY position 902 at which the in-focus position of the tissue slice 801 is detected.
  • the XY position 902 at which the in-focus position is detected is calculated based on the calculated position information on the small section 901.
  • the central position of each small section 901 obtained by the division is used as the XY position 902 at which the in-focus position is detected in FIG. 4A, and an intersection point of a lattice pattern of the small sections 901 obtained by the division is used as the XY position 902 at which the in-focus position is detected in FIG. 4B.
  • the XY position 902 is not limited to two patterns shown in FIGS. 4A and 4B. It is only necessary to provide the XY position 902 at at least one position on the presence range of the tissue slice 801 in the XY direction but, when the in-focus position is detected at positions of more small sections 901, it is possible to improve the accuracy of the focus map 904 described later. However, in order to shorten time, the in-focus position of the tissue slice 801 may be detected in the small sections 901 positioned at predetermined intervals among a plurality of the small sections 901. In the case of each of FIGS. 4A and 4B, the XY position 902 of the in-focus position is determined in every other small section 901.
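Sampling the in-focus detection positions in every other small section, as in FIGS. 4A and 4B, can be sketched as follows (illustrative names; section indices stand in for XY coordinates):

```python
def detection_positions(n_cols: int, n_rows: int, stride: int = 2) -> list[tuple[int, int]]:
    """Indices (col, row) of the small sections at which the in-focus
    position is detected, sampling every `stride`-th section to trade
    focus-map accuracy against detection time."""
    return [(c, r) for r in range(0, n_rows, stride)
                   for c in range(0, n_cols, stride)]
```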
  • As for the number of pixels of the imaging element of the imaging portion 104 used in the detection of the in-focus position, it is better to have many pixels from the viewpoint of accuracy, but more pixels take more time in the operation of the in-focus position described later, and hence it is desirable to limit the number of pixels to a bare minimum.
  • As the method, it is conceivable to use a method in which the pixel serving as the target is selected at the time of the operation of the in-focus position, or a method in which the pixel is designated and imaged by ROI (Region Of Interest) imaging at the time of the detection of the in-focus position.
  • FIG. 5 is a schematic view showing a detection method of the in-focus position in the image acquisition device 1 of the present embodiment. Based on the XY position 902 at which the in-focus position is detected calculated in each of FIGS. 4A and 4B, the detection of the in-focus position is performed. As shown in FIG. 5, in general, the imaging range 903 of the imaging device 100 in the Z direction, i.e., the depth of field is narrow as compared with the thickness of the tissue slice 801.
  • A method is used in which the Z position of the tissue slice 801 is calculated based on the evaluation values of the contrasts of the images, and the calculated position is determined as the in-focus position of the tissue slice 801.
  • the interval of the imaging range 903 in the Z direction when a plurality of the Z positions are imaged is preferably set to be smaller than the thickness of the tissue slice 801.
  • the assumed thickness of the tissue slice 801 is about 4 to 8 μm, and hence the step amount of the stage 101 in the Z direction is set to a value of not more than 4 to 8 μm.
  • The Z position giving the maximum value of the created evaluation function, or a range of not less than a pre-set threshold value, can be determined as the in-focus position of the tissue slice 801.
  • a variance value of the brightness of the image information is used as the evaluation value of the contrast, but the evaluation value is not limited thereto and, for example, it is conceivable to use a method in which a differential value of the image by the Brenner function is used.
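The two evaluation values mentioned above, brightness variance and the Brenner function, can be sketched in Python as follows. Images are modeled as 2-D lists of grayscale values and the function names are assumptions for illustration; this is not the embodiment's implementation:

```python
# Hedged sketch of two contrast evaluation values: brightness variance and
# the Brenner function (sum of squared differences between pixels two
# columns apart). Sharper images score higher under both.

def variance_score(img):
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def brenner_score(img):
    return sum(
        (row[x + 2] - row[x]) ** 2
        for row in img
        for x in range(len(row) - 2)
    )

def best_focus(z_positions, images, score=variance_score):
    """Return the Z position whose image maximizes the evaluation value."""
    return max(zip(z_positions, images), key=lambda zi: score(zi[1]))[0]

sharp = [[0, 0, 255, 255], [0, 0, 255, 255]]    # strong edge, high contrast
blurry = [[120, 130, 125, 128], [126, 124, 129, 127]]
print(best_focus([0.0, 0.5], [blurry, sharp]))  # picks the sharper image's Z
```

In practice the evaluation function is sampled at several Z positions and its maximum (or above-threshold range) is taken as described in the surrounding text.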
  • There are cases where the in-focus position of the tissue slice 801 cannot be calculated, such as the case where the maximum value cannot be calculated from the evaluation function of the calculated contrast.
  • FIGS. 7A to 7C are schematic views each showing a calculation method of the focus map in the image acquisition device 1 of the present embodiment.
  • the focus map 904 indicative of the in-focus position in the entire tissue slice 801 is calculated by the focus map calculation portion 422.
  • While FIG. 3 shows the positional relationship between the tissue slice 801 and the small section 901 two-dimensionally, FIG. 7A shows the positional relationship three-dimensionally.
  • FIG. 7B is a schematic view showing the focus map 904 in which in-focus positions Z1 to Z12, calculated at XY positions P1 to P12 at each of which the in-focus position is detected, are plotted on a three-dimensional space in which the small sections 901 are shown, and are interpolated.
  • the XY positions 902 at each of which the in-focus position is detected are provided at the intersection points of the lattice pattern in the small sections, but the position of the XY position 902 is not limited thereto.
  • FIG. 7C is a schematic view showing a focus map 904A corresponding to a target small section 901A based on the calculated focus map 904.
  • The in-focus positions Z45, Z47, and Z48 are calculated by interpolating the in-focus position information on adjacent XY positions. Thus, it is possible to determine the in-focus position in each small section 901 from the position information of the focus map 904.
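The interpolation of adjacent in-focus position information could, for instance, be done bilinearly from the four corner lattice points of a section (as in FIG. 7C). Bilinear interpolation is one plausible choice, assumed here for illustration; the text states only that adjacent in-focus position information is interpolated:

```python
# Hypothetical sketch: bilinear interpolation of the in-focus position
# inside one small section 901 from its four corner in-focus positions.

def interpolate_z(z00, z10, z01, z11, u, v):
    """Interpolate corner Z values at fractional position (u, v),
    where (0, 0) is the z00 corner and (1, 1) is the z11 corner."""
    return (z00 * (1 - u) * (1 - v)
            + z10 * u * (1 - v)
            + z01 * (1 - u) * v
            + z11 * u * v)

# Center of a section whose corners lie at 10, 12, 14, and 16 micrometres:
print(interpolate_z(10.0, 12.0, 14.0, 16.0, 0.5, 0.5))
```

At the corners the interpolation reproduces the measured values exactly, and interior points vary smoothly between them, which is what the focus map 904 requires.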
  • FIG. 8 is a schematic view showing a calculation method of the Z-stack range in the image acquisition device 1 of the present embodiment.
  • the Z-stack range in each small section 901 is calculated and determined by the Z-stack range calculation portion 423.
  • FIG. 8 shows only the focus map 904A corresponding to the target small section 901A in FIG. 7C and, in the range of the focus map 904A, the maximum value of the Z position is indicated by the in-focus position Z48 and the minimum value of the Z position is indicated by an in-focus position Z4.
  • A difference ΔZ between the maximum value and the minimum value of the Z position indicates the change amount of the Z position of the focus map 904.
  • the depth of field of the imaging device 100 is determined by the performance of the objective lens 103, and hence it is possible to calculate the change amount of the number of images required for the Z-stack based on the calculated difference ΔZ in the Z position and information on the depth of field.
  • the Z-stack range 905 serving as the reference is already determined from the thickness information of the tissue slice 801, and hence it is only necessary to change the number of images required for the Z-stack (the number of times of the imaging) in accordance with the change amount of the Z position of the focus map 904 in each small section 901.
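As a numerical illustration of this adjustment, the number of images per section could be computed as below. The function name, the step size equal to the depth of field, and the example values are assumptions for illustration:

```python
import math

def images_for_section(z_max, z_min, depth_of_field, reference_images):
    """Reference image count plus enough extra layers to cover the
    change amount dZ of the focus map in the section."""
    d_z = z_max - z_min
    extra = math.ceil(d_z / depth_of_field)
    return reference_images + extra

# Example: 0.5 um depth of field per layer, 5 reference layers, and a
# focus-map change of about 1.4 um -> 3 extra layers, 8 images in total
# (cf. the five-to-eight change in FIG. 9B).
print(images_for_section(z_max=11.4, z_min=10.0, depth_of_field=0.5,
                         reference_images=5))
```

Only the number of times of the imaging changes; the reference range itself stays fixed by the thickness information, as the text above describes.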
  • FIGS. 9A and 9B are schematic views each showing a change method of the Z-stack range in the image acquisition device 1 of the present embodiment.
  • FIG. 9A shows a state in which the imaging is performed without changing the Z-stack range in each small section 901, and corresponds to the conventional art.
  • In FIG. 9B, the Z-stack range is changed in accordance with the change amount of the Z position of the focus map 904 in each small section 901, which corresponds to the present embodiment.
  • In FIG. 9A, the Z-stack range is always constant, and hence a range of the tissue slice 801 that cannot be imaged is present.
  • In FIG. 9B, the number of images required for the Z-stack is changed based on the difference ΔZ in the Z position of the focus map 904, and hence it is possible to image the entire presence range of the tissue slice 801 in the Z direction.
  • FIGS. 9A and 9B will be described in greater detail.
  • the imaging range 903 is shown as one layer, and the Z-stack range 905 serving as the reference is shown as a range of five layers obtained by stacking five layers of the imaging ranges 903 in the Z direction.
  • FIG. 9B shows a state in which three layers of the imaging ranges 903 are added on the upper side of the five layers of the Z-stack range 905 serving as the reference correspondingly to the difference ΔZ in the Z position.
  • In FIG. 9B, by changing the number of layers of the Z-stack range from five to eight correspondingly to the difference ΔZ in the Z position, it is possible to image the entire presence range of the tissue slice 801 in the Z direction.
  • the focus map 904 may be any focus map as long as the focus map indicates the in-focus position of the tissue slice 801.
  • the focus map 904 may be a focus map that detects the in-focus position of the tissue slice 801 in the central area of the tissue slice 801 in the Z direction.
  • layers corresponding to the Z-stack range 905 serving as the reference may be added on the upper side and the lower side of the layer of the imaging range 903 corresponding to the difference ΔZ in the Z position.
  • When the number of layers of the Z-stack range 905 serving as the reference is six, three layers as the half thereof may be appropriately added on each of the upper side and the lower side of the layer of the imaging range 903 corresponding to the difference ΔZ in the Z position.
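The centered placement just described, with half of the reference layers above and half below the layers that cover ΔZ, could be sketched as follows. The parameter names and the step equal to the depth of field are assumptions for illustration:

```python
import math

def stack_positions(center_z, d_z, depth_of_field, reference_images):
    """Return the Z positions of the imaging layers, centered on the
    central in-focus position given by the focus map."""
    total = reference_images + math.ceil(d_z / depth_of_field)
    start = center_z - depth_of_field * (total - 1) / 2
    return [round(start + i * depth_of_field, 6) for i in range(total)]

# Six reference layers plus two layers covering dZ = 1.0 um, centered on
# a central in-focus position of 12.0 um:
print(stack_positions(center_z=12.0, d_z=1.0, depth_of_field=0.5,
                      reference_images=6))
```

The resulting layer list is symmetric about the focus-map center, matching the case where the focus map detects the in-focus position in the central area of the tissue slice in the Z direction.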
  • the Z-stack range 905 serving as the reference may also be determined by multiplying the thickness information of the tissue slice 801 by a predetermined coefficient.
  • Based on the Z-stack range 905 serving as the reference and the range of the in-focus position calculated by the focus map calculation portion 422, the Z-stack range in each small section 901 is determined. With this, it becomes possible to acquire the cross-sectional image of the tissue slice 801 having unevenness and an undulation in the optical axis direction reliably at high speed.
  • FIG. 10 is a schematic view showing the system configuration of the image acquisition device 1 of the present embodiment.
  • While the thickness information of the tissue slice 801 was inputted by the user in the first embodiment, in the second embodiment a process of detecting the thickness of the tissue slice 801 is performed in the image acquisition device 1.
  • the thickness detection of the tissue slice 801 is executed by the in-focus position calculation portion 421 by a method similar to the method of the detection of the in-focus position of the tissue slice 801.
  • the thickness information of the tissue slice 801 calculated by the in-focus position calculation portion 421 is transmitted to the Z-stack range calculation portion 423, and the thickness information is handled as the Z-stack range 905 serving as the reference.
  • FIG. 11 is a flowchart showing an image acquisition process by the image acquisition device 1 of the present embodiment.
  • Processes in Steps S101 to S104 in FIG. 11 are the same as those in Steps S101 to S104 in FIG. 2 in the first embodiment.
  • The step subsequent to Step S104 includes the process of detecting the thickness of the tissue slice 801. That is, in Step S113 in FIG. 11, it is determined whether or not the thickness of the tissue slice 801 is to be detected.
  • When the thickness is to be detected in Step S113, the process proceeds to Step S114 via Step S105a, and the detection of the thickness of the tissue slice 801 is performed concurrently with the detection of the in-focus position of the tissue slice 801; thereafter, the process proceeds to Step S107.
  • The details of the detection and the calculation method of the thickness of the tissue slice 801 will be described later.
  • Otherwise, the process proceeds through Step S105b, Step S106, and Step S107 in this order. Note that each of the processes in Step S105a and Step S105b is the same as that in Step S105 in FIG. 2.
  • the subsequent processes are the same as those in the first embodiment, and hence the description thereof will be omitted.
  • the detection and the calculation method of the thickness of the tissue slice 801 by the image acquisition device 1 of the present embodiment will be described.
  • The XY position at which the thickness of the tissue slice 801 is detected is set at the same position as that of the XY position 902 at which the in-focus position is detected shown in FIGS. 4A and 4B.
  • Since the tissue slice 801 is, in general, sliced almost uniformly, thickness unevenness in the tissue slice 801 is extremely small.
  • Accordingly, the required number of the XY positions at each of which the thickness is detected is small, and the thickness may be appropriately detected at at least one XY position in the presence range of the tissue slice 801. The detected thickness can be used in the image acquisition processes performed after the thickness of the tissue slice 801 is detected at at least one XY position in the presence range of the tissue slice 801, and hence a negative determination is made in Step S113 thereafter.
  • FIG. 12 is a schematic view showing the calculation method of the thickness of the tissue slice 801 by the image acquisition device 1 of the present embodiment.
  • the thickness of the tissue slice 801 is calculated by using the evaluation value of the contrast calculated from the image information at the different Z positions by the in-focus position calculation portion 421.
  • FIG. 12 shows a state in which the evaluation function is calculated from images imaged at three different Z positions ZA, ZB, and ZC.
  • As the calculation method of the thickness of the tissue slice 801, a method in which the range of not less than a pre-set threshold value in the graph in FIG. 12 is estimated as the thickness of the tissue slice 801 is used.
  • The step imaging is performed in a state in which the acquisition interval is shorter than the interval at the time of the detection of the in-focus position. With this, it is possible to perform the detection and the calculation of the thickness of the tissue slice 801 by using the image acquisition device 1.
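The threshold-based thickness estimation of FIG. 12 can be sketched as follows. The contrast evaluation value is sampled at finely stepped Z positions, and the extent of the Z range whose evaluation value is at or above a pre-set threshold is taken as the slice thickness. The sample values and the threshold below are illustrative assumptions:

```python
# Hedged sketch of the FIG. 12 thickness estimation: the thickness is the
# extent of the Z range where the contrast evaluation value is at or above
# a pre-set threshold.

def estimate_thickness(z_positions, scores, threshold):
    in_range = [z for z, s in zip(z_positions, scores) if s >= threshold]
    if not in_range:
        return 0.0  # thickness could not be determined from these samples
    return max(in_range) - min(in_range)

z_um = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
scores = [0.1, 0.3, 0.9, 1.0, 0.95, 0.8, 0.2, 0.1]
print(estimate_thickness(z_um, scores, threshold=0.5))  # 5.0 - 2.0 = 3.0
```

A finer step between Z samples, as the text recommends, directly improves the resolution of this estimate.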
  • the thickness of the tissue slice 801 is calculated by the in-focus position calculation portion 421 similarly to the case where the in-focus position of the tissue slice 801 is detected, but the present embodiment is not limited thereto.
  • Thickness detection means for detecting the thickness of the tissue slice 801 may also be provided additionally in the image acquisition device 1.
  • the image acquisition device 1 of the present embodiment is configured in the same manner as the image acquisition device 1 shown in FIG. 10 described in the second embodiment.
  • In the second embodiment, the thickness information of the tissue slice 801 calculated by the in-focus position calculation portion 421 has been transmitted to the Z-stack range calculation portion 423, and the Z-stack range 905 serving as the reference has been calculated.
  • In the present embodiment, the processing method after the actual imaging is performed by the imaging device 100 based on the Z-stack range in each small section 901 calculated by the Z-stack range calculation portion 423 is different.
  • correction of the Z-stack range in the small section 901 that is imaged next is performed based on the image information imaged by the imaging device 100.
  • FIG. 13 is a flowchart showing an image acquisition process by the image acquisition device 1 of the present embodiment.
  • a process of calculating the focus range of the tissue slice 801 from the Z-stack image, and correcting the Z-stack range in the next small section 901 based on the focus range is performed.
  • Processes of Steps S101 to S112 in FIG. 13 are the same as those of Steps S101 to S112 in FIG. 2, and hence the description thereof will be omitted.
  • In Step S115 in FIG. 13, the Z-stack is actually performed in a given small section 901, and the focus range of the tissue slice 801 is calculated from the acquired image information.
  • The focus range of the tissue slice 801, i.e., the presence range of the tissue slice 801 in the Z direction, is re-calculated by the in-focus position calculation portion 421 by using a method similar to the calculation methods of the in-focus position of the tissue slice 801 and the thickness of the tissue slice 801 described in the first and second embodiments.
  • the Z-stack range set in Step S109 is based on the focus map 904, and hence it is feared that the Z-stack range is displaced from the range where the tissue slice 801 is actually positioned.
  • the Z-stack image is imaged by performing the step movement in the Z direction at an interval of not more than the depth of field, and hence it is possible to calculate the focus range of the tissue slice 801 with excellent accuracy.
  • The focus range information on the tissue slice 801 calculated from the image information acquired by actually performing the Z-stack is reflected in the Z-stack range calculated by the Z-stack range calculation portion 423 in Step S109.
  • That is, in Step S115, a process of correcting the Z-stack range set in Step S109 is performed.
  • This correction process is a process of moving the Z-stack range set in Step S109 upward or downward and extending the Z-stack range (adding the layer of the imaging range 903).
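This move-and-extend correction could be sketched as below. The function takes the planned range and the focus range actually measured in the section just imaged, and returns a range that fully contains both; treating the correction as a simple union of the two ranges is an assumption made for illustration, since the text says only that the range is moved upward or downward and extended by adding imaging layers:

```python
# Hypothetical sketch of the Step S115 correction: adjust the planned
# Z-stack range so it covers the focus range measured from the actual
# Z-stack images.

def correct_range(planned, measured):
    """planned, measured: (z_low, z_high) tuples; return a corrected range
    that fully contains the measured focus range."""
    low = min(planned[0], measured[0])
    high = max(planned[1], measured[1])
    return (low, high)

# The planned range misses the top of the slice by 1 um:
print(correct_range(planned=(10.0, 14.0), measured=(11.0, 15.0)))
```

The corrected bounds can then be applied either to the next small section or, as noted below, to re-imaging the section in which the Z-stack was actually performed.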
  • the correction of the Z-stack range in the small section that is imaged next is performed in Step S115, but the present embodiment is not limited thereto, and the correction may also be performed on the small section in which the Z-stack has been actually performed.
  • According to the present embodiment, it is possible to increase the accuracy of the Z-stack in the small section 901 that is imaged next to a level higher than those of the first and second embodiments.
  • There are cases where the tissue slice 801 is bent or folded in the specimen 8.
  • Such a specimen 8 cannot be handled with the calculation method of the Z-stack range that uses only the change amount information of the focus map 904 described in each of the first and second embodiments.
  • However, by correcting the Z-stack range from the image obtained by actually performing the Z-stack, it is possible to handle such an irregular specimen 8.
  • Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
  • the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
  • the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
  • the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read-only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.

Abstract

An image acquisition device includes a detection unit that detects an in-focus position in the optical axis direction at each of a plurality of detection positions of the subject; a creation unit that creates a focus map from the in-focus position; a calculation unit that calculates a range in the optical axis direction in which the in-focus position is positioned in a target section from the focus map; a setting unit that sets a reference imaging range serving as a reference of the imaging range in the optical axis direction; and a determination unit that determines the imaging range in the optical axis direction for each section based on the reference imaging range set by the setting unit and the range of the in-focus position calculated by the calculation unit.

Description

IMAGE ACQUISITION DEVICE, IMAGE ACQUISITION METHOD, AND PROGRAM
The present invention relates to an image acquisition device such as a microscope, an image acquisition method, and a program.
In recent years, in the field of pathological diagnosis or clinical research, a virtual slide system, which allows an improvement in efficiency of data management and remote diagnosis by acquiring a microscope image of a pathological sample such as a tissue slice extracted from a human body as a digital image, attracts attention. The pathological sample as a subject of the system is a slide produced by sandwiching a tissue slice thinly sliced so as to have a size of several μm to several tens of μm between a slide glass and a cover glass and fixing the tissue slice using an encapsulant.
On the other hand, high resolution is required of an objective lens of a pathological observation microscope, and there is a concern that the depth of field of the lens becomes shallow when the lens is designed based on a high NA (Numerical Aperture) side. The depth of field of an objective lens commonly used in the pathological observation microscope is about 0.5 to 1 μm, and is small as compared with the thickness of the tissue slice. Accordingly, in the case where an image of the entire thickness range of the tissue slice is acquired, a cross-sectional image is acquired by repeating imaging while changing a relative position between the focus of the objective lens and the subject in an optical axis direction.
In addition, the tissue slice as the subject has unevenness and an undulation on its surface, and hence it is necessary to perform imaging while constantly changing an imaging start position in the optical axis direction.
As means for implementing the above operation, a microscope described in PTL 1 includes a light detection portion that estimates the surface position of a sample and detects light from the sample over a predetermined range with the surface position used as a reference. With this, it is possible to cope with the unevenness and the undulation of the sample and acquire the cross-sectional image of the sample.
A microscope described in PTL 2 includes a movement mechanism that moves an imaging element such that the imaging surface of the imaging element approaches a focal plane in accordance with an inclination of a focal curved plane of an image of a sample. With this, it is possible to cope with the unevenness and the undulation of the sample and acquire a focused image of the sample.
PTL 1: Japanese Patent Application Laid-open No. 2012-203048
PTL 2: Japanese Patent Application Laid-open No. 2012-108476
In order to increase the speed of image acquisition, it is conceivable to use a method in which the angle of view of the objective lens of the pathological observation microscope is widened but, with this, influences of the unevenness and the undulation of the tissue slice as the subject become more conspicuous. Accordingly, it is feared that a problem arises in that image information required in the entire thickness range of the subject is omitted or the focused image of the subject cannot be acquired.
As a method for solving the problem, it is conceivable to use a method that performs imaging while constantly changing an acquisition range of the cross-sectional image of the subject in the optical axis direction. However, in the microscope described in each of PTL 1 and 2, thickness information of the subject is not considered and it is difficult to reliably acquire the focused image required for the entire thickness range of the subject.
The present invention has been achieved in view of the circumstances described above, and an object thereof is to provide an image acquisition device capable of acquiring the cross-sectional image of the subject having the unevenness and the undulation in the optical axis direction reliably at high speed.
To achieve the above object, the present invention adopts the following configuration. In other words, the present invention is an image acquisition device acquiring images of a plurality of layers in a subject by dividing the subject into a plurality of sections and imaging the subject a plurality of times using an imaging unit while changing a focal position in an optical axis direction in an imaging range in the optical axis direction in each section, the image acquisition device comprising: a detection unit that detects an in-focus position in the optical axis direction at each of a plurality of detection positions of the subject; a creation unit that creates a focus map from the in-focus position at each of the detection positions detected by the detection unit; a calculation unit that calculates a range in the optical axis direction in which the in-focus position is positioned in a target section from the focus map created by the creation unit; a setting unit that sets a reference imaging range serving as a reference of the imaging range in the optical axis direction; and a determination unit that determines the imaging range in the optical axis direction for each section based on the reference imaging range set by the setting unit and the range of the in-focus position calculated by the calculation unit.
According to the present invention, it becomes possible to provide the image acquisition device capable of acquiring the cross-sectional image of the subject having the unevenness and the undulation in the optical axis direction reliably at high speed.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
FIG. 1 is a schematic view showing a system configuration of an image acquisition device of a first embodiment. FIG. 2 is a flowchart showing an image acquisition process in the first embodiment. FIG. 3 is a schematic view showing a calculation method of an XY presence range in the first embodiment. FIGS. 4A and 4B are schematic views each showing a calculation method of an XY position at which an in-focus position is detected in the first embodiment. FIG. 5 is a schematic view showing a detection method of the in-focus position in the first embodiment. FIG. 6 is a schematic view showing a calculation method of the in-focus position in the first embodiment. FIGS. 7A to 7C are schematic views each showing a calculation method of a focus map in the first embodiment. FIG. 8 is a schematic view showing a calculation method of a Z-stack range in the first embodiment. FIGS. 9A and 9B are schematic views each showing a change method of the Z-stack range in the first embodiment. FIG. 10 is a schematic view showing a system configuration of an image acquisition device of a second embodiment. FIG. 11 is a flowchart showing an image acquisition process in the second embodiment. FIG. 12 is a schematic view showing a calculation method of a thickness of a tissue slice in the second embodiment. FIG. 13 is a flowchart showing an image acquisition process in a third embodiment.
Hereinbelow, with reference to the drawings, embodiments of the present invention will be illustratively described in detail. It should be noted that dimensions, materials, and shapes of components described in the following embodiments and the relative positions between these components may appropriately be changed depending upon a configuration of a device to which the present invention is applied and various conditions, and the scope of the present invention is not intended to be limited to the following embodiments.
<first embodiment>
Hereinbelow, a first embodiment will be described.
(with regard to device configuration)
FIG. 1 is a schematic view showing a system configuration of an image acquisition device 1 of the present embodiment.
The image acquisition device 1 of the present embodiment is a device that acquires images of a plurality of layers in a subject by dividing the subject into a plurality of sections and imaging the subject a plurality of times using imaging means while changing a focal position in an optical axis direction in an imaging range in the optical axis direction in each section.
The image acquisition device 1 has an imaging device 100, a wide-area imaging device 200, a control portion 300, an imaging range calculation portion 400, an image processing portion 5, an image storage portion 6, and an image display portion 7.
The imaging device 100 is a device for imaging an enlarged image of a slide (hereinafter referred to as a specimen) 8 in which a cell or a tissue slice is encapsulated, and an example of the imaging device 100 includes what is called a digital microscope or a virtual slide scanner. As shown in FIG. 5, the specimen 8 includes a tissue slice 801 as a subject serving as an imaging target, a slide glass 802, a cover glass 803, and an encapsulant 804 for fixing the tissue slice 801.
The wide-area imaging device 200 is a device for imaging the entire image of the specimen 8, and the acquired image is used for production of a thumbnail image and position calculation of a small section 901 described later.
The control portion 300 performs control of various processes in the image acquisition device 1. Specifically, the control portion 300 is configured by a computer including a CPU, a memory, and a storage device, and the CPU executes and processes a stored program and the image acquisition device 1 is thereby controlled.
The imaging range calculation portion 400 performs a process of determining an imaging position and the number of images based on image information on the specimen 8 imaged by the imaging device 100 and the wide-area imaging device 200.
The image processing portion 5 performs digital correction and image synthesis of the image imaged by the imaging device 100. With regard to the digital correction, it is conceivable to use methods such as color tone correction and gamma correction based on an acquired spectral transmittance, and noise processing.
The image storage portion 6 performs storage and accumulation of image data imaged by the imaging device 100, and a storage device such as a hard disk can be used as the image storage portion 6. The image display portion 7 is a portion for outputting, displaying, and viewing the image stored in the image storage portion 6 and, in the present embodiment, a display is shown as an example of the image display portion 7.
The imaging device 100 has a stage 101, an illumination portion 102, an objective lens 103, and an imaging portion 104. Note that, in the present embodiment, a description will be given on the assumption that a direction orthogonal to the optical axis direction of the objective lens 103 is an XY direction and the optical axis direction of the objective lens 103 is a Z direction, as shown in FIG. 1. The imaging device 100 images a cross-sectional image of the specimen 8 in the entire presence range of the specimen 8 using the imaging portion 104 while moving the stage 101 by step movement in the Z direction. In the present embodiment, this imaging method is referred to as Z-stack.
The stage 101 has a mechanism for holding the specimen 8 and performing position control of the specimen 8, and is capable of moving the specimen 8 in the XY direction and the Z direction. Note that, in the present embodiment, a drive mechanism that uses a ball screw and a piezoelectric element is used as the stage 101.
The illumination portion 102 has a light source for illuminating the specimen 8 and an optical system for concentrating light onto the specimen 8. Note that, in the present embodiment, a halogen lamp or an LED (Light Emitting Diode) is used as the light source of the illumination portion 102.
The objective lens 103 forms an image of light emitted by the illumination portion 102 and having passed through the specimen 8 on a light receiving surface of an imaging element included in the imaging portion 104. Note that, in the present embodiment, a lens having a field of view on an object side of not less than 1 mm and having a depth of field corresponding to 0.5 μm is used as the objective lens 103.
The imaging portion 104 has the imaging element that performs photoelectric conversion and a transmission cable that performs input and output of an electrical signal. The imaging element performs the photoelectric conversion of light received via the objective lens 103 and thereby outputs data on the image of the specimen 8 as an electrical signal. Note that, in the present embodiment, an image sensor as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor) is used as the imaging element.
The wide-area imaging device 200 has a wide-area imaging portion 201 and a specimen placement portion 202. The wide-area imaging portion 201 images the entire image of the specimen 8. The specimen placement portion 202 is a stand for placing the specimen 8. Specifically, the wide-area imaging portion 201 has a camera consisting of an imaging element and a lens, and an illumination portion for illuminating the placed specimen 8. Note that, as an illumination method of the specimen 8, dark field illumination that uses ring illumination provided around the lens is performed. The positions of the wide-area imaging portion 201 and the specimen placement portion 202 are adjusted such that the entire specimen 8 falls within the imaging range of the wide-area imaging portion 201.
The control portion 300 has a control command portion 301, a wide-area imaging control portion 302, a stage control portion 303, an illumination control portion 304, and an imaging control portion 305.
The control command portion 301 collectively controls the individual control portions in the entire image acquisition device 1, and governs the drive timing of each control portion.
The wide-area imaging control portion 302 performs ON/OFF control related to the illumination portion included in the wide-area imaging portion 201 and the exposure of the imaging element. In addition, the wide-area imaging control portion 302 transmits the image information on the specimen 8 imaged by the wide-area imaging portion 201 to an XY presence range calculation portion 410 described later.
The stage control portion 303 performs the position control of the specimen 8 by moving the stage 101 in the XY direction and the Z direction based on a target movement amount of the stage 101 and position information thereon inputted from the control command portion 301.
The illumination control portion 304 performs control of the entire illumination portion 102 such as the ON/OFF control of the light source of the illumination portion 102, diaphragm adjustment, and replacement of a color filter.
The imaging control portion 305 performs the ON/OFF control related to the exposure of the imaging element included in the imaging portion 104, and transmits the image information on the specimen 8 imaged by the imaging portion 104 to the image storage portion 6. Note that the stage control portion 303, the illumination control portion 304, and the imaging control portion 305 are controlled by the control command portion 301 such that the step movement and the imaging timing are synchronized.
The imaging range calculation portion 400 has an XY presence range calculation portion 410 and a Z imaging range calculation portion 420. The Z imaging range calculation portion 420 has an in-focus position calculation portion 421, a focus map calculation portion 422, and a Z-stack range calculation portion 423.
The XY presence range calculation portion 410 calculates the presence range of the specimen 8 in the XY direction based on the image information on the specimen 8 transmitted from the wide-area imaging control portion 302. Specifically, the range in the XY direction over which the tissue slice 801 is present in the specimen 8 is estimated and calculated as the presence range, and the detail thereof will be described later.
In addition, the XY presence range calculation portion 410 performs a process of dividing the calculated presence range of the tissue slice 801 in the XY direction into a plurality of the small sections 901. One small section 901 corresponds to an imaging range 903 of the imaging device 100 in the XY direction. The tissue slice 801 is present in a large range as compared with the imaging range 903 of the imaging device 100 in the XY direction, and hence, in order to image the entire tissue slice 801, the presence range of the tissue slice 801 is divided into a plurality of the small sections 901, the small sections 901 are imaged, and the images are synthesized.
XY position information on each small section 901 is transmitted to the control command portion 301 from the XY presence range calculation portion 410. In the control command portion 301, a drive step amount and movement coordinates of the stage 101 are calculated based on the XY position information on each small section 901, and are outputted to the stage control portion 303.
The in-focus position calculation portion 421 calculates a Z position of the tissue slice 801, i.e., the in-focus position of the tissue slice 801 by associating the image information on the specimen 8 accumulated in the image storage portion 6 with the position information on the stage 101 in the XY and Z directions and performing an operation thereon. The calculated in-focus position is associated with the XY position information on the stage 101 and is transmitted to the focus map calculation portion 422, and the detail thereof will be described later.
The focus map calculation portion 422 calculates and creates a focus map 904 in which in-focus position information on the tissue slice 801 is plotted three-dimensionally based on the XY position of the specimen 8 and the in-focus position information thereon transmitted from the in-focus position calculation portion 421. In addition, the focus map calculation portion 422 calculates a change amount of the Z position of the focus map 904 in each small section 901 (the range in the optical axis direction where the in-focus position is positioned in a target section) based on the position information of the calculated focus map 904. The details of the focus map 904 and a calculation method of the change amount of the Z position of the focus map 904 will be described later. Z position information based on the calculated focus map 904 and the change amount of the Z position of the focus map 904 are transmitted to the Z-stack range calculation portion 423 together with the XY position information on the corresponding small section 901.
The Z-stack range calculation portion 423 calculates a Z-stack range (the imaging range in the optical axis direction) in each small section (the target section) 901 based on information on a Z-stack range 905 serving as an inputted and set reference and information inputted from the focus map calculation portion 422. As the information on the Z-stack range (a reference imaging range) 905 serving as the reference, thickness information of the tissue slice 801 of the specimen 8 is used in the present embodiment.
Note that, as means for inputting the thickness information of the tissue slice 801, the following two methods can be shown as examples: a method in which known thickness information that is measured in advance is inputted by a user (the first embodiment), and a method in which the thickness information of the tissue slice 801 is detected within the image acquisition device 1 itself (the second embodiment). The details thereof will be described later.
(with regard to device process)
FIG. 2 is a flowchart showing an image acquisition process by the image acquisition device 1 of the present embodiment.
In Step S101 in FIG. 2, wide-area imaging of the specimen 8 is performed. First, the specimen 8 is placed on the specimen placement portion 202 by a user. A pushing portion is provided such that the specimen 8 can be positioned in the XY direction relative to the specimen placement portion 202 when the specimen 8 is placed. Instead of the manual operation by the user, it is also conceivable to transfer the specimen 8 from a cassette in which the specimen 8 is accommodated using an automatic hand or the like and to place it. FIG. 1 shows a state in which the specimen 8 is placed on the specimen placement portion 202, but the wide-area imaging may also be performed in a state in which the movable range of the stage 101 is increased and the specimen 8 is placed on the stage 101. After the placement of the specimen 8, the entire specimen 8 is imaged by the wide-area imaging portion 201.
The specimen 8 of which the imaging is completed is collected from the specimen placement portion 202, and is then placed on and fixed to the stage 101. Similarly to the case where the specimen 8 is placed on the specimen placement portion 202, the specimen 8 is positioned on the stage 101. In addition, in order to prevent a position displacement of the specimen 8 during the movement of the stage 101, the specimen 8 is fixed using a mechanical system or a vacuum.
During a time period from the placement of the specimen 8 on the stage 101 to start of the imaging by the imaging portion 104, Steps S102 and S103 in FIG. 2 are executed in parallel. In Step S102, the presence range of the tissue slice 801 in the XY direction is calculated by the XY presence range calculation portion 410 based on entire image information on the specimen 8 imaged in Step S101. In addition, a process of determining the calculated presence range of the tissue slice 801 as the imaging range and dividing the presence range of the tissue slice 801 into a plurality of the small sections 901 is executed in this Step. The detail thereof will be described later.
In Step S103 in FIG. 2, an XY position (detection position) 902 used when the in-focus position of the tissue slice 801 is detected is calculated by the XY presence range calculation portion 410 based on the position information on the small section 901 calculated in Step S102. In order to increase accuracy of the focus map 904 calculated in Step S108 executed later, it is desirable to perform the in-focus position detection at many XY positions 902, and the detail thereof will be described later. In addition, also in the case where the thickness of the tissue slice 801 is detected in the second embodiment described later, the XY position used when the thickness of the tissue slice 801 is detected is calculated in Step S103.
In Step S104 in FIG. 2, the stage 101 is controlled such that the XY position 902 for in-focus position detection calculated in Step S103 matches the imaging range 903 of the imaging device 100 in the XY direction.
In Step S105 in FIG. 2, an image of the specimen 8 is captured by the imaging portion 104. In the present embodiment, as a detection method of the in-focus position of the tissue, a method in which images of a plurality of different Z positions at a given XY position are acquired and the in-focus position is estimated from the image information is used. The stage 101 is controlled by the stage control portion 303 so as to be moved by a specific amount at each step by the step movement, and the image acquisition is performed by repeating the movement and the imaging; the detail thereof will be described later.
Incidentally, the thickness of the encapsulant 804 used to encapsulate the tissue slice 801 in the specimen 8 is several tens to several hundreds of μm. In contrast, the thickness of the tissue slice 801 is only about 4 to 8 μm; this range is extremely narrow, and it takes time to detect the Z position of the tissue slice 801. On the other hand, the thickness error of the slide glass 802 in the specimen 8 is about 0.3 mm, which is extremely large relative to the tissue slice 801, so that the Z position of the tissue slice 801 differs significantly from one specimen to another and its detection also takes time. To cope with this, for example, it is preferable to separately measure the Z position of the slide glass 802 or the cover glass 803 included in the specimen 8 using a laser displacement gauge or the like. In this case, it is conceivable to control the Z position of the stage 101 based on the measurement result and set the imaging range of the imaging portion 104 in the Z direction at the Z position of the surface of the slide glass 802 or the back surface of the cover glass 803. With this, it is possible to reduce the time required to detect the Z position of the tissue slice 801.
In Step S106 in FIG. 2, the Z position of the tissue slice 801 is calculated as the in-focus position based on the image information imaged in Step S105 by the in-focus position calculation portion 421. The in-focus position is calculated from evaluation values of contrasts of a plurality of imaged images, and the detail of a calculation method of the in-focus position will be described later.
In Step S107 in FIG. 2, it is determined whether or not movement to the next XY position 902 at which the in-focus position of the tissue slice 801 is detected is performed. In the case where the next XY position 902 is present, the process returns to Step S104 in FIG. 2, and the position control of the stage 101 is performed. In the case where the detection of the in-focus position of the tissue slice 801 is ended at all of the designated XY positions 902, the process moves to Step S108 in FIG. 2. The position control of the stage 101 in Step S110 may be performed in parallel with the process in each of Step S108 and Step S109.
The method in which the stage 101 is moved in the Z direction by the step movement and the imaging is performed has been described as the detection method of the Z position of the tissue slice 801, but the detection method is not limited thereto. For example, when the imaging device 100 in FIG. 1 is configured to be capable of simultaneously imaging a plurality of different Z positions at a given XY position, the step movement of the stage 101 in the Z direction is not necessary. As the method, for example, it is conceivable to divide the optical path of the objective lens 103 into a plurality of optical paths using a half mirror or a beam splitter, and to dispose a plurality of the imaging portions 104 such that images of different Z positions are formed in the individual optical paths. In addition, by comparing the images of a plurality of imaging elements that image different Z positions with this configuration, it is possible to constantly estimate the Z position of the tissue slice 801 at the XY position where the imaging is performed, i.e., where the in-focus position lies. With this, for example, by scanning and imaging the specimen 8 in the XY direction while simultaneously correcting the Z position of the stage 101 so as to follow the Z position of the tissue slice 801, it is possible to detect the Z position of the tissue slice 801 at high speed, though the detection accuracy of the Z position may be reduced.
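The follow-up correction described above can be sketched as follows. This is a hypothetical illustration, not part of the embodiment: it assumes two imaging elements focused one step above and one step below the current stage Z, and it nudges the stage toward whichever focal plane shows the higher contrast score.

```python
# Hypothetical sketch of following the Z position of the tissue slice while
# scanning in XY, assuming two sensors focused above and below the current Z.
def z_correction(score_upper, score_lower, step):
    """Return the stage Z correction for the next scan position based on the
    contrast scores of the upper and lower focal planes (assumed inputs)."""
    if score_upper > score_lower:
        return +step   # slice looks sharper in the upper plane: move up
    if score_lower > score_upper:
        return -step   # sharper in the lower plane: move down
    return 0.0         # balanced: the in-focus position lies between the planes
```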
In Step S108 in FIG. 2, the focus map 904 is calculated by the focus map calculation portion 422 based on the in-focus position information on the tissue slice 801 calculated in Step S106. The in-focus positions are calculated for the individual small sections 901 at a plurality of positions calculated in Step S102 in FIG. 2 and the focus map 904 of the entire tissue slice 801 is calculated by mapping the in-focus positions, and the detail thereof will be described later. In addition, at the same time, in Step S108, the change amount of the Z position of the focus map 904 in each small section 901 is calculated. This will be also described in detail later.
In Step S109 in FIG. 2, the Z-stack range in each small section 901 is calculated by the Z-stack range calculation portion 423. In the present embodiment, an input value by the user or the thickness information of the tissue slice 801 is inputted to the Z-stack range calculation portion 423 via the control command portion 301 as the Z-stack range 905 serving as the reference and is set, and the detail thereof will be described later. On the other hand, in the second embodiment, the thickness information of the tissue slice 801 calculated in the in-focus position calculation portion 421 is inputted as the Z-stack range 905 serving as the reference. The Z-stack range in each small section 901 is calculated based on the Z-stack range 905 serving as the reference and the change amount of the Z position of the focus map 904 transmitted from the focus map calculation portion 422.
In Step S110 in FIG. 2, the position control of the stage 101 is performed based on the XY position information on the small section 901 serving as the imaging target and the information of the focus map 904. The position control of the stage 101 is performed by the stage control portion 303 such that the XY position of the small section 901 serving as the imaging target and the in-focus position match the imaging range 903 of the imaging device 100.
In Step S111 in FIG. 2, the imaging by the imaging portion 104 is performed while the stage 101 is moved in the Z direction by the specific amount at each step by the step movement based on Z-stack range information on the small section 901 serving as the imaging target.
After the end of the imaging, in Step S112 in FIG. 2, it is determined whether or not movement to the next small section 901 is performed. In the case where the next small section 901 is present, the process returns to Step S110 in FIG. 2, and the position control of the stage 101 is performed for the next small section 901. In the case where the imaging in all of the small sections 901 is ended, the imaging of the specimen 8 is ended and, in the case where the next specimen 8 is present, the specimen 8 is replaced and the process of the next specimen 8 is started.
(with regard to calculation method of XY presence range)
FIG. 3 is a schematic view showing a calculation method of the XY presence range in the image acquisition device 1 of the present embodiment. FIG. 3 shows a state in which the tissue slice 801 of the specimen 8 is divided into a plurality of the small sections 901.
First, the entire image of the specimen 8 is imaged by the wide-area imaging portion 201. Based on the imaged image, the presence range of the tissue slice 801 in the XY direction is calculated by the XY presence range calculation portion 410.
As the calculation method of the presence range of the tissue slice 801 in the XY direction, the following method is adopted in the present embodiment. The method is based on the evaluation value of the contrast of the image information on the specimen 8, and the range where the contrast is high is determined as the presence range of the tissue slice 801. The high contrast range is determined by performing binarization on the image information on the specimen 8. The cell or the tissue slice in the specimen 8 is usually dyed so as to be distinguished from the surrounding encapsulant 804. Accordingly, it is possible to determine the high contrast range as the presence range of the tissue slice 801. In some rare cases, a foreign matter or dust adhering to the specimen 8 may be determined as the presence range of the tissue slice; in such cases, the edge portion of the foreign matter or dust may be extracted from the image information on the entire image of the specimen 8 and the corresponding region may be excluded from the imaging target.
Next, based on the calculated presence range information on the tissue slice 801 in the XY direction, the presence range of the tissue slice 801 is divided into a plurality of the small sections 901 by the XY presence range calculation portion 410. With regard to the process of dividing the presence range of the tissue slice 801 into a plurality of the small sections 901, in consideration of positioning accuracy of the stage 101 and image synthesis by stitching adjacent images, it is desirable to provide an overlapping area between adjacent images.
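The two steps above — binarizing the wide-area image to find the high-contrast region, then dividing it into small sections with an overlapping margin for stitching — can be sketched as follows. The function names, the threshold value, and the tile sizes are illustrative assumptions, not values from the embodiment.

```python
# Illustrative sketch (assumed names and values) of the XY presence range
# calculation: binarize the wide-area image, take the bounding box of the
# high-contrast (dyed) region, and tile it with overlapping small sections.
import numpy as np

def presence_bbox(gray, threshold=128):
    """Return the bounding box (x0, y0, x1, y1) of the region darker than
    the threshold, assumed to be the dyed tissue slice."""
    ys, xs = np.nonzero(gray < threshold)
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

def divide_into_sections(bbox, tile_w, tile_h, overlap):
    """Divide the bounding box into tiles of one imaging range each,
    stepping by (tile size - overlap) so adjacent tiles share a margin."""
    x0, y0, x1, y1 = bbox
    return [(tx, ty, tx + tile_w, ty + tile_h)
            for ty in range(y0, y1, tile_h - overlap)
            for tx in range(x0, x1, tile_w - overlap)]
```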
(with regard to calculation method of XY position at which in-focus position is detected)
FIGS. 4A and 4B are schematic views each showing a calculation method of the XY position at which the in-focus position is detected in the image acquisition device 1 of the present embodiment. Each of FIGS. 4A and 4B shows the XY position 902 at which the in-focus position of the tissue slice 801 is detected. In both cases, the XY position 902 at which the in-focus position is detected is calculated based on the calculated position information on the small section 901.
The central position of each small section 901 obtained by the division is used as the XY position 902 at which the in-focus position is detected in FIG. 4A, and an intersection point of a lattice pattern of the small sections 901 obtained by the division is used as the XY position 902 at which the in-focus position is detected in FIG. 4B.
Herein, the XY position 902 is not limited to two patterns shown in FIGS. 4A and 4B. It is only necessary to provide the XY position 902 at at least one position on the presence range of the tissue slice 801 in the XY direction but, when the in-focus position is detected at positions of more small sections 901, it is possible to improve the accuracy of the focus map 904 described later. However, in order to shorten time, the in-focus position of the tissue slice 801 may be detected in the small sections 901 positioned at predetermined intervals among a plurality of the small sections 901. In the case of each of FIGS. 4A and 4B, the XY position 902 of the in-focus position is determined in every other small section 901.
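The thinning described above (FIG. 4A style, taking the centre of every other small section) might be sketched as follows; the row-major ordering of the section list and the `every` parameter are assumptions made for illustration.

```python
# Illustrative sketch: pick the centre of every other small section as the
# XY position 902 for in-focus detection (assumed row-major section list).
def detection_positions(sections, every=2):
    """sections: list of (x0, y0, x1, y1) small sections.
    Returns the centre XY of every `every`-th section."""
    return [((x0 + x1) / 2, (y0 + y1) / 2)
            for i, (x0, y0, x1, y1) in enumerate(sections) if i % every == 0]
```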
With regard to the number of pixels of the imaging element of the imaging portion 104 used in the detection of the in-focus position, a larger number of pixels is better from the viewpoint of accuracy, but it increases the time required for the in-focus position operation described later, and hence it is desirable to limit the number of pixels to a bare minimum. As the method, it is conceivable to select only the target pixels at the time of the in-focus position operation, or to designate and image only those pixels by ROI (Region Of Interest) imaging at the time of the detection of the in-focus position.
(with regard to detection and calculation method of in-focus position)
FIG. 5 is a schematic view showing a detection method of the in-focus position in the image acquisition device 1 of the present embodiment. Based on the XY position 902 at which the in-focus position is detected calculated in each of FIGS. 4A and 4B, the detection of the in-focus position is performed. As shown in FIG. 5, in general, the imaging range 903 of the imaging device 100 in the Z direction, i.e., the depth of field is narrow as compared with the thickness of the tissue slice 801. Accordingly, a method in which a plurality of different Z positions are imaged by the imaging device 100, the Z position of the tissue slice 801 is calculated based on the evaluation values of the contrasts of the images, and the position is determined as the in-focus position of the tissue slice 801 is used. Note that the interval of the imaging range 903 in the Z direction when a plurality of the Z positions are imaged is preferably set to be smaller than the thickness of the tissue slice 801. The assumed thickness of the tissue slice 801 is about 4 to 8 μm, and hence the step amount of the stage 101 in the Z direction is set to a value of not more than 4 to 8 μm.
FIG. 6 is a schematic view showing the calculation method of the in-focus position in the image acquisition device 1 of the present embodiment. As the calculation method of the in-focus position of the tissue slice 801, specifically, the in-focus position is estimated by using the evaluation value of the contrast of the image obtained by imaging the specimen 8 by the in-focus position calculation portion 421.
In the present embodiment, as shown in FIG. 6, by interpolating the evaluation values of the contrasts of the individual images imaged at the different Z positions, an evaluation function having the position in the Z direction and the evaluation value of the contrast as two axes is created, and the in-focus position is estimated. FIG. 6 shows an example in which the in-focus position is calculated from images imaged at three positions ZA, ZB, and ZC as the Z positions of the stage 101. In this case, the maximum value of the created evaluation function or a range of not less than a pre-set threshold value can be determined as the in-focus position of the tissue slice 801. Note that, in the present embodiment, a variance value of the brightness of the image information is used as the evaluation value of the contrast, but the evaluation value is not limited thereto and, for example, it is conceivable to use a method in which a differential value of the image by the Brenner function is used.
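The evaluation above can be sketched as follows: the variance of brightness (the evaluation value used in the present embodiment) or a Brenner-style gradient scores each image, and the peak of a parabola fitted through the scored Z positions (ZA, ZB, and ZC in FIG. 6) gives the estimated in-focus position. The parabolic fit is one plausible form of the interpolation; the embodiment does not fix a particular interpolation method.

```python
# Sketch of the in-focus estimation: score each image's contrast, then take
# the peak of a parabola fitted through the (Z, score) samples.
import numpy as np

def contrast_variance(img):
    """Evaluation value used in the embodiment: variance of brightness."""
    return float(np.var(img))

def brenner(img):
    """Alternative evaluation value: Brenner gradient (sum of squared
    differences between pixels two columns apart)."""
    d = img[:, 2:].astype(float) - img[:, :-2].astype(float)
    return float(np.sum(d * d))

def estimate_in_focus(z_positions, scores):
    """Fit a parabola through the (Z, score) samples and return the Z of
    its vertex as the estimated in-focus position."""
    a, b, _ = np.polyfit(z_positions, scores, 2)
    return -b / (2 * a)
```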
There are cases where the in-focus position of the tissue slice 801 cannot be calculated such as the case where the maximum value cannot be calculated from the evaluation function of the calculated contrast. In such cases, for example, it is conceivable to use a method in which the step amount of the stage 101 in the Z direction is reduced and re-detection is performed or a method in which detection is performed on the adjacent XY position.
Note that, in the second embodiment described later, with regard to the detection and the calculation of the thickness of the tissue slice 801, the same method can be used.
(with regard to calculation method of focus map)
FIGS. 7A to 7C are schematic views each showing a calculation method of the focus map in the image acquisition device 1 of the present embodiment. Based on the in-focus position information on the tissue slice 801 calculated in FIG. 6, the focus map 904 indicative of the in-focus position in the entire tissue slice 801 is calculated by the focus map calculation portion 422.
FIG. 3 shows the positional relationship between the tissue slice 801 and the small section 901 two-dimensionally, and FIG. 7A shows the positional relationship between the tissue slice 801 and the small section 901 three-dimensionally. FIG. 7B is a schematic view showing the focus map 904 in which in-focus positions Z1 to Z12 calculated at XY positions P1 to P12 at each of which the focus position is detected are plotted on a three-dimensional space in which the small sections 901 are shown, and are interpolated. In the case of FIG. 7B, the XY positions 902 at each of which the in-focus position is detected are provided at the intersection points of the lattice pattern in the small sections, but the position of the XY position 902 is not limited thereto.
FIG. 7C is a schematic view showing a focus map 904A corresponding to a target small section 901A based on the calculated focus map 904. With regard to in-focus positions Z45, Z47, and Z48 at XY positions P45, P47, and P48 that are not detected directly, the in-focus positions Z45, Z47, and Z48 are calculated by interpolating the in-focus position information on adjacent XY positions.
Thus, it is possible to determine the in-focus position in each small section 901 from the position information of the focus map 904.
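The interpolation of undetected in-focus positions described for FIG. 7C can be sketched as follows. Averaging the known 4-neighbours on the lattice is one simple assumed scheme; the embodiment only states that adjacent XY positions are interpolated.

```python
# Sketch (assumed scheme): fill each undetected lattice point of the focus
# map with the mean in-focus position of its known 4-neighbours.
def fill_missing(grid):
    """grid: 2D list of in-focus Z values, None where detection was skipped."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is None:
                vals = [grid[rr][cc]
                        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= rr < rows and 0 <= cc < cols
                        and grid[rr][cc] is not None]
                out[r][c] = sum(vals) / len(vals)
    return out
```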
(with regard to calculation method of Z-stack range)
FIG. 8 is a schematic view showing a calculation method of the Z-stack range in the image acquisition device 1 of the present embodiment. Based on the position information of the focus map 904 corresponding to each small section 901 calculated in FIGS. 7A to 7C, the Z-stack range in each small section 901 is calculated and determined by the Z-stack range calculation portion 423. FIG. 8 shows only the focus map 904A corresponding to the target small section 901A in FIG. 7C and, in the range of the focus map 904A, the maximum value of the Z position is indicated by the in-focus position Z48 and the minimum value of the Z position is indicated by an in-focus position Z4. By calculating a difference ΔZ (the change amount of the Z position of the focus map 904) in the Z position between the in-focus positions Z48 and Z4, it is possible to calculate the change amount of the Z-stack range.
The depth of field of the imaging device 100 is determined by the performance of the objective lens 103, and hence it is possible to calculate the change amount of the number of images required for the Z-stack based on the calculated difference ΔZ in the Z position and information on the depth of field. In addition, the Z-stack range 905 serving as the reference is already determined from the thickness information of the tissue slice 801, and hence it is only necessary to change the number of images required for the Z-stack (the number of times of the imaging) in accordance with the change amount of the Z position of the focus map 904 in each small section 901.
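With the numbers of this embodiment (a depth of field of 0.5 μm per layer and, in FIG. 9, five reference layers), the per-section layer count can be sketched as the reference count plus enough extra layers, each one depth of field thick, to cover the difference ΔZ of the focus map in that section. The exact rounding rule is an assumption.

```python
import math

def z_stack_layers(reference_layers, delta_z, depth_of_field):
    """Number of Z-stack images for one small section: the reference layer
    count (from the slice thickness) plus extra layers covering the change
    ΔZ of the focus map within the section (assumed rounding via ceil)."""
    return reference_layers + math.ceil(delta_z / depth_of_field)
```

For instance, with five reference layers and a ΔZ three depths of field wide, this yields the eight layers of FIG. 9B.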
With this, it is possible to determine the number of images required for the Z-stack for each small section 901, and hence it is possible to image the Z-stack image of the tissue slice 801 (images of a plurality of layers imaged by the Z-stack) more optimally with excellent accuracy.
FIGS. 9A and 9B are schematic views each showing a change method of the Z-stack range in the image acquisition device 1 of the present embodiment. FIG. 9A shows a state in which the imaging is performed without changing the Z-stack range in each small section 901, and corresponds to the conventional art. On the other hand, in FIG. 9B, the Z-stack range is changed in accordance with the change amount of the Z position of the focus map 904 in each small section 901, and FIG. 9B corresponds to the present embodiment.
When the undulation amount of the tissue slice 801 becomes large, in the case of FIG. 9A, the Z-stack range is always constant, and hence the range of the tissue slice 801 that cannot be imaged is present. On the other hand, in the case of FIG. 9B, the number of images required for the Z-stack is changed based on the difference ΔZ in the Z position of the focus map 904, and hence it is possible to image the entire presence range of the tissue slice 801 in the Z direction.
Herein, FIGS. 9A and 9B will be described in greater detail.
In each of FIGS. 9A and 9B, the imaging range 903 is shown as one layer, and the Z-stack range 905 serving as the reference is shown as a range of five layers obtained by stacking five layers of the imaging ranges 903 in the Z direction. FIG. 9B shows a state in which three layers of the imaging ranges 903 are added on the upper side of the five layers of the Z-stack range 905 serving as the reference correspondingly to the difference ΔZ in the Z position.
Thus, in FIG. 9B, by changing the number of layers of the Z-stack range from five to eight correspondingly to the difference ΔZ in the Z position, it is possible to image the entire presence range of the tissue slice 801 in the Z direction.
Note that, in both of the case of FIG. 9A and the case of FIG. 9B, the in-focus position on the surface of the tissue slice 801 is detected and the focus map 904 is calculated based on the detected in-focus position, but the detection of the in-focus position is not limited to the surface of the tissue slice 801. That is, the focus map 904 may be any focus map as long as the focus map indicates the in-focus position of the tissue slice 801.
For example, the focus map 904 may be a focus map that detects the in-focus position of the tissue slice 801 in the central area of the tissue slice 801 in the Z direction. In this case, layers corresponding to the Z-stack range 905 serving as the reference may be added on the upper side and the lower side of the layer of the imaging range 903 corresponding to the difference ΔZ in the Z position. For example, when it is assumed that the number of layers of the Z-stack range 905 serving as the reference is six, three layers as the half thereof may be appropriately added on each of the upper side and the lower side of the layer of the imaging range 903 corresponding to the difference ΔZ in the Z position.
In addition, the Z-stack range 905 serving as the reference may also be determined by multiplying the thickness information of the tissue slice 801 by a predetermined coefficient.
As described thus far, in the present embodiment, based on the Z-stack range 905 serving as the reference and the range of the in-focus position calculated by the focus map calculation portion 422, the Z-stack range in each small section 901 is determined. With this, it becomes possible to acquire the cross-sectional image of the tissue slice 801 having unevenness and an undulation in the optical axis direction reliably at high speed.
<second embodiment>
Hereinbelow, the second embodiment will be described. Note that the same components as those of the first embodiment are designated by the same reference numerals and the description thereof will be omitted.
(with regard to device configuration)
FIG. 10 is a schematic view showing the system configuration of the image acquisition device 1 of the present embodiment. The thickness information of the tissue slice 801 has been inputted by the user in the first embodiment but, in the case of the second embodiment, a process of detecting the thickness of the tissue slice 801 is performed in the image acquisition device 1. The thickness detection of the tissue slice 801 is executed by the in-focus position calculation portion 421 by a method similar to the method of the detection of the in-focus position of the tissue slice 801. The thickness information of the tissue slice 801 calculated by the in-focus position calculation portion 421 is transmitted to the Z-stack range calculation portion 423, and the thickness information is handled as the Z-stack range 905 serving as the reference.
(with regard to device process)
FIG. 11 is a flowchart showing an image acquisition process by the image acquisition device 1 of the present embodiment.
Processes in Steps S101 to S104 in FIG. 11 are the same as those in Steps S101 to S104 in FIG. 2 in the first embodiment. In the flowchart in FIG. 11, unlike the case of the first embodiment, the steps subsequent to Step S104 include the process of detecting the thickness of the tissue slice 801.
That is, in Step S113 in FIG. 11, it is determined whether or not the thickness of the tissue slice 801 is detected.
In the case where an affirmative determination is made in Step S113, the process proceeds to Step S114 via Step S105a, the detection of the thickness of the tissue slice 801 is performed concurrently with the detection of the in-focus position of the tissue slice 801 and, thereafter, the process proceeds to Step S107. The details of the detection and the calculation method of the thickness of the tissue slice 801 will be described later. In the case where a negative determination is made in Step S113, the process proceeds to Step S105b, Step S106, and Step S107 in this order. Note that each of the processes in Step S105a and Step S105b is the same as the process in Step S105 in FIG. 2.
The subsequent processes are the same as those in the first embodiment, and hence the description thereof will be omitted.
(with regard to detection and calculation method of thickness)
Next, the detection and the calculation method of the thickness of the tissue slice 801 by the image acquisition device 1 of the present embodiment will be described. First, the XY position at which the thickness of the tissue slice 801 is detected is set to the same position as one of the XY positions 902 at which the in-focus position is detected, shown in FIGS. 4A and 4B. However, since the tissue slice 801 is generally sliced to a substantially uniform thickness, thickness unevenness in the tissue slice 801 is extremely small. Accordingly, as compared with the number of the XY positions 902 at each of which the in-focus position is detected, the required number of the XY positions at which the thickness is detected is small, and the thickness may be appropriately detected at at least one XY position in the presence range of the tissue slice 801. Once the thickness of the tissue slice 801 has been detected at at least one XY position in the presence range of the tissue slice 801, the detected thickness can be used in the subsequent image acquisition process, and hence the negative determination is made in Step S113.
FIG. 12 is a schematic view showing the calculation method of the thickness of the tissue slice 801 by the image acquisition device 1 of the present embodiment. Similarly to the case where the in-focus position of the tissue slice 801 is detected, the thickness of the tissue slice 801 is calculated by the in-focus position calculation portion 421 by using the evaluation value of the contrast calculated from the image information at the different Z positions. Similarly to the first embodiment described by using FIG. 6, FIG. 12 shows a state in which the evaluation function is calculated from images imaged at three different Z positions ZA, ZB, and ZC. As the calculation method of the thickness of the tissue slice 801, a method is used in which the Z range over which the evaluation value is not less than a pre-set threshold value in the graph of FIG. 12 is estimated as the thickness of the tissue slice 801. With regard to the acquisition interval of the images at the different Z positions used when the thickness is detected, it is preferable to perform step imaging with an acquisition interval shorter than the interval used at the time of the detection of the in-focus position. With this, it is possible to perform the detection and the calculation of the thickness of the tissue slice 801 by using the image acquisition device 1.
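The thresholding idea of FIG. 12 can be expressed as a short sketch. The function name and parameters below are assumptions for illustration, not the actual implementation, and a real device would likely interpolate between the sampled Z positions:

```python
def estimate_thickness(z_positions, contrast_scores, threshold):
    """Estimate the thickness of the tissue slice as the Z range over which
    the contrast evaluation value is not less than a pre-set threshold."""
    # Keep only the Z positions whose contrast score reaches the threshold.
    above = [z for z, c in zip(z_positions, contrast_scores) if c >= threshold]
    if not above:
        return 0.0  # no in-focus content found in the sampled range
    return max(above) - min(above)
```

As the text notes, sampling with a Z step finer than the one used for in-focus detection improves the resolution of this estimate.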
In the present embodiment as well, the effects similar to those described in the first embodiment are obtained.
Herein, in the present embodiment, as described above, the thickness of the tissue slice 801 is calculated by the in-focus position calculation portion 421 similarly to the case where the in-focus position of the tissue slice 801 is detected, but the present embodiment is not limited thereto. Thickness detection means for detecting the thickness of the tissue slice 801 may also be provided additionally in the image acquisition device 1.
<third embodiment>
Hereinbelow, a third embodiment will be described. Note that the same components as those of the first and second embodiments are designated by the same reference numerals and the description thereof will be omitted.
(with regard to device configuration)
The image acquisition device 1 of the present embodiment is configured in the same manner as the image acquisition device 1 shown in FIG. 10 described in the second embodiment.
In FIG. 10, in the case of the second embodiment, the thickness information of the tissue slice 801 calculated by the in-focus position calculation portion 421 has been transmitted to the Z-stack range calculation portion 423, and the Z-stack range 905 serving as the reference has been calculated.
In contrast to this, the present embodiment differs in the processing performed after the actual imaging is carried out by the imaging device 100 based on the Z-stack range in each small section 901 calculated by the Z-stack range calculation portion 423. In the present embodiment, correction of the Z-stack range in the small section 901 that is imaged next is performed based on the image information imaged by the imaging device 100.
(with regard to device process and correction method of Z-stack range)
FIG. 13 is a flowchart showing an image acquisition process by the image acquisition device 1 of the present embodiment.
In the present embodiment, unlike the first and second embodiments, a process of calculating the focus range of the tissue slice 801 from the Z-stack image, and correcting the Z-stack range in the next small section 901 based on the focus range is performed. Processes of Steps S101 to S112 in FIG. 13 are the same as those of Steps S101 to S112 in FIG. 2, and hence the description thereof will be omitted.
In Step S115 in FIG. 13, the Z-stack is actually performed in a given small section 901, and the calculation of the focus range of the tissue slice 801 is performed from acquired image information. At this point, the focus range of the tissue slice 801, i.e., the presence range of the tissue slice 801 in the Z direction is re-calculated by the in-focus position calculation portion 421 by using a method similar to the calculation method of the in-focus position of the tissue slice 801 and the thickness of the tissue slice 801 described in the first and second embodiments.
Herein, the Z-stack range set in Step S109 is based on the focus map 904, and hence there is a concern that the Z-stack range may be displaced from the range where the tissue slice 801 is actually positioned.
In general, the Z-stack image is imaged by performing the step movement in the Z direction at an interval of not more than the depth of field, and hence it is possible to calculate the focus range of the tissue slice 801 with excellent accuracy.
In the present embodiment, focus range information on the tissue slice 801 calculated from the image information acquired by actually performing the Z-stack is reflected in the Z-stack range calculated by the Z-stack range calculation portion 423 in Step S109.
That is, in Step S115, a process of correcting the Z-stack range set in Step S109 is performed. This correction process is a process of moving the Z-stack range set in Step S109 upward or downward and/or extending the Z-stack range (adding layers of the imaging range 903).
Note that, in the present embodiment, the correction of the Z-stack range in the small section that is imaged next is performed in Step S115, but the present embodiment is not limited thereto, and the correction may also be performed on the small section in which the Z-stack has been actually performed.
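One way to read the correction of Step S115 as code is the following sketch. The names and the optional margin parameter are my own assumptions, not taken from the disclosure:

```python
def correct_z_range(planned, measured, margin=0.0):
    """Correct a planned Z-stack range (low, high) so that it covers the
    focus range (low, high) measured from the stack actually acquired.

    Extending a side of the range corresponds to adding layers of the
    imaging range 903 above or below the range set in Step S109.
    """
    planned_low, planned_high = planned
    measured_low, measured_high = measured
    low = min(planned_low, measured_low - margin)
    high = max(planned_high, measured_high + margin)
    return (low, high)
```

Applying the corrected range either to the next small section 901 or, by re-imaging, to the section just acquired matches the two options mentioned in the note above.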
Thus, according to the present embodiment, it is possible to increase the accuracy of the Z-stack in the small section 901 that is imaged next to a level higher than those of the first and second embodiments.
Herein, in some rare cases, the tissue slice 801 is bent or folded in the specimen 8. In such cases, there are cases where the specimen 8 cannot be handled with the calculation method of the Z-stack range that uses only the change amount information of the focus map 904 described in each of the first and second embodiments.
As in the present embodiment, by correcting the Z-stack range from the image obtained by actually performing the Z-stack, it is possible to handle such an irregular specimen 8.
(Other Embodiments)
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)TM), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2014-206493, filed on October 7, 2014, which is hereby incorporated by reference herein in its entirety.

Claims (9)

  1. An image acquisition device acquiring images of a plurality of layers in a subject by dividing the subject into a plurality of sections and imaging the subject a plurality of times using an imaging unit while changing a focal position in an optical axis direction in an imaging range in the optical axis direction in each section, the image acquisition device comprising:
    a detection unit that detects an in-focus position in the optical axis direction at each of a plurality of detection positions of the subject;
    a creation unit that creates a focus map from the in-focus position at each of the detection positions detected by the detection unit;
    a calculation unit that calculates a range in the optical axis direction in which the in-focus position is positioned in a target section from the focus map created by the creation unit;
    a setting unit that sets a reference imaging range serving as a reference of the imaging range in the optical axis direction; and
    a determination unit that determines the imaging range in the optical axis direction for each section based on the reference imaging range set by the setting unit and the range of the in-focus position calculated by the calculation unit.
  2. The image acquisition device according to claim 1, wherein
    the number of times the subject is imaged in each section is determined based on the imaging range in the optical axis direction determined by the determination unit and a depth of field of the imaging unit.
  3. The image acquisition device according to claim 1 or 2, wherein
    when a section next to the target section is imaged subsequently to imaging of the target section, the imaging range in the optical axis direction determined for the next section by the determination unit is corrected by using image information obtained by imaging the target section.
  4. The image acquisition device according to any one of claims 1 to 3, wherein
    the setting unit sets the reference imaging range based on thickness information of the subject.
  5. The image acquisition device according to any one of claims 1 to 4, further comprising
    a thickness detection unit that detects a thickness of the subject, wherein
    the setting unit sets the reference imaging range based on the thickness of the subject detected by the thickness detection unit.
  6. The image acquisition device according to claim 5, wherein
    the thickness detection unit detects the thickness of the subject based on information on brightness of an image imaged by the imaging unit.
  7. The image acquisition device according to any one of claims 1 to 6, wherein
    the detection unit detects the in-focus position of the subject based on information on brightness of an image imaged by the imaging unit.
  8. An image acquisition method for acquiring images of a plurality of layers in a subject by dividing the subject into a plurality of sections and imaging the subject a plurality of times using an imaging unit while changing a focal position in an optical axis direction in an imaging range in the optical axis direction in each section, the image acquisition method comprising the steps of:
    causing a computer to detect an in-focus position in the optical axis direction at each of a plurality of detection positions of the subject;
    causing the computer to create a focus map from the detected in-focus position at each of the detection positions;
    causing the computer to calculate a range in the optical axis direction in which the in-focus position is positioned in a target section from the created focus map;
    causing the computer to set a reference imaging range serving as a reference of the imaging range in the optical axis direction; and
    causing the computer to determine the imaging range in the optical axis direction for each section based on the set reference imaging range and the calculated range of the in-focus position.
  9. A program that causes a computer to execute the respective steps of the image acquisition method according to claim 8.