WO2021193325A1 - Microscope system, imaging method, and imaging device


Info

Publication number
WO2021193325A1
Authority
WIPO (PCT)
Prior art keywords
unit
image
imaging
phase difference
objective lens
Application number
PCT/JP2021/010981
Other languages
French (fr)
Japanese (ja)
Inventor
和田 成司
植田 充紀
健 松井
寛和 辰田
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Application filed by Sony Group Corporation (ソニーグループ株式会社)
Publication of WO2021193325A1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 Microscopes
    • G02B7/00 Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28 Systems for automatic generation of focusing signals
    • G02B7/34 Systems for automatic generation of focusing signals using different areas in a pupil plane
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00 Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32 Means for focusing
    • G03B13/34 Power focusing
    • G03B13/36 Autofocus systems
    • G03B15/00 Special procedures for taking photographs; Apparatus therefor
    • G03B15/02 Illuminating scene

Definitions

  • the present disclosure relates to a microscope system, an imaging method, and an imaging device.
  • a technique for obtaining an image of a sample by irradiating the sample with light and receiving the light emitted from the sample is disclosed.
  • a technique for obtaining a captured image focused on a sample by combining with an autofocus system is disclosed.
  • a technique of identifying an optimum focal length from a captured image captured by an imager arranged at an angle, a technique of obtaining a Z-stack image for prefocus, and the like are disclosed.
  • the sample to be measured may be arranged at an angle with respect to an orthogonal plane orthogonal to the optical axis.
  • the present disclosure proposes a microscope system, an imaging method, and an imaging device capable of obtaining a highly accurate image captured at each position of the sample.
  • one form of the microscope system of the present disclosure includes an imaging unit; an irradiation unit that irradiates line illumination parallel to a first direction; a stage that supports a sample and can move in a second direction perpendicular to the first direction; a phase difference acquisition unit that acquires the phase difference of the image of the light emitted from the sample irradiated with the line illumination; a focal position control unit that vibrates an optical system including the objective lens or the stage to move the focal point of the objective lens in a third direction perpendicular to each of the first direction and the second direction; and a calculation unit that calculates, based on the phase difference, at least one of the amplitude of the vibration and an imaging condition of the imaging unit.
  • FIG. 1 is a schematic view showing an example of the microscope system 1 of the present embodiment.
  • the microscope system 1 is a system that irradiates the sample T with line illumination LA and receives the light emitted from the sample T. Details of the line illumination LA and the sample T will be described later.
  • the microscope system 1 includes an imaging device 12.
  • the imaging device 12 is communicably connected to the server device 10 via, for example, a wireless communication network such as the network N or a wired communication network.
  • the server device 10 may be a computer.
  • the direction in which the objective lens 22 and the sample T, which will be described later, approach and move away from each other will be referred to as the Z-axis direction.
  • the Z-axis direction will be described as being coincident with the thickness direction of the sample T.
  • the case where the Z-axis direction and the optical axis A2 of the objective lens 22 are parallel will be described.
  • the stage 26 described later is assumed to be a two-dimensional plane represented by two axes (X-axis direction and Y-axis direction) orthogonal to the Z-axis direction.
  • a plane parallel to the two-dimensional plane of the stage 26 may be referred to as an XY plane. Details of each of these parts will be described later.
  • the imaging device 12 includes a measuring unit 14 and a control device 16.
  • the measuring unit 14 and the control device 16 are connected so as to be able to exchange data or signals.
  • the measuring unit 14 has an optical mechanism for measuring the light emitted from the measurement target region 24.
  • the measuring unit 14 is applied to, for example, an optical microscope.
  • the measuring unit 14 includes an irradiation unit 18, a split mirror 20, an objective lens 22, a stage 26, a half mirror 28, an imaging optical unit 30, a focus detection unit 36, a first drive unit 44, and a second drive unit 46.
  • the irradiation unit 18 irradiates the line illumination LA parallel to the first direction.
  • Line illumination LA is light having a long line shape in the first direction.
  • the line illumination LA is light whose luminous flux, in the two-dimensional plane orthogonal to the optical axis, is several times (for example, 100 times or more) longer in the first direction than in the direction orthogonal to the first direction.
  • in the present embodiment, the case where the first direction, which is the longitudinal direction of the line illumination LA, coincides with the X-axis direction in FIG. 1 will be described as an example.
  • the irradiation unit 18 includes a light source unit 18A and an imaging optical system 18D.
  • the light source unit 18A includes a light source 18B and a collimator lens 18C.
  • the light source 18B is a light source that emits a line-shaped line illumination LA.
  • the light source 18B irradiates the line illumination LA by condensing light in only one direction using a cylindrical lens (not shown) arranged in the optical path.
  • the light source 18B may irradiate the sample T with the line illumination LA by irradiating the sample T with light through a slit long in the X-axis direction.
  • the line illumination LA emitted from the light source 18B reaches the split mirror 20 via the imaging optical system 18D after being made into substantially parallel light by the collimator lens 18C.
  • the line shape refers to the shape of the illumination light with which the line illumination LA emitted from the light source 18B irradiates the sample T. Specifically, it refers to the shape of the cross section of the line illumination LA emitted from the light source 18B, taken orthogonal to the optical axis A1.
  • the optical axis A1 indicates an optical axis from the light source 18B to the split mirror 20. In other words, the optical axis A1 is the optical axis of the collimator lens 18C and the imaging optical system 18D.
  • the light source 18B may be a light source 18B that selectively irradiates light in a wavelength region in which the sample T fluoresces. Further, the irradiation unit 18 may be provided with a filter that selectively transmits light in the wavelength region. In the present embodiment, the mode in which the light source 18B irradiates the line illumination LA in the wavelength region where the sample T fluoresces will be described as an example.
  • the split mirror 20 reflects the line illumination LA and transmits light in a wavelength region other than the line illumination LA.
  • a half mirror or a dichroic mirror is selected as the split mirror 20 according to the measurement target. In the present embodiment, the split mirror 20 transmits the light emitted from the measurement target region 24.
  • the line illumination LA is reflected by the split mirror 20 and reaches the objective lens 22.
  • the objective lens 22 is a focus lens that focuses the line illumination LA on the measurement target area 24.
  • the objective lens 22 is a lens for irradiating the measurement target area 24 with the line illumination LA by condensing the line illumination LA on the measurement target area 24.
  • the objective lens 22 is provided with a second drive unit 46.
  • the second drive unit 46 moves the objective lens 22 in the Z-axis direction.
  • the Z-axis direction corresponds to the third direction and coincides with the optical axis direction.
  • the first drive unit 44 is provided on the stage 26 on which the measurement target region 24 is placed.
  • the stage 26 supports the sample T and can move in the second direction (Y-axis direction) perpendicular to the first direction.
  • the first drive unit 44 moves the stage 26 in the Z-axis direction.
  • the measurement target area 24 placed on the stage 26 moves in the direction toward or away from the objective lens 22.
  • the focus of the objective lens 22 is adjusted (details will be described later).
  • the first drive unit 44 moves the stage 26 in at least one of the Y-axis direction and the X-axis direction. With the movement of the stage 26, the measurement target area 24 placed on the stage 26 is moved relative to the objective lens 22 in the Y-axis direction or the X-axis direction.
  • the Y-axis direction and the X-axis direction are directions orthogonal to the Z-axis direction.
  • the Y-axis direction and the X-axis direction are directions orthogonal to each other.
  • the measuring unit 14 may be configured to include at least one of the first driving unit 44 and the second driving unit 46, and is not limited to the configuration including both of them.
  • the Z-axis direction will be described as being coincident with the thickness direction of the measurement target region 24. That is, the Z-axis direction will be described as being coincident with the thickness direction of the sample T supported on the stage 26. Further, in the present embodiment, the case where the Z-axis direction and the optical axis A2 of the objective lens 22 are parallel will be described. Further, it is assumed that the stage 26 is a two-dimensional plane represented by two axes (X-axis direction and Y-axis direction) orthogonal to the Z-axis direction. A plane parallel to the two-dimensional plane of the stage 26 may be referred to as an XY plane.
  • the longitudinal direction of the line illumination LA when emitted from the light source 18B coincides with the X-axis direction.
  • the longitudinal direction of the line illumination LA may not match the X-axis direction.
  • the longitudinal direction of the line illumination LA when emitted from the light source 18B may be a direction that coincides with the Y-axis direction.
  • the measurement target area 24 includes the sample T.
  • the measurement target region 24 is a region between a pair of glass members and is a region including a sample T.
  • the glass member is, for example, a slide glass.
  • the glass member is sometimes referred to as a cover glass.
  • the glass member may be any member on which the sample T can be placed, and is not limited to a member made of glass.
  • the glass member may be a member that transmits light emitted from the line illumination LA and the sample T.
  • the sample T is a measurement target in the microscope system 1. That is, the sample T is an object for which a captured image is obtained by the microscope system 1. In this embodiment, the case where the sample T emits fluorescence when irradiated with the line illumination LA will be described as an example.
  • the sample T is, for example, microorganisms, cells, liposomes, erythrocytes, leukocytes, platelets, vascular endothelial cells in blood, microcell fragments of epithelial tissue, or histopathological tissue sections of various organs.
  • the sample T may be an object such as a cell labeled with a fluorescent dye that fluoresces when irradiated with line illumination LA.
  • the sample T is not limited to a substance that fluoresces when irradiated with line illumination LA.
  • the sample T may be one that emits light in a wavelength region other than fluorescence by irradiation with line illumination LA, or may be one that scatters and reflects the illumination light.
  • the fluorescence emitted by the sample T due to the irradiation of the line illumination LA may be simply referred to as light.
  • the sample T in a state of being enclosed by the encapsulant may be placed in the measurement target area 24.
  • as the encapsulating material, a known material that transmits both the line illumination LA incident on the measurement target region 24 and the light emitted by the sample T may be used.
  • the encapsulant may be either liquid or solid.
  • the measurement target region 24 may have a configuration in which the sample T is placed on the glass member, and is not limited to the configuration in which the sample T is placed between the pair of glass members. Further, the sample T may be placed in the measurement target area 24 in a state of being sealed by the sealing material.
  • the light emitted from the sample T under the line illumination LA passes through the objective lens 22 and the split mirror 20 (in the present embodiment, a dichroic mirror) and reaches the half mirror 28.
  • the light emitted from the sample T is, for example, the fluorescence emitted by the sample T by irradiation with the line illumination LA. Fluorescence includes scattered fluorescent components.
  • the half mirror 28 distributes a part of the light emitted from the sample T to the imaging optical unit 30, and distributes the rest to the focus detection unit 36.
  • the distribution ratio of light by the half mirror 28 to the imaging optical unit 30 and the focus detection unit 36 may be equal (for example, 50% : 50%) or may be different. Instead of the half mirror 28, a dichroic mirror or a polarizing mirror may be used.
  • the light transmitted through the half mirror 28 reaches the imaging optical unit 30.
  • the light reflected by the half mirror 28 reaches the focus detection unit 36.
  • it is assumed that the line illumination LA created by the irradiation unit 18 and the measurement target area 24 are optically conjugate. Further, the line illumination LA, the measurement target area 24, the imaging unit 34 of the imaging optical unit 30, and the pupil split image imaging unit 42 of the focus detection unit 36 are assumed to be optically conjugate.
  • the imaging optical unit 30 includes an imaging lens 32 and an imaging unit 34.
  • the light transmitted through the half mirror 28 is focused on the imaging unit 34 by the imaging lens 32.
  • the imaging unit 34 receives the light emitted from the sample T and outputs the captured image of the received light to the control device 16.
  • the captured image is used for analysis of the type of sample T and the like.
  • the imaging unit 34 may be, for example, a CMOS (Complementary Metal-Oxide Semiconductor) image sensor, a CCD (Charge Coupled Device) image sensor, or the like.
  • the imaging unit 34 may include, for example, a plurality of light receiving units 33. In that case, each light receiving unit 33 may have, for example, a configuration in which photodiodes are arranged two-dimensionally or one-dimensionally.
  • the focus detection unit 36 is an optical unit for obtaining the phase difference of the image of the light emitted from the sample T irradiated with the line illumination LA.
  • in the present embodiment, the case where the focus detection unit 36 is an optical unit that obtains the phase difference of an image whose pupil is divided into two by two separator lenses will be described as an example.
  • the focus detection unit 36 includes a field lens 38, an aperture mask 39, a separator lens 40, and a pupil split image imaging unit 42.
  • the separator lens 40 includes a separator lens 40A and a separator lens 40B.
  • the aperture mask 39 has a pair of openings 39A and 39B at positions symmetrical with respect to the optical axis of the field lens 38. The size of the pair of openings 39A and 39B is adjusted so that the depth of field of the separator lens 40A and the separator lens 40B is deeper than the depth of field of the objective lens 22.
  • the aperture mask 39 divides the light incident from the field lens 38 into two luminous fluxes through the pair of openings 39A and 39B.
  • the separator lens 40A and the separator lens 40B collect the light flux transmitted through each of the openings 39A and 39B of the aperture mask 39 on the pupil split image imaging unit 42, respectively. Therefore, the pupil-divided image imaging unit 42 receives the two divided light fluxes.
  • the focus detection unit 36 may not be provided with the aperture mask 39.
  • the light that reaches the separator lens 40 via the field lens 38 is divided into two light fluxes by the separator lens 40A and the separator lens 40B, and is focused on the pupil-divided image imaging unit 42.
  • the pupil split image imaging unit 42 includes a plurality of light receiving units 41.
  • the pupil division image imaging unit 42 has a configuration in which a plurality of light receiving units 41 are two-dimensionally arranged along the light receiving surface.
  • the light receiving surface of the light receiving unit 41 is a two-dimensional plane orthogonal to the optical axis of the light incident on the pupil split image imaging unit 42 via the field lens 38, the aperture mask 39, and the separator lens 40.
  • the pupil division image imaging unit 42 is, for example, a CMOS image sensor, a CCD image sensor, or the like.
  • the pupil-divided image imaging unit 42 receives two light fluxes divided by the two pupils (separator lens 40A and separator lens 40B). By receiving two light fluxes, the pupil division image imaging unit 42 captures an image composed of a set of light flux images and outputs the image to the control device 16. There is a phase difference between the pair of images formed through the divided pupil (separator lens 40A) and the pupil (separator lens 40B) because the optical paths are different.
  • this set of images will be referred to as a pupil split image.
  • FIG. 2 is a schematic view showing an example of the pupil division image 70 acquired by the pupil division image imaging unit 42.
  • the pupil split image 70 includes a pupil split image 72, which is a set consisting of an image 72A and an image 72B.
  • the pupil-divided image 70 is an image corresponding to the position and brightness of the light received by each of the plurality of light-receiving units 41 provided in the pupil-divided image imaging unit 42.
  • the brightness of the light received by the light receiving unit may be described as a light intensity value.
  • the light receiving unit 41 is provided for each one or a plurality of pixels.
  • the pupil division image 70 is an image in which the light intensity value is defined for each pixel corresponding to each of the plurality of light receiving units 41.
  • the light intensity value corresponds to the pixel value.
  • the image 72A and the image 72B included in the pupil division image 70 are light receiving regions, and are regions having a larger light intensity value than other regions. Further, as described above, the irradiation unit 18 irradiates the sample T with the line illumination LA. Therefore, the light emitted from the sample T irradiated with the line illumination LA becomes line-shaped light. Therefore, the images 72A and 72B constituting the pupil division image 72 are long line-shaped images in a predetermined direction. This predetermined direction is a direction that optically corresponds to the X-axis direction, which is the longitudinal direction of the line illumination LA.
  • the vertical axis direction (YA axis direction) of the pupil split image 70 shown in FIG. 2 optically corresponds to the Y axis direction in the measurement target region 24.
  • the horizontal axis direction (XA axis direction) of the pupil division image 70 shown in FIG. 2 optically corresponds to the X axis direction in the measurement target region 24.
  • the X-axis direction is the longitudinal direction of the line illumination LA.
  • the depth direction (ZA-axis direction) of the pupil-divided image 70 shown in FIG. 2 optically corresponds to the Z-axis direction, which is the thickness direction of the measurement target region 24.
  • the focus detection unit 36 outputs the pupil division image 70 to the control device 16.
  • the focus detection unit 36 may be any optical unit that obtains the pupil division image 72 (image 72A, image 72B) having a phase difference; it is not limited to one that obtains a binocular (two-eye) pupil split image.
  • the focus detection unit 36 may be, for example, an optical unit that obtains a pupil division image of three or more eyes that divides the light emitted from the sample T into three or more light fluxes and receives the light.
  • the measuring unit 14 drives the stage 26 on which the sample T is placed by means of the first drive unit 44, and irradiates the sample T with the line illumination LA while moving the measurement target region 24 relative to the line illumination LA in the Y-axis direction. That is, in the present embodiment, the Y-axis direction is the scanning direction of the measurement target region 24.
  • the scanning method of the line illumination LA is not limited.
  • any scanning method may be used as long as it scans along the direction (Y-axis direction) orthogonal to the longitudinal direction (X-axis direction) of the line illumination LA, for example by moving at least a part of the configuration of the measuring unit 14 other than the measurement target area 24 relative to the measurement target area 24.
  • a deflection mirror may be arranged between the split mirror 20 and the objective lens 22, and the line illumination LA may be scanned by the deflection mirror.
  • control device 16 will be described.
  • the control device 16 is an example of an information processing device.
  • the control device 16 controls at least one of the vibration amplitude and the imaging condition by the imaging unit 34 based on the phase difference (details will be described later).
  • the vibration means the vibration of the optical system including the objective lens 22 or the stage 26 in the measuring unit 14.
  • the vibration means, in detail, wobbling due to the vibration of the optical system or the stage 26.
  • Wobbling means moving the focal point of the objective lens 22 along the optical axis A2 direction (third direction).
  • the control device 16 acquires a captured image by performing imaging while wobbling.
  • the control device 16 and each of the light source 18B, the imaging unit 34, the focus detection unit 36, the first drive unit 44, and the second drive unit 46 are connected so as to be able to exchange data or signals.
  • FIG. 3 is a diagram showing an example of the functional configuration of the control device 16. Note that FIG. 3 also shows a light source 18B, a pupil split image imaging unit 42, an imaging unit 34, a first drive unit 44, and a second drive unit 46 for the sake of explanation.
  • the control device 16 includes a control unit 60, a storage unit 62, and a communication unit 64.
  • the control unit 60, the storage unit 62, and the communication unit 64 are connected to each other so that data or signals can be exchanged.
  • the storage unit 62 is a storage medium for storing various types of data.
  • the storage unit 62 is, for example, a hard disk drive or an external memory.
  • the communication unit 64 communicates with an external device such as the server device 10 via the network N or the like.
  • the control unit 60 includes a light source control unit 60A, a phase difference acquisition unit 60B, a calculation unit 60C, a focal position control unit 60G, a captured image acquisition unit 60H, and an output control unit 60I.
  • the calculation unit 60C includes a first calculation unit 60D and a second calculation unit 60E.
  • each of the above units may be realized by causing a processing device such as a CPU (Central Processing Unit) to execute a program, that is, by software; by hardware such as an IC (Integrated Circuit); or by using software and hardware together.
  • the light source control unit 60A controls the light source 18B so as to irradiate the line illumination LA.
  • the line illumination LA is irradiated from the light source 18B under the control of the light source control unit 60A.
  • the phase difference acquisition unit 60B acquires the phase difference of the images (image 72A, image 72B) of the light emitted from the sample T irradiated with the line illumination LA. Specifically, the phase difference acquisition unit 60B acquires the pupil division image 70 from the pupil division image imaging unit 42, and thereby acquires, as the phase difference, the pupil division image 72 (image 72A, image 72B) of the light included in the pupil division image 70.
  • the focus position control unit 60G drives at least one of the optical systems included in the measurement unit 14.
  • the focal position control unit 60G drives at least one of the optical systems provided in the measuring unit 14 by driving and controlling at least one of the driving units for driving each of the optical systems.
  • the focal position control unit 60G vibrates (that is, wobbling) the optical system including the objective lens 22 or the stage 26 to move the focal point of the objective lens 22 in the Z-axis direction.
  • a mode in which the focal position control unit 60G moves the focus of the objective lens 22 in the Z-axis direction by controlling the second drive unit 46 will be described as an example.
  • the focal position control unit 60G moves at least one of the optical system and the stage 26 to move the focal point of the objective lens 22 in the Y-axis direction or the X-axis direction.
  • the focal position control unit 60G moves the stage 26 in the Y-axis direction or the X-axis direction by controlling the first drive unit 44. A mode in which the focal point of the objective lens 22 is moved in the Y-axis direction or the X-axis direction by this control will be described as an example.
  • the sample T may be arranged at an angle with respect to the XY plane, which is an orthogonal plane orthogonal to the optical axis A2.
  • FIG. 4 is a schematic view showing an example of a case where the sample T is arranged at an angle with respect to the XY plane.
  • the sample T may be arranged at an angle with respect to the XY plane in the measurement target region 24.
  • the thickness of the sample T may not be constant.
  • the distance in the optical axis A2 direction between the objective lens 22 and each position of the sample T specified in the X-axis direction and the Y-axis direction may not be the same for every position, and may include different distances.
  • the position specified by the X-axis direction and the Y-axis direction is a position on the XY plane specified by the two-dimensional coordinates in the X-axis direction and the Y-axis direction.
  • control unit 60 of the present embodiment includes a calculation unit 60C.
  • the calculation unit 60C calculates at least one of the amplitude of the vibration of the optical system or the stage 26 and the image pickup condition by the image pickup unit 34 based on the phase difference acquired by the phase difference acquisition unit 60B.
  • the imaging condition by the imaging unit 34 includes the number of samplings in the Z-axis direction.
  • the number of samplings in the Z-axis direction means the number of images taken per cycle of vibration, which is wobbling.
  • the number of samples means the number of divisions when the measurement target area 24 is divided in the thickness direction (Z-axis direction) of the sample T included in the measurement target area 24 and imaged.
  • the number of samplings in the Z-axis direction may be referred to as the number of Z stacks. In the present embodiment, the number of samples may be referred to as the number of Z stacks.
  • Vibration amplitude means wobbling amplitude.
  • the amplitude of vibration corresponds to the Z stack spacing.
  • the amplitude may be referred to as a Z stack interval.
  • the imaging condition may include the amount of movement of the focal point in the X-axis direction which is the first direction or the Y-axis direction which is the second direction. This amount of movement corresponds to the amount of movement of the imaging range by the imaging unit 34 in the Y-axis direction or the X-axis direction each time.
  • the calculation unit 60C calculates, based on the line center of gravity distribution derived from the phase difference acquired by the phase difference acquisition unit 60B, at least one of the Z stack interval, which is the amplitude of the vibration, and the number of Z stacks, which is an imaging condition of the imaging unit 34.
  • the line center of gravity distribution means the distribution of the center of gravity of the light intensity distribution in the line direction of each of the images 72A and 72B constituting the pupil division image 72.
  • FIG. 5 is an explanatory diagram of the line center of gravity distribution g.
  • the image 72A is shown as an example.
  • the line center of gravity distribution g is the distribution in the line direction of the center of gravity of the light distribution in the YA axis direction in the image 72A, which is a line-shaped light long in the XA axis direction.
  • the image 72A has a long line shape in the XA axis direction. Therefore, in the pupil division image 70, the line center of gravity distribution g is represented by a line along the XA axis direction, which is the longitudinal direction of the image 72A.
  • the calculation unit 60C may specify the line center of gravity distribution g of either one of the image 72A and the image 72B constituting the pupil division image 72.
  • the calculation unit 60C may preset which of the image 72A and the image 72B is to be used, and specify the line center of gravity distribution g of the preset image 72A or 72B.
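  • as an illustration only, the line center of gravity distribution g can be sketched as a per-column intensity-weighted centroid of the line-shaped image. the Python sketch below assumes the image 72A has been cut out of the pupil division image 70 as a 2D array of light intensity values (rows along the YA axis, columns along the XA axis); the array name and function are hypothetical and are not part of the disclosure.

```python
import numpy as np

def line_centroid_distribution(image_72a: np.ndarray) -> np.ndarray:
    """Minimal sketch: per-column (XA-axis) centroid of the light intensity
    along the YA axis, i.e. the line center of gravity distribution g.
    `image_72a` is assumed to be a 2D array of light intensity values
    containing only the line-shaped image 72A (rows = YA, columns = XA)."""
    ya = np.arange(image_72a.shape[0], dtype=float)        # YA pixel coordinates
    total = image_72a.sum(axis=0)                          # per-column intensity sum
    total = np.where(total > 0, total, 1.0)                # guard against empty columns
    return (ya[:, None] * image_72a).sum(axis=0) / total   # intensity-weighted centroid g
```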
  • FIG. 6 is a schematic view showing an example of the pupil split image 70. It is assumed that the sample T is arranged at an angle with respect to the XY plane. In this case, as shown in FIG. 6, the light intensity distribution of each of the images 72A and 72B constituting the pupil division image 72 of the sample T inclined with respect to the XY plane (see FIG. 4) shows, relative to its line center of gravity distribution g (line center of gravity distribution ga, line center of gravity distribution gb), different distributions in the YA axis direction along the XA axis direction.
  • the first calculation unit 60D of the calculation unit 60C calculates the number of Z stacks based on the line center of gravity distribution of the image 72A or the image 72B.
  • a mode in which the first calculation unit 60D calculates the number of Z stacks based on the light intensity distribution with respect to the line center of gravity distribution ga, which is the line center of gravity distribution g of the image 72A will be described as an example.
  • the first calculation unit 60D may calculate the number of Z stacks based on the light intensity distribution with respect to the line center of gravity distribution gb, which is the line center of gravity distribution g of the image 72B.
  • specifically, the first calculation unit 60D counts the regions of the light intensity distribution of the image 72A that are divided by the line center of gravity distribution g and whose peaks differ from each other in distance from the line center of gravity distribution g, adds 1 to this count, and calculates the result as the number of Z stacks.
  • FIGS. 7A to 7C are explanatory views of the calculation of the number of Z stacks.
  • FIGS. 7A to 7C show the pupil division image 70C to the pupil division image 70E, respectively.
  • the pupil-divided image 70C to the pupil-divided image 70E are examples of the pupil-divided image 70.
  • in the pupil division image 70C shown in FIG. 7A, the image 72A constituting the pupil division image 72 is represented by a straight line intersecting the line center of gravity distribution g of the image 72A at one intersection I.
  • the light intensity distribution of the image 72A is divided into two regions EA and EB by the line center of gravity distribution g.
  • the peak P in the light intensity distribution means a point at which the distance from the line center of gravity distribution g at each position of the light intensity distribution in the XA axis direction changes from rising to falling or falling to rising in the YA axis direction.
  • in this case, the first calculation unit 60D calculates, as the number of Z stacks, "3", which is obtained by adding "1" to "2", the number of regions (regions EA and EB) having peaks P (peaks P1 and P2) at different distances from the line center of gravity distribution g.
  • in the pupil division image 70D shown in FIG. 7B, the image 72A constituting the pupil division image 72 is represented by a waveform that intersects the line center of gravity distribution g of the image 72A at three intersections I.
  • the light intensity distribution of the image 72A is divided into four regions EC, region ED, region EE, and region EF by the line center of gravity distribution g.
  • the peak P3 of the region EC and the peak P5 of the region EF have the same distance from the line center of gravity distribution g.
  • the peak P3 of the region EC, the peak P4 of the region ED, and the peak P6 of the region EF are different in distance from the line center of gravity distribution g.
  • in this case, the first calculation unit 60D calculates, as the number of Z stacks, "4", which is obtained by adding "1" to "3", the number of regions (region EC, region ED, and region EF) having peaks (peak P3, peak P4, and peak P6) at different distances from the line center of gravity distribution g.
  • in the pupil division image 70E shown in FIG. 7C, the image 72A constituting the pupil division image 72 is represented by a waveform that intersects the line center of gravity distribution g of the image 72A at one intersection I and then returns to the line center of gravity distribution g.
  • the light intensity distribution of the image 72A is divided into two regions EG and EH by the line center of gravity distribution g.
  • the peak P7 of the region EG and the peak P8 of the region EH are different in distance from the line center of gravity distribution g.
  • in this case, the first calculation unit 60D calculates, as the number of Z stacks, "3", which is obtained by adding "1" to "2", the number of regions (regions EG and EH) having peaks (peaks P7 and P8) at different distances from the line center of gravity distribution g.
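  • the counting rule illustrated in FIGS. 7A to 7C can be sketched as follows: take the distance of the image 72A from its line center of gravity distribution g at each XA position, split the profile into regions at the points where it crosses g, keep only regions whose peak distances differ from one another, and add 1. the Python sketch below is one assumed implementation of that rule; the tolerance for treating two peak distances as equal (as with peaks P3 and P5 in FIG. 7B) is a hypothetical parameter.

```python
import numpy as np

def number_of_z_stacks(line_ya: np.ndarray, centroid_g: np.ndarray,
                       tol: float = 0.5) -> int:
    """Assumed implementation of the counting rule of FIGS. 7A-7C.
    `line_ya`    : YA position of the image 72A for each XA column.
    `centroid_g` : line center of gravity distribution g for each XA column.
    `tol`        : hypothetical tolerance (in pixels) for treating two peak
                   distances from g as equal (e.g. peaks P3 and P5)."""
    d = line_ya - centroid_g                     # signed distance from g
    sign = np.sign(d)
    peaks = []                                   # peak |distance| of each region
    start = 0
    for i in range(1, len(d) + 1):
        # a region ends where the profile crosses g (sign change) or at the end
        if i == len(d) or (sign[i] != sign[i - 1] and sign[i] != 0):
            peaks.append(np.max(np.abs(d[start:i])))
            start = i
    distinct = []                                # keep only mutually different peaks
    for p in peaks:
        if all(abs(p - q) > tol for q in distinct):
            distinct.append(p)
    return len(distinct) + 1                     # regions with distinct peaks, plus 1
```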
  • the second calculation unit 60E of the calculation unit 60C calculates the Z stack interval.
  • the second calculation unit 60E calculates the Z stack interval using the pupil division image 72 included in the pupil division image 70.
  • FIG. 8 is a diagram 76 showing an example of the relationship between the focal length and the distance between the image 72A and the image 72B.
  • FIG. 9 is a schematic view of an example of the pupil split image 70F.
  • the pupil division image 70F is an example of the pupil division image 70.
  • the distance L between the image 72A and the image 72B constituting the pupil division image 72 included in the pupil division image 70F is proportional to the focal length between the objective lens 22 and the sample T.
  • the distance L between the image 72A and the image 72B is represented by the distance between the line center of gravity distribution ga which is the line center of gravity distribution g of the image 72A and the line center of gravity distribution gb which is the line center of gravity distribution g of the image 72B.
  • the second calculation unit 60E calculates the distance L between the image 72A and the image 72B constituting the pupil division image 72.
  • the second calculation unit 60E calculates the interval L using the following equations (1) to (3).
  • [Ytt, Ytb] means the range R of the light intensity distribution of the image 72A in the YA axis direction, as shown in FIG.
  • Ytt means the upper end R1 of the light intensity distribution of the image 72A in the YA axis direction.
  • Ytb means the lower end R2 of the light intensity distribution of the image 72A in the YA axis direction.
  • W means the pixel width of the pupil-divided image 70.
  • the pixel width means the width of one pixel in the X-axis direction or the Y-axis direction of the imaging range of the pupil-divided image 70 in the measurement target area 24. In the present embodiment, it is assumed that the width of one pixel in the X-axis direction and the Y-axis direction of the imaging range of the pupil-divided image 70 is the same.
  • a_black is the black-level average pixel value of the region of the pupil division image 70 other than the light receiving region of the pupil division image 72.
  • a_def is the noise level of the region of the pupil division image 70 other than the light receiving region of the pupil division image 72.
  • the second calculation unit 60E calculates the range R of the image 72A using the above equation (1). Further, the second calculation unit 60E calculates the range R of the image 72B in the same manner as for the image 72A.
  • the second calculation unit 60E calculates the line center of gravity distribution g of the image 72A using the equation (2).
  • Ytc shows the line center of gravity distribution g of image 72A.
  • the second calculation unit 60E calculates the line center of gravity distribution g of the image 72B in the same manner as in the image 72A.
  • the second calculation unit 60E calculates the interval L between the line center of gravity distribution g of the image 72A and the line center of gravity distribution g of the image 72B using the equation (3).
  • Y_phase indicates the distance L between the line center of gravity distribution g of the image 72A and the line center of gravity distribution g of the image 72B.
  • Ybc shows the line center of gravity distribution g of the image 72B.
  • Ytc shows the line center of gravity distribution g of image 72A.
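  • equations (1) to (3) themselves are not reproduced in this text, so the Python sketch below only assumes a plausible form consistent with the definitions above: the range R = [Ytt, Ytb] is taken as the extent where the intensity of a column exceeds a_black plus a_def, Ytc and Ybc are intensity-weighted centroids within that range, and Y_phase is their separation. these assumed forms stand in for the patent's equations and are for illustration only.

```python
import numpy as np

def column_range_and_centroid(column: np.ndarray, a_black: float, a_def: float):
    """Assumed per-column form of equations (1) and (2): the range R = [Ytt, Ytb]
    is the extent where the intensity exceeds a_black + a_def, and Ytc is the
    intensity-weighted centroid inside that range.  `column` is the light
    intensity of one XA column of the image 72A (or 72B) along the YA axis."""
    above = np.where(column > a_black + a_def)[0]
    if above.size == 0:
        return None, None, float("nan")
    ytt, ytb = int(above.min()), int(above.max())      # upper end R1, lower end R2
    ya = np.arange(ytt, ytb + 1, dtype=float)
    weights = column[ytt:ytb + 1] - a_black            # background-subtracted intensity
    ytc = float((ya * weights).sum() / weights.sum())  # line center of gravity
    return ytt, ytb, ytc

def phase_interval(col_72a: np.ndarray, col_72b: np.ndarray,
                   a_black: float, a_def: float) -> float:
    """Assumed form of equation (3): Y_phase, the interval L between the line
    centers of gravity Ytc (image 72A) and Ybc (image 72B) for one XA column."""
    _, _, ytc = column_range_and_centroid(col_72a, a_black, a_def)
    _, _, ybc = column_range_and_centroid(col_72b, a_black, a_def)
    return abs(ybc - ytc)
```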
  • the second calculation unit 60E calculates the distance between the line center of gravity distribution g of the image 72A and the line center of gravity distribution g of the image 72B for each divided region obtained by dividing the pupil division image 72 along its longitudinal direction (XA axis direction).
  • FIG. 10 is an explanatory diagram for calculating the interval L for each divided region.
  • FIG. 10 shows a pupil split image 70G as an example.
  • the pupil division image 70G is an example of the pupil division image 70.
  • the first calculation unit 60D calculates "3" as the number of Z stacks.
  • in this case, the second calculation unit 60E divides the pupil division image 72 included in the pupil division image 70 into three divided regions E (divided regions E1 to E3) along the XA axis direction, which is the longitudinal direction of the pupil division image 72. Then, the second calculation unit 60E calculates the distance L between the image 72A and the image 72B constituting the pupil division image 72 for each divided region E using the above equations (1) to (3).
  • specifically, the second calculation unit 60E calculates, as the interval L, the interval L1 between the line center of gravity distribution ga1 and the line center of gravity distribution gb1 for the divided region E1, the interval L2 between the line center of gravity distribution ga2 and the line center of gravity distribution gb2 for the divided region E2, and the interval L3 between the line center of gravity distribution ga3 and the line center of gravity distribution gb3 for the divided region E3.
  • the second calculation unit 60E calculates the Z stack interval based on the interval L.
  • the second calculation unit 60E calculates the focus adjustment amount corresponding to the difference between the interval L and the reference interval as the Z stack interval.
  • the reference distance is the distance L between the image 72A and the image 72B when the focal point of the objective lens 22 is in focus on the sample T.
  • the reference interval may be measured in advance using a known contrast method or the like.
  • for the second calculation unit 60E, relational information representing the relationship between the difference between the interval L and the reference interval and the focus adjustment amount required to focus on the sample T when the images 72A and 72B at that interval L are obtained is calculated in advance and stored in the storage unit 62.
  • the second calculation unit 60E calculates the focus adjustment amount by using the difference between the interval L and the reference interval and the relational information.
  • the second calculation unit 60E may calculate the calculated focus adjustment amount as the Z stack interval.
  • the second calculation unit 60E may calculate the Z stack interval for each division area E based on each interval L of the division area E.
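  • the relational information that maps the difference between the interval L and the reference interval to a focus adjustment amount is only described as being measured in advance and stored; the Python sketch below assumes, purely for illustration, a linear relation with a calibration slope k and applies it per divided region E to obtain a Z stack interval for each region.

```python
def z_stack_intervals(intervals_l, reference_interval, k):
    """Sketch of the Z stack interval calculation (assumed linear relation).
    `intervals_l`        : interval L measured for each divided region E,
                           e.g. [L1, L2, L3] from FIG. 10.
    `reference_interval` : interval L when the objective lens 22 is focused
                           on the sample T (measured in advance, e.g. by a
                           contrast method).
    `k`                  : hypothetical calibration slope standing in for the
                           stored relational information (which need not be
                           linear) between (L - reference) and the focus
                           adjustment amount.
    Returns the focus adjustment amount used as the Z stack interval for
    each divided region E."""
    return [k * (l - reference_interval) for l in intervals_l]

# usage sketch: delta_z = z_stack_intervals([12.4, 13.1, 13.9], 12.0, k=0.8)
```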
  • the focal position control unit 60G drives at least one of the optical system including the objective lens 22 and the stage 26 according to the number of Z stacks and the Z stack interval calculated by the calculation unit 60C.
  • the focus position control unit 60G drives and controls the second drive unit 46 that drives the objective lens 22, which is a focus lens that focuses the line illumination LA on the measurement target area 24.
  • the focal position control unit 60G moves the focal point of the objective lens 22 in the Z-axis direction.
  • specifically, the focal position control unit 60G drives and controls the second drive unit 46 to move the position of the objective lens 22 in the Z-axis direction toward the stage 26 in steps, the number of steps being represented by the number of Z stacks.
  • the focus position control unit 60G may move the position of the objective lens 22 in the Z-axis direction by driving and controlling at least one of the second drive unit 46 and the first drive unit 44.
  • the Z stack interval per step may be a predetermined amount.
  • the focus position control unit 60G may use the Z stack interval calculated by the calculation unit 60C as the Z stack interval per step.
  • the form in which the focus position control unit 60G uses the Z stack interval calculated by the calculation unit 60C will be described as an example.
  • the focus position control unit 60G may use the Z stack interval corresponding to each divided region E as the Z stack interval per step.
  • the captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T from the imaging unit 34 for each wobbling step by the focus position control unit 60G.
  • the processing by the focus position control unit 60G and the captured image acquisition unit 60H will be described in detail.
  • FIG. 11 is an explanatory diagram of an example of wobbling according to the present embodiment.
  • FIG. 11 shows an example of the XZ cross section of the measurement target region 24 mounted on the stage 26.
  • FIG. 11 shows a case where the number of Z stacks is “3” as an example.
  • FIG. 11 shows a form in which different Z stack intervals ( ⁇ Z1 to ⁇ Z3) are calculated as Z stack intervals corresponding to each of the three divided regions E.
  • in this case, the focal position control unit 60G moves the focal position FP in the thickness direction (Z-axis direction) of the measurement target region 24 three times, which is the number of Z stacks (see focal position FP1 to focal position FP3). Then, each time the focal position FP has been moved three times, which is the number of Z stacks, the focal position control unit 60G moves the imaging range S in the X-axis direction, which is the longitudinal direction of the irradiation region of the line illumination LA in the measurement target region 24.
  • the imaging range S means the imaging range of the imaging unit 34.
  • the focal position control unit 60G moves the imaging range S in the X-axis direction.
  • the focal position control unit 60G drives and controls the first drive unit 44 to move the image pickup range S of the image pickup unit 34 along the X-axis direction by the width W of the image pickup range S.
  • the width W has the same meaning as the pixel width described above.
  • the focus position control unit 60G moves the focus position FP three times, which is the number of Z stacks, in the thickness direction of the measurement target area 24.
  • the focal position control unit 60G moves the imaging range S of the imaging unit 34 in the thickness direction of the measurement target region 24.
  • the focal position control unit 60G moves the imaging range S in the X-axis direction by the width W of the imaging range S each time the focal position FP is moved three times in the thickness direction.
  • the focus position control unit 60G repeatedly executes these series of processes.
  • the focal position control unit 60G moves the focal position FP of the objective lens 22 by the Z stack interval ⁇ Z1 for the divided region E1. Further, the focal position control unit 60G moves the focal position FP of the objective lens 22 by the Z stack interval ⁇ Z2 for the divided region E2. Further, the focal position control unit 60G moves the focal position FP of the objective lens 22 by the Z stack interval ⁇ Z3 for the divided region E3.
  • that is, the focal position control unit 60G moves the focal position FP to each of the focal positions FP1 to FP9, and moves the imaging range S of the imaging unit 34 in order to the positions of the imaging ranges S1 to S9 in FIG. 11.
  • the captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. Therefore, the captured image acquisition unit 60H acquires a focused captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
  • alternatively, each time the focal position control unit 60G moves the focal position FP once in the thickness direction (Z-axis direction) of the measurement target region 24, it may move the imaging range S along the X-axis direction by a movement amount (W / N) obtained by dividing the width W of the imaging range S by the number of Z stacks, where W indicates the width W and N indicates the number of Z stacks.
  • the first calculation unit 60D may calculate the movement amount of the focal point, that is, the movement amount of the imaging range S by dividing the width W by the number of Z stacks calculated by the above process.
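  • the stepping pattern of FIG. 11 and FIG. 12A can be summarized as a nested loop: at each lateral position of the imaging range S, the focal position FP is stepped the number of Z stacks times by the Z stack interval of the current divided region E, a captured image is acquired at each step, and the imaging range S is then advanced either by the width W after the full set of focus steps (FIG. 11) or by W / N after every focus step (FIG. 12A). the Python sketch below illustrates this under the assumption of hypothetical driver and camera callables; it is not the disclosed control software.

```python
def wobble_scan(num_z_stacks, z_intervals, width_w,
                move_focus_z, move_range_x, capture, per_z_step_shift=False):
    """Sketch of the wobbling of FIG. 11 (and FIG. 12A when per_z_step_shift=True).
    `z_intervals`      : Z stack interval for each divided region E, e.g. [dZ1, dZ2, dZ3].
    `width_w`          : width W of the imaging range S.
    `move_focus_z`     : hypothetical callable stepping the focal position FP by dZ.
    `move_range_x`     : hypothetical callable shifting the imaging range S by dX.
    `capture`          : hypothetical callable acquiring one captured image.
    `per_z_step_shift` : False -> shift by W after the N focus steps (FIG. 11);
                         True  -> shift by W / N after every focus step (FIG. 12A)."""
    images = []
    for dz in z_intervals:                       # one divided region E per Z stack interval
        for _ in range(num_z_stacks):            # N focus steps toward the stage 26
            images.append(capture())             # captured image at the current FP
            move_focus_z(dz)
            if per_z_step_shift:
                move_range_x(width_w / num_z_stacks)
        move_focus_z(-num_z_stacks * dz)         # return FP to the surface side
        if not per_z_step_shift:
            move_range_x(width_w)                # advance the imaging range S by W
    return images
```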
  • FIG. 12A is an explanatory diagram of an example of wobbling according to the present embodiment.
  • FIG. 12A shows an example of the XZ cross section of the measurement target region 24 mounted on the stage 26. Further, FIG. 12A shows a mode in which the number of Z stacks is “3” and the width W of the imaging range S is moved by 1/3 in the X-axis direction.
  • the focal position control unit 60G moves the imaging range S by a distance W / N along the X-axis direction each time the focal position FP is moved in the thickness direction once.
  • the focal position control unit 60G executes this series of processes three times, which is the number of Z stacks, from the surface of the measurement target region 24 toward the stage 26. Then, each time the focal position FP has been moved three times in the thickness direction, the focal position control unit 60G returns the imaging range S to the surface side of the measurement target region 24 and moves the imaging range S by a distance W / N along the X-axis direction. The focal position control unit 60G repeatedly executes this series of processes.
  • that is, the focal position control unit 60G moves the focal position FP to each of the focal positions FP1 to FP9, and moves the imaging range S of the imaging unit 34 in order to the positions of the imaging ranges S1 to S9 in FIG. 12A.
  • the captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. Therefore, the captured image acquisition unit 60H acquires a focused captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
  • alternatively, the focal position control unit 60G may move the focal position FP in the thickness direction (Z-axis direction) of the measurement target region 24 each time the focal point of the objective lens 22 is moved from one end side to the other end side along the X-axis direction, which is the longitudinal direction of the irradiation region of the line illumination LA in the measurement target region 24.
  • FIG. 12B is an explanatory diagram of an example of wobbling according to the present embodiment.
  • FIG. 12B shows an example of the XZ cross section of the measurement target region 24 mounted on the stage 26. Further, FIG. 12B shows a mode in which the number of Z stacks is “3” and the width W of the imaging range S is moved by 1/3 in the X-axis direction. W and N are the same as above.
  • the focal position control unit 60G moves the imaging range S by the width W along the X-axis direction, which is the longitudinal direction of the irradiation region of the line illumination LA in the measurement target region 24. Then, each time the focus position control unit 60G moves the imaging range S from one end to the other end of the measurement target area 24 in the X-axis direction, the focus position control unit 60G moves the focus position FP in the thickness direction of the measurement target area 24 by a Z stack interval. In addition, a series of processes for moving the imaging range S by a distance W / N in the X-axis direction is executed three times, which is the number of Z stacks.
  • that is, the focal position control unit 60G moves the focal position FP to each of the focal positions FP1 to FP9, and moves the imaging range S of the imaging unit 34 in order to the positions of the imaging ranges S1 to S9 in FIG. 12B.
  • the captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. Therefore, the captured image acquisition unit 60H acquires a focused captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
  • alternatively, each time the focal position control unit 60G moves the focal position FP once in the thickness direction (Z-axis direction) of the measurement target region 24, it may move the imaging range S along the Y-axis direction, which intersects the X-axis direction, by a distance (W / N) obtained by dividing the width W of the imaging range S by the number of Z stacks.
  • the width W in this case may be the width in the Y-axis direction in the imaging range S.
  • the Y-axis direction corresponds to the scanning direction of the line illumination LA.
  • FIG. 13A is an explanatory diagram of an example of wobbling according to the present embodiment.
  • FIG. 13A shows an example of a YZ cross section of the measurement target region 24 mounted on the stage 26. Further, FIG. 13A shows a mode in which the number of Z stacks is “3” and the width W of the imaging range S is moved by 1/3 in the Y-axis direction.
  • the focal position control unit 60G moves the imaging range S by a distance W / N along the Y-axis direction each time the focal position FP is moved in the thickness direction once.
  • the focal position control unit 60G executes this series of processes three times, which is the number of Z stacks, from the surface of the measurement target region 24 toward the stage 26. Then, each time the focal position FP has been moved three times in the thickness direction, the focal position control unit 60G returns the imaging range S to the surface side of the measurement target region 24 and moves the imaging range S by a distance W / N along the Y-axis direction. The focal position control unit 60G repeatedly executes this series of processes.
  • that is, the focal position control unit 60G moves the focal position FP to each of the focal positions FP1 to FP9, and moves the imaging range S of the imaging unit 34 in order to the positions of the imaging ranges S1 to S9 in FIG. 13A.
  • the captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. Therefore, the captured image acquisition unit 60H acquires a focused captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
  • alternatively, the focal position control unit 60G may move the focal position FP in the thickness direction (Z-axis direction) of the measurement target region 24 each time the focal point of the objective lens 22 is moved from one end side to the other end side along the Y-axis direction, which is the scanning direction of the line illumination LA, in the measurement target region 24.
  • FIG. 13B is an explanatory diagram of an example of wobbling according to the present embodiment.
  • FIG. 13B shows an example of a YZ cross section of the measurement target region 24 mounted on the stage 26. Further, FIG. 13B shows a mode in which the number of Z stacks is “3” and the width W of the imaging range S is moved by 1/3 in the Y-axis direction. W and N are the same as above.
  • the focal position control unit 60G moves the imaging range S by the width W along the Y-axis direction, which is the scanning direction of the line illumination LA in the measurement target area 24. Then, each time the focus position control unit 60G moves the imaging range S from one end to the other end of the measurement target area 24 in the Y-axis direction, the focus position control unit 60G moves the focus position FP in the thickness direction of the measurement target area 24 by a Z stack interval. In addition, a series of processes for moving the imaging range S by a distance W / N in the Y-axis direction is executed three times, which is the number of Z stacks.
  • that is, the focal position control unit 60G moves the focal position FP to each of the focal positions FP1 to FP9, and moves the imaging range S of the imaging unit 34 in order to the positions of the imaging ranges S1 to S9 in FIG. 13B.
  • the captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. Therefore, the captured image acquisition unit 60H acquires a focused captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
  • the captured image acquisition unit 60H acquires captured images of a plurality of imaging ranges S (imaging range S1 to imaging range S9) having different focal positions FP for one measurement target region 24.
  • The imaging ranges S1 to S9 are imaging ranges S in which at least a part of at least one of the thickness direction (Z-axis direction) of the measurement target region 24 and the direction along the XY plane does not overlap between ranges. Each of the captured images of the imaging ranges S1 to S9 is a captured image captured with the corresponding imaging range S as the focal position FP. Therefore, as shown in FIGS. 11 to 13B, even when the sample T is arranged at an angle with respect to the XY plane, which is an orthogonal plane orthogonal to the optical axis A2, the captured image acquisition unit 60H can obtain captured images focused on each position of the sample T.
  • the output control unit 60I outputs the captured image acquired by the captured image acquisition unit 60H to an external device such as the server device 10 via the communication unit 64.
  • the output control unit 60I may store the captured image acquired by the captured image acquisition unit 60H in the storage unit 62. Further, the output control unit 60I may output the captured image to the display connected to the control unit 60.
  • The output control unit 60I may analyze the captured image acquired by the captured image acquisition unit 60H by a known method to determine the type of the sample T and the like, and output the analysis result to the server device 10 or the like.
  • FIG. 14 is a flowchart showing an example of the flow of information processing executed by the control device 16.
  • the light source control unit 60A controls the light source 18B so as to irradiate the line illumination LA (step S100).
  • The line illumination LA is emitted from the light source 18B and irradiates the sample T.
  • The phase difference acquisition unit 60B acquires, from the pupil division image imaging unit 42, the pupil division image 72 of the light emitted from the sample T irradiated with the line illumination LA (step S102). By the process of step S102, the phase difference acquisition unit 60B acquires the image 72A and the image 72B, which constitute the pupil division image 72, as the phase difference.
  • the first calculation unit 60D calculates the number of Z stacks based on the line center of gravity distribution g of the pupil division image 72 acquired in step S102 (step S104).
  • the second calculation unit 60E calculates the distance L between the image 72A and the image 72B constituting the pupil division image 72 acquired in step S102 (step S106).
  • the second calculation unit 60E calculates the Z stack interval based on the interval L calculated in step S106 (step S108).
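  • Steps S104 to S108 can be pictured with the minimal NumPy sketch below. It assumes that the pupil-split image is available as a 2-D intensity array, that the line center of gravity distribution g is the intensity-weighted centroid row computed column by column, that the number of Z stacks follows the counting rule stated in configuration (5) below (number of distinct-distance regions plus 1), and that the Z stack interval is obtained from the separation L through a simple linear calibration. All function names, the tolerance parameter, and the calibration constant are illustrative assumptions rather than quantities defined in the disclosure.

```python
import numpy as np

def line_centroid_distribution(pupil_image: np.ndarray) -> np.ndarray:
    """Intensity-weighted centroid row (YA coordinate) for every XA column."""
    rows = np.arange(pupil_image.shape[0])[:, None]
    total = np.maximum(pupil_image.sum(axis=0), 1e-12)
    return (rows * pupil_image).sum(axis=0) / total

def number_of_z_stacks(centroid: np.ndarray, level_tolerance: float) -> int:
    """Count centroid regions lying at distinct distances and add 1 (step S104)."""
    levels = np.round(centroid / level_tolerance)   # group columns into height levels
    return len(np.unique(levels)) + 1

def z_stack_interval(separation_l: float, defocus_per_pixel: float,
                     num_z_stacks: int) -> float:
    """Derive a Z stack interval from the separation L (steps S106 and S108).

    defocus_per_pixel is an assumed linear calibration standing in for the
    measured relationship between focal length and interval; dividing the
    estimated defocus range by the number of Z stacks is likewise an assumption.
    """
    return separation_l * defocus_per_pixel / max(num_z_stacks, 1)
```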
  • The focal position control unit 60G executes the focal position control (step S110). Specifically, the focal position control unit 60G drives and controls the focal position FP so as to move it once in the thickness direction (Z-axis direction) of the measurement target region 24 or in the direction along the XY plane. When moving the focal position FP in the thickness direction of the measurement target region 24, the focal position control unit 60G moves the focal position FP in the thickness direction by the Z stack interval calculated in step S108. Further, as described with reference to FIGS. 11 to 13B, the focal position control unit 60G moves the focal position FP (imaging range S) once in the thickness direction (Z-axis direction) of the measurement target region 24 or in the direction along the XY plane.
  • The focal position control unit 60G may move the focal position FP of the objective lens 22 in the Z-axis direction by using, among the Z stack intervals calculated in step S108 for each of the divided regions E, the Z stack interval of the corresponding divided region E.
  • the focus position control unit 60G may use the number of Z stacks calculated in step S104 for all the divided regions E of the measurement target region 24.
  • The captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T from the imaging unit 34 each time the focal position control unit 60G moves the focal position FP (imaging range S) (step S112).
  • the output control unit 60I stores the captured image acquired in step S112 in the storage unit 62 (step S114).
  • The control unit 60 determines whether or not to end the process (step S116). For example, the control unit 60 determines whether or not the captured images of the respective imaging ranges S1 to S9 at the focal positions FP1 to FP9 described with reference to FIGS. 11 to 13B have been acquired for the measurement target region 24. When the control unit 60 determines that all of these captured images have been acquired, it makes an affirmative determination (step S116: Yes) and ends this routine. Otherwise, the control unit 60 makes a negative determination (step S116: No) and returns to step S110.
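  • The overall flow of FIG. 14 can be summarized as the following Python-style control loop. The light_source, phase_sensor, calculator, focus_driver, camera, and storage objects and their method names are hypothetical placeholders for the units described above; the sketch only mirrors the ordering of steps S100 to S116.

```python
def acquisition_loop(light_source, phase_sensor, calculator, focus_driver, camera, storage):
    """Mirror of steps S100 to S116 of FIG. 14 (illustrative placeholders only)."""
    light_source.turn_on_line_illumination()                   # S100
    pupil_split = phase_sensor.acquire_pupil_split_image()     # S102
    num_z_stacks = calculator.number_of_z_stacks(pupil_split)  # S104
    separation_l = calculator.separation(pupil_split)          # S106
    z_interval = calculator.z_stack_interval(separation_l)     # S108

    while True:
        focus_driver.move_focal_position(z_interval)           # S110: one wobbling step
        image = camera.capture()                               # S112
        storage.save(image)                                    # S114
        if focus_driver.all_positions_visited():               # S116
            break                                              # Yes: end this routine
        # No: the loop returns to the wobbling step (S110)
```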
  • The microscope system 1 of the present embodiment includes an imaging unit 34, an irradiation unit 18, a stage 26, a phase difference acquisition unit 60B, a focal position control unit 60G, and a calculation unit 60C.
  • the irradiation unit 18 irradiates the line illumination LA parallel to the first direction (X-axis direction).
  • the stage 26 supports the sample T and can move in the second direction (Y-axis direction) perpendicular to the first direction.
  • the phase difference acquisition unit 60B acquires the phase difference of the light images (image 72A, image 72B) emitted from the sample T by being irradiated with the line illumination LA.
  • The focal position control unit 60G vibrates the optical system including the objective lens 22, or the stage 26, to move the focal point of the objective lens 22 in the third direction (Z-axis direction), which is perpendicular to each of the first direction and the second direction.
  • the calculation unit 60C calculates at least one of the vibration amplitude and the image pickup condition by the image pickup unit 34 based on the phase difference.
  • the sample T may be arranged at an angle with respect to the XY plane, which is an orthogonal plane orthogonal to the optical axis A2.
  • the thickness of the sample T may not be constant.
  • a technique of identifying an optimum focal length from an image captured by an imager arranged at an angle is disclosed. Further, as a conventional technique, for example, a technique for obtaining a Z-stack image while maintaining a constant difference from the focusing depth for prefocus is disclosed.
  • However, with these conventional techniques, imaging focused on each position of the sample T, that is, imaging focused on each position of the sample T in the X-axis direction and the Y-axis direction, has not been performed.
  • Further, conventionally, in broadband microscope imaging such as focusing over the entire visible range or in blue and observing in infrared light, when the wavelength of the light used for focusing differs from the observation wavelength, the image may be out of focus at the imaging wavelength due to the chromatic aberration of the lens.
  • In the microscope system 1 of the present embodiment, on the other hand, at least one of the amplitude of the vibration of the optical system including the objective lens 22 or the stage 26 and the imaging condition by the imaging unit 34 is calculated based on the phase difference of the image of the light emitted from the sample T by being irradiated with the line illumination LA.
  • Therefore, with the microscope system 1 of the present embodiment, it is possible to obtain a captured image focused at each position in the thickness direction of the sample T.
  • the microscope system 1 of the present embodiment can obtain a highly accurate captured image focused on each position of the sample T.
  • In the microscope system 1 of the present embodiment, the imaging conditions, including the number of Z stacks, which is the number of samplings in the third direction (Z-axis direction), are calculated based on the line center of gravity distribution g of the pupil division image 72. The focal position FP is then moved according to the calculated imaging conditions, and each time the focal position FP is moved, a captured image of the light emitted from the sample T is acquired. Therefore, in the present embodiment, focusing and imaging can be performed alternately, and a captured image in which the chromatic aberration of the lens is canceled can be obtained.
  • the microscope system 1 of the present embodiment can obtain a clear captured image in which chromatic aberration is suppressed.
  • the microscope system 1 of the present embodiment uses line illumination LA as the illumination to be irradiated at the time of wobbling. Therefore, the irradiation time of the light on the measurement target region 24 can be shortened as compared with the case where the line-shaped illumination is not used. Therefore, in addition to the above effects, the microscope system 1 of the present embodiment can suppress the fading of the sample T contained in the measurement target region 24.
  • In the above embodiment, the mode in which the focal position control unit 60G moves the focal position FP by driving and controlling at least one of the first drive unit 44 and the second drive unit 46, which move the relative positions of the objective lens 22 and the measurement target area 24, has been described.
  • the focal position control unit 60G may move the focal position FP by driving and controlling a member that changes the optical path length between the objective lens 22 and the measurement target region 24.
  • FIG. 15 is a schematic view showing an example of the microscope system 1B of this modified example.
  • the microscope system 1B has the same configuration as the microscope system 1 except that it further includes a third drive unit 48 and a variable member 50.
  • the variable member 50 is a member for changing the optical path length between the objective lens 22 that concentrates the line illumination LA on the measurement target area 24 and the measurement target area 24.
  • the variable member 50 is arranged, for example, between the objective lens 22 and the measurement target region 24.
  • FIGS. 16A to 16C are schematic views showing an example of the variable member 50.
  • FIG. 16A is an example of a side view of the variable member 50.
  • FIG. 16B is an example of a front view of the variable member 50.
  • FIG. 16C is an example of a perspective view of the variable member 50.
  • the variable member 50 is a disk-shaped member rotatably supported with the rotation center C as the rotation axis.
  • the variable member 50 may be made of a material that transmits line illumination LA.
  • the variable member 50 is made of, for example, optical glass.
  • the variable member 50 includes a plurality of regions 52 (regions 52A to 52C) having different thicknesses along the circumferential direction.
  • the rotation center C of the variable member 50 is connected to the third drive unit 48. By driving the third drive unit 48, the variable member 50 rotates in the direction of arrow Q with the rotation center C as the rotation axis.
  • the rotation of the variable member 50 changes the region 52 (regions 52A to 52C) through which the line illumination LA passes, so that the optical path length changes.
  • the third drive unit 48 is electrically connected to the control device 16.
  • The focal position control unit 60G of the control device 16 drives and controls the third drive unit 48 to rotate the variable member 50 and change the optical path length between the objective lens 22 and the measurement target region 24, and may thereby move the focal position FP.
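  • The effect of the rotating variable member 50 can be estimated with the standard plane-parallel-plate approximation: inserting glass of thickness t and refractive index n into a converging beam shifts the focal point along the axis by roughly t(1 - 1/n), so switching between the regions 52A to 52C of different thickness moves the focal position FP by the difference of these shifts. The sketch below is only that textbook paraxial estimate, with example values, and is not a formula taken from the disclosure.

```python
def focal_shift_from_plate(thickness_mm: float, refractive_index: float) -> float:
    """Paraxial axial focal shift caused by a plane-parallel glass plate."""
    return thickness_mm * (1.0 - 1.0 / refractive_index)

# Example: switching from a 1.0 mm region to a 1.3 mm region of glass with n = 1.52
shift = focal_shift_from_plate(1.3, 1.52) - focal_shift_from_plate(1.0, 1.52)
print(f"focal position moves by about {shift:.3f} mm")
```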
  • FIG. 17 is a hardware configuration diagram showing an example of a computer 1000 that realizes the functions of the control device 16 according to the above embodiment and the modified example.
  • the computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input / output interface 1600. Each part of the computer 1000 is connected by a bus 1050.
  • the CPU 1100 operates based on the program stored in the ROM 1300 or the HDD 1400, and controls each part. For example, the CPU 1100 expands the program stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes a process corresponding to the program.
  • the ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 is started, a program depending on the hardware of the computer 1000, and the like.
  • the HDD 1400 is a computer-readable recording medium that non-temporarily records a program executed by the CPU 1100 and data used by the program.
  • the HDD 1400 is a recording medium for recording the focus adjustment program according to the present disclosure, which is an example of the program data 1450.
  • the communication interface 1500 is an interface for the computer 1000 to connect to an external network 1550 (for example, the Internet).
  • the CPU 1100 receives data from another device or transmits data generated by the CPU 1100 to another device via the communication interface 1500.
  • the input / output interface 1600 is an interface for connecting the input / output device 1650 and the computer 1000.
  • the CPU 1100 receives data from an input device such as a keyboard or mouse via the input / output interface 1600. Further, the CPU 1100 transmits data to an output device such as a display, a speaker, or a printer via the input / output interface 1600. Further, the input / output interface 1600 may function as a media interface for reading a program or the like recorded on a predetermined recording medium (media).
  • The media include, for example, an optical recording medium such as a DVD (Digital Versatile Disc) or a PD (Phase change rewritable Disc), a magneto-optical recording medium such as an MO (Magneto-Optical disc), a tape medium, a magnetic recording medium, a semiconductor memory, or the like.
  • The CPU 1100 of the computer 1000 realizes the functions of the light source control unit 60A, the phase difference acquisition unit 60B, the calculation unit 60C, the first calculation unit 60D, the second calculation unit 60E, the focal position control unit 60G, the captured image acquisition unit 60H, and the output control unit 60I by executing the program loaded on the RAM 1200.
  • the program and data related to the present disclosure are stored in the HDD 1400.
  • the CPU 1100 reads the program data 1450 from the HDD 1400 and executes the program, but as another example, these programs may be acquired from another device via the external network 1550.
  • the present technology can also have the following configurations.
  • (1) A microscope system including: an imaging unit; an irradiation unit that irradiates line illumination parallel to a first direction; a stage that supports a sample and can move in a second direction perpendicular to the first direction; a phase difference acquisition unit that acquires the phase difference of an image of light emitted from the sample by being irradiated with the line illumination; a focal position control unit that vibrates an optical system including an objective lens, or the stage, to move the focal point of the objective lens in a third direction perpendicular to each of the first direction and the second direction; and a calculation unit that calculates, based on the phase difference, at least one of the amplitude of the vibration and an imaging condition by the imaging unit.
  • (2) The microscope system according to (1), wherein the calculation unit calculates a line center of gravity distribution based on the phase difference, and calculates at least one of the amplitude of the vibration and the imaging condition by the imaging unit.
  • (3) The microscope system according to (1) or (2), wherein the phase difference acquisition unit acquires, as the phase difference, a pupil-split image of the image of light emitted from the sample.
  • (4) The calculation unit calculates, based on the phase difference, the imaging condition including the number of samplings in the third direction.
  • (5) The calculation unit derives, as the number of samplings, the number obtained by adding 1 to the number of regions in the image that have peaks divided by the line center of gravity distribution and that lie at different distances from the line center of gravity distribution.
  • (6) The microscope system according to (4), wherein the calculation unit calculates the amplitude for each divided region based on the phase difference for each divided region obtained by dividing the pupil-split image of the image of light emitted from the sample into the number of samplings along the longitudinal direction.
  • (7) The microscope system according to any one of (1) to (6), wherein the focal position control unit moves at least one of the optical system including the objective lens and the stage to move the focal point of the objective lens in the first direction or the second direction, and the calculation unit calculates, based on the phase difference, the imaging condition including the amount of movement of the focal point in the first direction or the second direction.
  • (8) The microscope system according to any one of (1) to (7), wherein the focal position control unit moves the focal point of the objective lens in the first direction or the second direction each time at least one of the optical system and the stage is moved to move the focal point of the objective lens in the third direction.
  • (9) The focal position control unit moves the focal point of the objective lens in the third direction each time at least one of the optical system and the stage is moved to move the focal point of the objective lens in the first direction or the second direction from one end side to the other end side of the sample.
  • (10) The microscope system according to any one of (1) to (7), wherein the focal position control unit moves the focal point of the objective lens by driving and controlling a variable member that changes the optical path length between the objective lens and the sample.
  • (11) An imaging method including: a step in which a computer that controls an imaging unit, an irradiation unit that irradiates line illumination parallel to a first direction, and a stage that supports a sample and can move in a second direction perpendicular to the first direction acquires the phase difference of an image of light emitted from the sample by being irradiated with the line illumination; a step of vibrating an optical system including an objective lens, or the stage, to move the focal point of the objective lens in a third direction perpendicular to each of the first direction and the second direction; and a step of calculating, based on the phase difference, at least one of the amplitude of the vibration and an imaging condition by the imaging unit.
  • (12) An imaging device including a measuring unit and software used to control the operation of the measuring unit, wherein the software is installed in the imaging device; the measuring unit includes an imaging unit, an irradiation unit that irradiates line illumination parallel to a first direction, and a stage that supports a sample and can move in a second direction perpendicular to the first direction; and the software causes the imaging device to acquire the phase difference of an image of light emitted from the sample by being irradiated with the line illumination, to vibrate an optical system including the objective lens, or the stage, to move the focal point of the objective lens in a third direction perpendicular to each of the first direction and the second direction, and to calculate, based on the phase difference, at least one of the amplitude of the vibration and an imaging condition by the imaging unit.
  • 1 Microscope system, 12 Imaging device, 18 Irradiation unit, 22 Objective lens, 24 Measurement target area, 34 Imaging unit, 42 Pupil split image imaging unit, 44 First drive unit, 46 Second drive unit, 48 Third drive unit, 50 Variable member, 60B Phase difference acquisition unit, 60C Calculation unit, 60D First calculation unit, 60E Second calculation unit, 60G Focal position control unit, 60H Captured image acquisition unit, T Specimen

Abstract

This microscope system includes an imaging unit, an irradiation unit, a stage, a phase difference acquisition unit, a focal position control unit, and a calculation unit. The irradiation unit emits line illumination that is parallel to a first direction. The stage supports a specimen and can move in a second direction that is perpendicular to the first direction. The phase difference acquisition unit acquires a phase difference of an image of light emitted from the specimen irradiated with the line illumination. The focal position control unit vibrates an optical system including an objective lens, or the stage, to move the focal point of the objective lens in a third direction that is perpendicular to each of the first direction and the second direction. The calculation unit calculates at least one of an amplitude of the vibration and an imaging condition of the imaging unit on the basis of the phase difference.

Description

Microscope system, imaging method, and imaging device

The present disclosure relates to a microscope system, an imaging method, and an imaging device.

A technique for obtaining a captured image of a sample by irradiating the sample with light and receiving the light emitted from the sample is disclosed. For example, a technique for obtaining a captured image focused on a sample by combining such a system with an autofocus system is disclosed. Further, a technique of identifying an optimum focal length from a captured image captured by an imager arranged at an angle, a technique of obtaining a Z-stack image for prefocus, and the like are disclosed.

Japanese Patent No. 5809248, Japanese Patent No. 5829621, Japanese Patent No. 5923026, Japanese Patent No. 6023012

Here, the sample to be measured may be arranged at an angle with respect to an orthogonal plane orthogonal to the optical axis. In this case, it has conventionally been difficult to obtain a captured image focused on each position of the sample. In addition, in broadband microscope imaging such as focusing over the entire visible range or in blue and observing in infrared light, when the wavelength of the light used for focusing differs from the observation wavelength, the image may be out of focus at the imaging wavelength because of the chromatic aberration of the lens.

Therefore, the present disclosure proposes a microscope system, an imaging method, and an imaging device capable of obtaining a highly accurate captured image focused on each position of the sample.

In order to solve the above problems, one form of the microscope system according to the present disclosure includes: an imaging unit; an irradiation unit that irradiates line illumination parallel to a first direction; a stage that supports a sample and can move in a second direction perpendicular to the first direction; a phase difference acquisition unit that acquires the phase difference of an image of light emitted from the sample by being irradiated with the line illumination; a focal position control unit that vibrates an optical system including an objective lens, or the stage, to move the focal point of the objective lens in a third direction perpendicular to each of the first direction and the second direction; and a calculation unit that calculates, based on the phase difference, at least one of the amplitude of the vibration and an imaging condition by the imaging unit.

Brief description of the drawings: a schematic diagram showing an example of the microscope system according to the embodiment of the present disclosure; a schematic diagram showing an example of the pupil-split image according to the embodiment; a diagram showing an example of the functional configuration of the control device according to the embodiment; a schematic diagram showing an example of the sample according to the embodiment; an explanatory diagram of the line center of gravity distribution according to the embodiment; a schematic diagram showing an example of the pupil-split image according to the embodiment; explanatory diagrams of the calculation of the number of Z stacks according to the embodiment; a diagram showing an example of the relationship between the focal length and the interval according to the embodiment; a schematic diagram of an example of the pupil-split image according to the embodiment; an explanatory diagram of the calculation of the interval according to the embodiment; explanatory diagrams of examples of wobbling according to the embodiment; a flowchart showing an example of the flow of information processing according to the embodiment; a schematic diagram showing an example of the microscope system according to a modification of the present disclosure; schematic diagrams showing an example of the variable member according to the modification; and a hardware configuration diagram according to the embodiment and the modification.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. In the following embodiments, the same parts are designated by the same reference numerals, and duplicate description will be omitted.

FIG. 1 is a schematic view showing an example of the microscope system 1 of the present embodiment.

The microscope system 1 is a system that irradiates the sample T with the line illumination LA and receives the light emitted from the sample T. Details of the line illumination LA and the sample T will be described later.

The microscope system 1 includes an imaging device 12. The imaging device 12 is communicably connected to the server device 10 via, for example, a wireless communication network such as the network N or a wired communication network. The server device 10 may be a computer.

In the present embodiment, the direction along which the objective lens 22 (described later) and the sample T approach or move away from each other is referred to as the Z-axis direction. The Z-axis direction is assumed to coincide with the thickness direction of the sample T. In the present embodiment, the case where the Z-axis direction and the optical axis A2 of the objective lens 22 are parallel is assumed. The stage 26 (described later) is assumed to form a two-dimensional plane represented by the two axes (the X-axis direction and the Y-axis direction) orthogonal to the Z-axis direction, and a plane parallel to the two-dimensional plane of the stage 26 may be referred to as the XY plane. Details of each of these parts will be described later.

The imaging device 12 includes a measuring unit 14 and a control device 16. The measuring unit 14 and the control device 16 are connected so as to be able to exchange data or signals.

The measuring unit 14 has an optical mechanism for measuring the light emitted from the measurement target region 24. The measuring unit 14 is applied to, for example, an optical microscope.

The measuring unit 14 includes an irradiation unit 18, a split mirror 20, an objective lens 22, a stage 26, a half mirror 28, an imaging optical unit 30, a focus detection unit 36, a first drive unit 44, and a second drive unit 46.
The irradiation unit 18 irradiates the line illumination LA parallel to the first direction.

The line illumination LA is light having a line shape that is long in the first direction. Specifically, the line illumination LA is light whose luminous flux, in a two-dimensional plane orthogonal to the optical axis, is several times (for example, 100 times or more) longer in the first direction than in the direction orthogonal to the first direction. In the present embodiment, the case where the first direction, which is the longitudinal direction of the line illumination LA, coincides with the X-axis direction in FIG. 1 will be described as an example.

The irradiation unit 18 includes a light source unit 18A and an imaging optical system 18D. The light source unit 18A includes a light source 18B and a collimator lens 18C. The light source 18B emits the line-shaped line illumination LA. For example, the light source 18B irradiates the line illumination LA by condensing light in only one direction using a cylindrical lens (not shown) arranged in the optical path. The light source 18B may also irradiate the sample T with the line illumination LA by irradiating light through a slit that is long in the X-axis direction.

The line illumination LA emitted from the light source 18B is made into substantially parallel light by the collimator lens 18C and then reaches the split mirror 20 via the imaging optical system 18D.

The term line shape indicates the shape of the illumination light with which the line illumination LA emitted from the light source 18B irradiates the sample T. Specifically, it indicates the shape of the cross section of the line illumination LA emitted from the light source 18B orthogonal to the optical axis A1. The optical axis A1 is the optical axis from the light source 18B to the split mirror 20, in other words, the optical axis of the collimator lens 18C and the imaging optical system 18D.

The light source 18B may be a light source that selectively irradiates light in a wavelength region in which the sample T fluoresces. The irradiation unit 18 may also be provided with a filter that selectively transmits light in that wavelength region. In the present embodiment, a mode in which the light source 18B irradiates the line illumination LA in a wavelength region in which the sample T fluoresces will be described as an example.

The split mirror 20 reflects the line illumination LA and transmits light in wavelength regions other than that of the line illumination LA. A half mirror or a dichroic mirror is selected as the split mirror 20 according to the measurement target; in the present embodiment, the split mirror 20 transmits the light emitted from the measurement target region 24. The line illumination LA is reflected by the split mirror 20 and reaches the objective lens 22.

The objective lens 22 is a focus lens that condenses the line illumination LA on the measurement target area 24. In other words, the objective lens 22 is a lens for irradiating the measurement target area 24 with the line illumination LA by condensing the line illumination LA on the measurement target area 24.
The objective lens 22 is provided with the second drive unit 46. The second drive unit 46 moves the objective lens 22 in the Z-axis direction. The Z-axis direction corresponds to the third direction and coincides with the optical axis direction. Meanwhile, the stage 26 on which the measurement target region 24 is placed is provided with the first drive unit 44. The stage 26 supports the sample T and can move in the second direction (Y-axis direction) perpendicular to the first direction. The first drive unit 44 moves the stage 26 in the Z-axis direction. As the stage 26 moves, the measurement target area 24 placed on the stage 26 moves toward or away from the objective lens 22. By adjusting the distance between the objective lens 22 and the measurement target area 24, the focus of the objective lens 22 is adjusted (details will be described later).

The first drive unit 44 also moves the stage 26 in at least one of the Y-axis direction and the X-axis direction. As the stage 26 moves, the measurement target area 24 placed on the stage 26 moves relative to the objective lens 22 in the Y-axis direction or the X-axis direction. The Y-axis direction and the X-axis direction are orthogonal to the Z-axis direction and to each other.

The measuring unit 14 may be configured to include at least one of the first drive unit 44 and the second drive unit 46, and is not limited to a configuration including both of them.

In the present embodiment, the Z-axis direction is assumed to coincide with the thickness direction of the measurement target region 24, that is, with the thickness direction of the sample T supported on the stage 26. The case where the Z-axis direction and the optical axis A2 of the objective lens 22 are parallel is also assumed. Further, the stage 26 is assumed to form a two-dimensional plane represented by the two axes (the X-axis direction and the Y-axis direction) orthogonal to the Z-axis direction, and a plane parallel to the two-dimensional plane of the stage 26 may be referred to as the XY plane.

Although it was explained above that the longitudinal direction of the line illumination LA when emitted from the light source 18B coincides with the X-axis direction, the longitudinal direction of the line illumination LA does not have to match the X-axis direction. For example, the longitudinal direction of the line illumination LA when emitted from the light source 18B may coincide with the Y-axis direction.

The measurement target area 24 includes the sample T. For example, the measurement target region 24 is a region between a pair of glass members that includes the sample T. The glass member is, for example, a slide glass, and is sometimes referred to as a cover glass. The glass member may be any member on which the sample T can be placed and is not limited to a member made of glass, as long as it transmits the line illumination LA and the light emitted from the sample T.

The sample T is the measurement target in the microscope system 1, that is, the object for which a captured image is obtained by the microscope system 1. In the present embodiment, the sample T will be described, as an example, as an object that fluoresces when irradiated with the line illumination LA. The sample T is, for example, a microorganism, a cell, a liposome, a red blood cell, a white blood cell, a platelet, or a vascular endothelial cell in blood, a microcell fragment of epithelial tissue, a pathological tissue section of various organs, or the like. The sample T may also be an object such as a cell labeled with a fluorescent dye that fluoresces when irradiated with the line illumination LA. The sample T is not limited to a substance that fluoresces when irradiated with the line illumination LA; for example, the sample T may emit light in a wavelength region other than fluorescence when irradiated with the line illumination LA, or may scatter or reflect the illumination light.

In the following, the fluorescence emitted by the sample T when irradiated with the line illumination LA may be simply referred to as light.
The sample T may be placed in the measurement target region 24 in a state of being enclosed by an encapsulant. As the encapsulant, a known material that transmits both the line illumination LA incident on the measurement target region 24 and the light emitted by the sample T may be used. The encapsulant may be either liquid or solid. The measurement target region 24 may also have a configuration in which the sample T is placed on a glass member, and is not limited to a configuration in which the sample T is placed between a pair of glass members.

In the present embodiment, a mode in which the sample T is placed in the measurement target area 24 in a state of being enclosed by an encapsulant will be described as an example.

The light emitted from the sample T under irradiation with the line illumination LA passes through the objective lens 22 and the split mirror 20 (a dichroic mirror in the present embodiment) and reaches the half mirror 28. The light emitted from the sample T is, for example, the fluorescence emitted by the sample T under irradiation with the line illumination LA; the fluorescence includes scattered fluorescent components. The half mirror 28 distributes a part of the light emitted from the sample T to the imaging optical unit 30 and the rest to the focus detection unit 36. The distribution ratio of light to the imaging optical unit 30 and the focus detection unit 36 by the half mirror 28 may be the same (for example, 50% and 50%) or different. For this reason, a dichroic mirror or a polarizing mirror may be used instead of the half mirror 28.

The light transmitted through the half mirror 28 reaches the imaging optical unit 30, and the light reflected by the half mirror 28 reaches the focus detection unit 36.

It is assumed that the line illumination LA created by the irradiation unit 18 and the measurement target area 24 are optically conjugate. It is further assumed that the line illumination LA, the measurement target area 24, the imaging unit 34 of the imaging optical unit 30, and the pupil split image imaging unit 42 of the focus detection unit 36 are optically conjugate.

The imaging optical unit 30 includes an imaging lens 32 and an imaging unit 34. The light transmitted through the half mirror 28 is focused on the imaging unit 34 by the imaging lens 32. The imaging unit 34 receives the light emitted from the sample T and outputs a captured image of the received light to the control device 16. The captured image is used for analysis of the type of the sample T and the like.

The imaging unit 34 may be, for example, a CMOS (Complementary Metal-Oxide Semiconductor) image sensor, a CCD (Charge Coupled Device) image sensor, or the like. The imaging unit 34 may include, for example, a plurality of light receiving units 33. In that case, each light receiving unit 33 may have, for example, a configuration in which photodiodes are arranged two-dimensionally or one-dimensionally.
The focus detection unit 36, on the other hand, is an optical unit for obtaining the phase difference of the image of the light emitted from the sample T irradiated with the line illumination LA. In the present embodiment, the case where the focus detection unit 36 is an optical unit that obtains the phase difference of images formed by dividing the pupil in two using two separator lenses will be described as an example.

The focus detection unit 36 includes a field lens 38, an aperture mask 39, a separator lens 40 consisting of a separator lens 40A and a separator lens 40B, and a pupil split image imaging unit 42.

The light emitted from the sample T under irradiation with the line illumination LA reaches the aperture mask 39 via the field lens 38. The aperture mask 39 has a pair of openings 39A and 39B at symmetric positions with the optical axis of the field lens 38 as a boundary. The size of the pair of openings 39A and 39B is adjusted so that the depth of field of the separator lens 40A and the separator lens 40B is larger than the depth of field of the objective lens 22.

The aperture mask 39 divides the light incident from the field lens 38 into two luminous fluxes by the pair of openings 39A and 39B. The separator lens 40A and the separator lens 40B each condense the luminous flux transmitted through the corresponding opening 39A or 39B of the aperture mask 39 onto the pupil split image imaging unit 42. The pupil split image imaging unit 42 therefore receives the two divided luminous fluxes.

The focus detection unit 36 may be configured without the aperture mask 39. In this case, the light that reaches the separator lens 40 via the field lens 38 is divided into two luminous fluxes by the separator lens 40A and the separator lens 40B and condensed onto the pupil split image imaging unit 42.

The pupil split image imaging unit 42 includes a plurality of light receiving units 41. For example, the pupil split image imaging unit 42 has a configuration in which a plurality of light receiving units 41 are two-dimensionally arranged along the light receiving surface. The light receiving surface of the light receiving units 41 is a two-dimensional plane orthogonal to the optical axis of the light incident on the pupil split image imaging unit 42 via the field lens 38, the aperture mask 39, and the separator lens 40. The pupil split image imaging unit 42 is, for example, a CMOS image sensor or a CCD image sensor.

As described above, the pupil split image imaging unit 42 receives the two luminous fluxes divided by the two pupils (the separator lens 40A and the separator lens 40B). By receiving the two luminous fluxes, the pupil split image imaging unit 42 captures an image composed of the pair of luminous flux images and outputs the image to the control device 16. Because the optical paths differ, a phase difference exists between the pair of images formed through the divided pupils (the separator lens 40A and the separator lens 40B). Hereinafter, this pair of images is referred to as a pupil split image.
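One common way to quantify such a phase difference is to measure the relative shift of the two sub-images along the YA axis, for example by cross-correlating their column-averaged intensity profiles. The NumPy sketch below shows that generic approach; it illustrates the phase-detection idea in general and is not taken from the disclosure, which instead describes evaluating quantities such as the line center of gravity distribution and the separation between the images 72A and 72B.

```python
import numpy as np

def relative_shift(image_a: np.ndarray, image_b: np.ndarray) -> int:
    """Estimate the YA-axis shift (in pixels) between two pupil sub-images.

    Each sub-image is reduced to a 1-D profile along YA by averaging over XA,
    and the lag that maximizes the cross-correlation of the two mean-subtracted
    profiles is returned.
    """
    profile_a = image_a.mean(axis=1) - image_a.mean()
    profile_b = image_b.mean(axis=1) - image_b.mean()
    correlation = np.correlate(profile_a, profile_b, mode="full")
    return int(np.argmax(correlation)) - (len(profile_b) - 1)
```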
FIG. 2 is a schematic view showing an example of the pupil split image 70 acquired by the pupil split image imaging unit 42. The pupil split image 70 includes the pupil split image 72, which is the pair of the image 72A and the image 72B.

The pupil split image 70 is an image corresponding to the position and brightness of the light received by each of the plurality of light receiving units 41 provided in the pupil split image imaging unit 42. In the following, the brightness of the light received by a light receiving unit may be referred to as a light intensity value.

It is assumed that a light receiving unit 41 is provided for each pixel or for each group of pixels. In this case, the pupil split image 70 is an image in which a light intensity value is defined for each pixel corresponding to each of the plurality of light receiving units 41, and the light intensity value corresponds to the pixel value.

The image 72A and the image 72B included in the pupil split image 70 are light receiving regions, that is, regions having a larger light intensity value than the other regions. As described above, the irradiation unit 18 irradiates the sample T with the line illumination LA, so the light emitted from the sample T irradiated with the line illumination LA is line-shaped. The images 72A and 72B constituting the pupil split image 72 are therefore line-shaped images that are long in a predetermined direction. This predetermined direction optically corresponds to the X-axis direction, which is the longitudinal direction of the line illumination LA.

Specifically, the vertical axis direction (YA-axis direction) of the pupil split image 70 shown in FIG. 2 optically corresponds to the Y-axis direction in the measurement target region 24, and the horizontal axis direction (XA-axis direction) optically corresponds to the X-axis direction, which is the longitudinal direction of the line illumination LA. The depth direction (ZA-axis direction) of the pupil split image 70 shown in FIG. 2 optically corresponds to the Z-axis direction, which is the thickness direction of the measurement target region 24. The focus detection unit 36 outputs the pupil split image 70 to the control device 16.

The focus detection unit 36 may be any optical unit for obtaining the movement of the pupil split image 72 (image 72A, image 72B) having a phase difference, and the pupil split image 72 having a phase difference is not limited to a two-eye pupil split image. The focus detection unit 36 may be, for example, an optical unit that obtains a pupil split image of three or more eyes by dividing the light emitted from the sample T into three or more luminous fluxes and receiving them.

Returning to FIG. 1, the description is continued. In the present embodiment, the measuring unit 14 irradiates the sample T with the line illumination LA while driving the stage 26, on which the sample T is placed, by the first drive unit 44 so that the measurement target region 24 moves relative to the line illumination LA in the Y-axis direction. That is, in the present embodiment, the Y-axis direction is the scanning direction of the measurement target region 24. The scanning method of the line illumination LA is not limited; examples include a method of scanning along the direction (Y-axis direction) orthogonal to the longitudinal direction (X-axis direction) of the line illumination LA, and a method of moving at least a part of the configuration of the measuring unit 14 other than the measurement target area 24 in the Y-axis direction with respect to the measurement target area 24. A deflection mirror may also be arranged between the split mirror 20 and the objective lens 22 so that the line illumination LA is scanned by the deflection mirror. By performing imaging with the imaging unit 34 while scanning the measurement target region 24 in the Y-axis direction, a captured image of the sample T is obtained.
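Because the sample is illuminated with a line and scanned in the Y-axis direction, a two-dimensional image is built up by stacking the line images captured at successive Y positions. A minimal sketch of that assembly step is shown below; the capture_line() callable is a hypothetical stand-in for reading one line image from the imaging unit 34 and is not an API defined by the disclosure.

```python
import numpy as np

def assemble_scan(capture_line, num_y_steps: int, line_length: int) -> np.ndarray:
    """Stack line images captured at successive Y positions into a 2-D image.

    capture_line(y_index) is assumed to return a 1-D array of line_length
    intensity values for the line illuminated at that Y position.
    """
    image = np.zeros((num_y_steps, line_length))
    for y_index in range(num_y_steps):
        image[y_index, :] = capture_line(y_index)   # one row of the final image per Y step
    return image

# Example with a dummy capture function
scan = assemble_scan(lambda y: np.full(512, float(y)), num_y_steps=100, line_length=512)
print(scan.shape)  # (100, 512)
```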
Next, the control device 16 will be described.
 The control device 16 is an example of an information processing device. Based on the phase difference, the control device 16 controls at least one of the amplitude of the vibration and the imaging conditions of the imaging unit 34 (described in detail later).
 The vibration means the vibration of the optical system including the objective lens 22, or of the stage 26, in the measurement unit 14. More specifically, the vibration means wobbling caused by vibrating the optical system or the stage 26.
 Wobbling means moving the focal point of the objective lens 22 along the optical axis A2 direction (third direction). In the present embodiment, the control device 16 acquires captured images by performing imaging while wobbling.
 The control device 16 is connected to each of the light source 18B, the imaging unit 34, the focus detection unit 36, the first drive unit 44, and the second drive unit 46 so that data or signals can be exchanged.
 FIG. 3 is a diagram showing an example of the functional configuration of the control device 16. For the sake of explanation, FIG. 3 also shows the light source 18B, the pupil-split image imaging unit 42, the imaging unit 34, the first drive unit 44, and the second drive unit 46.
The control device 16 includes a control unit 60, a storage unit 62, and a communication unit 64. The control unit 60, the storage unit 62, and the communication unit 64 are connected to one another so that data or signals can be exchanged. The storage unit 62 is a storage medium that stores various kinds of data, for example a hard disk drive or an external memory. The communication unit 64 communicates with an external device such as the server device 10 via the network N or the like.
 The control unit 60 includes a light source control unit 60A, a phase difference acquisition unit 60B, a calculation unit 60C, a focal position control unit 60G, a captured image acquisition unit 60H, and an output control unit 60I. The calculation unit 60C includes a first calculation unit 60D and a second calculation unit 60E.
 Some or all of the light source control unit 60A, the phase difference acquisition unit 60B, the calculation unit 60C, the first calculation unit 60D, the second calculation unit 60E, the focal position control unit 60G, the captured image acquisition unit 60H, and the output control unit 60I may be realized by causing a processing device such as a CPU (Central Processing Unit) to execute a program, that is, by software, by hardware such as an IC (Integrated Circuit), or by a combination of software and hardware.
 The light source control unit 60A controls the light source 18B so as to emit the line illumination LA. Under the control of the light source control unit 60A, the line illumination LA is emitted from the light source 18B.
The phase difference acquisition unit 60B acquires the phase difference of the images (image 72A and image 72B) of the light emitted from the sample T irradiated with the line illumination LA. Specifically, the phase difference acquisition unit 60B acquires the pupil-split image 70 from the pupil-split image imaging unit 42 and thereby acquires, as the phase difference, the pupil-split image 72 (image 72A and image 72B) of the light contained in the pupil-split image 70.
 The focal position control unit 60G drives at least one element of the optical system included in the measurement unit 14. It does so by drive-controlling at least one of the drive units that drive the respective elements of the optical system provided in the measurement unit 14.
 Specifically, the focal position control unit 60G vibrates (i.e., wobbles) the optical system including the objective lens 22 or the stage 26, thereby moving the focal point of the objective lens 22 in the Z-axis direction. In the present embodiment, a mode in which the focal position control unit 60G moves the focal point of the objective lens 22 in the Z-axis direction by controlling the second drive unit 46 is described as an example.
 The focal position control unit 60G also moves at least one of the optical system and the stage 26 to move the focal point of the objective lens 22 in the Y-axis direction or the X-axis direction. By moving the focal point in the Y-axis direction or the X-axis direction, the imaging range of the imaging unit 34 is moved in the Y-axis direction or the X-axis direction. In the present embodiment, the focal position control unit 60G moves the stage 26 in the Y-axis direction or the X-axis direction by controlling the first drive unit 44; a mode in which this control moves the focal point of the objective lens 22 in the Y-axis direction or the X-axis direction is described as an example.
Here, the sample T may be arranged at an angle with respect to the XY plane, which is the orthogonal plane perpendicular to the optical axis A2.
 FIG. 4 is a schematic diagram showing an example of a case where the sample T is arranged at an angle with respect to the XY plane. As shown in FIG. 4, the sample T may be tilted with respect to the XY plane within the measurement target region 24. In addition, the thickness of the sample T may not be constant. In such cases, the distance in the optical axis A2 direction between the objective lens 22 and each position of the sample T specified by the X-axis and Y-axis directions is not the same for every position; different distances may be included. A position specified by the X-axis and Y-axis directions is a position on the XY plane specified by two-dimensional coordinates in the X-axis and Y-axis directions.
 Returning to FIG. 3, the description continues. The control unit 60 of the present embodiment therefore includes the calculation unit 60C.
The calculation unit 60C calculates, based on the phase difference acquired by the phase difference acquisition unit 60B, at least one of the amplitude of the vibration of the optical system or the stage 26 and the imaging conditions of the imaging unit 34. The imaging conditions of the imaging unit 34 include the number of samplings in the Z-axis direction.
 The number of samplings in the Z-axis direction means the number of images captured per cycle of the vibration, that is, of the wobbling. In other words, the sampling number means the number of divisions when the measurement target region 24 is divided in the thickness direction (Z-axis direction) of the sample T included in the measurement target region 24 and imaged. The number of samplings in the Z-axis direction may be referred to as the Z stack number, and in the present embodiment the sampling number is sometimes referred to as the Z stack number.
 The amplitude of the vibration means the amplitude of the wobbling. Specifically, the amplitude of the vibration corresponds to the Z stack interval. In the present embodiment, the amplitude is sometimes referred to as the Z stack interval.
 The imaging conditions may also include the amount of movement of the focal point in the X-axis direction, which is the first direction, or the Y-axis direction, which is the second direction. This movement amount corresponds to the amount by which the imaging range of the imaging unit 34 moves per step in the Y-axis or X-axis direction.
 In the present embodiment, the calculation unit 60C calculates the line centroid distribution based on the phase difference acquired by the phase difference acquisition unit 60B, and calculates at least one of the Z stack interval, which is the amplitude of the vibration, and the Z stack number, which is an imaging condition of the imaging unit 34.
The line centroid distribution means the distribution, along the line direction, of the centroid of the light intensity distribution of each of the images 72A and 72B constituting the pupil-split image 72.
 FIG. 5 is an explanatory diagram of the line centroid distribution g. Of the images 72A and 72B, FIG. 5 shows the image 72A as an example. As shown in FIG. 5, the line centroid distribution g is the distribution, along the line direction, of the centroid of the light distribution in the YA-axis direction of the image 72A, which is a line of light elongated in the XA-axis direction. As described above, the image 72A is a line elongated in the XA-axis direction. Therefore, in the pupil-split image 70, the line centroid distribution g is represented by a line along the XA-axis direction, which is the longitudinal direction of the image 72A.
 The calculation unit 60C only needs to identify the line centroid distribution g of either one of the images 72A and 72B constituting the pupil-split image 72. The calculation unit 60C may set in advance which of the image 72A and the image 72B is to be used, and identify the line centroid distribution g of the preset image 72A or 72B.
FIG. 6 is a schematic diagram showing an example of the pupil-split image 70. Assume that the sample T is arranged at an angle with respect to the XY plane. In this case, the light intensity distributions relative to the line centroid distributions g (line centroid distribution ga and line centroid distribution gb) of the images 72A and 72B, which constitute the pupil-split image 72 of the sample T tilted with respect to the XY plane (see FIG. 4), vary in the YA-axis direction along the XA-axis direction, as shown in FIG. 6.
 Therefore, the first calculation unit 60D of the calculation unit 60C calculates the Z stack number based on the line centroid distribution of the image 72A or the image 72B. In the present embodiment, a mode in which the first calculation unit 60D calculates the Z stack number based on the light intensity distribution relative to the line centroid distribution ga, which is the line centroid distribution g of the image 72A, is described as an example. The first calculation unit 60D may instead calculate the Z stack number based on the light intensity distribution relative to the line centroid distribution gb, which is the line centroid distribution g of the image 72B.
 For example, the first calculation unit 60D calculates, as the Z stack number, the number obtained by adding 1 to the number of regions in the light intensity distribution of the image 72A that are divided by the line centroid distribution g and have peaks at mutually different distances from the line centroid distribution g.
 FIGS. 7A to 7C are explanatory diagrams of the calculation of the Z stack number. FIGS. 7A to 7C show pupil-split images 70C to 70E, respectively. The pupil-split images 70C to 70E are examples of the pupil-split image 70.
As shown in FIG. 7A, assume, for example, that the image 72A constituting the pupil-split image 72 is represented by a straight line that crosses the line centroid distribution g of the image 72A at one intersection I. In this case, the light intensity distribution of the image 72A is divided by the line centroid distribution g into two regions EA and EB. A peak P in the light intensity distribution means a point at which the distance from the line centroid distribution g, at each XA-axis position of the light intensity distribution, turns from increasing to decreasing or from decreasing to increasing in the YA-axis direction. As shown in FIG. 7A, for regions (regions EA and EB) in which the distance from the line centroid distribution g has no such turning point, the intersection of the image 72A with the edge of the pupil-split image 70C is taken as the peak P (peak P1, peak P2).
 Therefore, when the pupil-split image 70C shown in FIG. 7A is acquired, the first calculation unit 60D calculates "3", the number obtained by adding "1" to "2", which is the number of regions (regions EA and EB) having mutually different peaks P (peak P1 and peak P2), as the Z stack number.
 As shown in FIG. 7B, assume, for example, that the image 72A constituting the pupil-split image 72 is represented by a waveform that crosses the line centroid distribution g of the image 72A at three intersections I. In this case, the light intensity distribution of the image 72A is divided by the line centroid distribution g into four regions EC, ED, EE, and EF. The peak P3 of the region EC and the peak P5 of the region EF are assumed to be at the same distance from the line centroid distribution g, while the peak P3 of the region EC, the peak P4 of the region ED, and the peak P6 of the region EF are at mutually different distances from the line centroid distribution g.
 Therefore, when the pupil-split image 70D shown in FIG. 7B is acquired, the first calculation unit 60D calculates "4", the number obtained by adding "1" to "3", which is the number of regions (regions EC, ED, and EF) having mutually different peaks (peak P3, peak P4, and peak P6), as the Z stack number.
 As shown in FIG. 7C, assume, for example, that the image 72A constituting the pupil-split image 72 is represented by a waveform that crosses the line centroid distribution g of the image 72A at one intersection I and then returns to the line centroid distribution g. In this case, the light intensity distribution of the image 72A is divided by the line centroid distribution g into two regions EG and EH. The peak P7 of the region EG and the peak P8 of the region EH are at different distances from the line centroid distribution g.
 Therefore, when the pupil-split image 70E shown in FIG. 7C is acquired, the first calculation unit 60D calculates "3", the number obtained by adding "1" to "2", which is the number of regions (regions EG and EH) having mutually different peaks (peak P7 and peak P8), as the Z stack number.
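As an illustration only (the publication contains no source code), the following Python sketch counts the regions separated by the line centroid distribution g whose peak distances from g differ, and adds 1 to obtain the Z stack number. The signed deviation array and the tolerance tol are assumptions introduced here: deviation[x] stands for the distance of the image 72A from g at XA-axis position x, with edge values acting as peaks where no turning point exists, as described for FIG. 7A.

```python
import numpy as np

def z_stack_count(deviation, tol=1e-6):
    """Estimate the Z stack number from the signed deviation of the image
    profile from its line centroid distribution g (illustrative sketch).

    deviation : 1-D array, deviation[x] = profile(x) - g(x) along the XA axis.
    tol       : peaks whose distances from g differ by less than tol are
                treated as the same distance.
    """
    # Split the XA axis into regions separated by sign changes of the
    # deviation, i.e. by the intersections I with the centroid distribution g.
    signs = np.sign(deviation)
    boundaries = np.where(np.diff(signs) != 0)[0] + 1
    regions = np.split(deviation, boundaries)

    # In each region the peak is the largest distance from g; at the image
    # edges (no turning point) this is simply the end value of the region.
    peak_distances = [np.max(np.abs(r)) for r in regions
                      if r.size > 0 and np.max(np.abs(r)) > tol]

    # Count regions whose peak distances are mutually different, then add 1.
    distinct = []
    for d in peak_distances:
        if all(abs(d - e) > tol for e in distinct):
            distinct.append(d)
    return len(distinct) + 1
```

For a deviation resembling FIG. 7B, two of the four regional peaks coincide, so the function returns 3 + 1 = 4, matching the description above.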
Returning to FIG. 3, the description continues. The second calculation unit 60E of the calculation unit 60C calculates the Z stack interval.
 In the present embodiment, the second calculation unit 60E calculates the Z stack interval using the pupil-split image 72 included in the pupil-split image 70.
 The distance between the images 72A and 72B constituting the pupil-split image 72 is proportional to the focal distance between the objective lens 22 and the sample T. FIG. 8 is a graph 76 showing an example of the relationship between the focal distance and the distance between the images 72A and 72B. FIG. 9 is a schematic diagram of an example of a pupil-split image 70F. The pupil-split image 70F is an example of the pupil-split image 70.
 As shown in FIGS. 8 and 9, the distance L between the images 72A and 72B constituting the pupil-split image 72 included in the pupil-split image 70F is proportional to the focal distance between the objective lens 22 and the sample T. The distance L between the images 72A and 72B is represented by the distance between the line centroid distribution ga, which is the line centroid distribution g of the image 72A, and the line centroid distribution gb, which is the line centroid distribution g of the image 72B.
 Returning to FIG. 3, the description continues. The second calculation unit 60E therefore calculates the distance L between the images 72A and 72B constituting the pupil-split image 72.
 Specifically, the second calculation unit 60E calculates the distance L using the following equations (1) to (3).
[Equations (1) to (3) are reproduced only as images in the original publication (JPOXMLDOC01-appb-M000001 to M000003): equation (1) gives the range [Ytt, Ytb] of the light intensity distribution, equation (2) the line centroid distribution Ytc, and equation (3) the centroid separation Yphase. The variables are defined below.]
In equation (1), [Ytt, Ytb] means the range R of the light intensity distribution of the image 72A in the YA-axis direction, as shown in FIG. 5. In equations (1) to (3), Ytt means the upper end R1, and Ytb the lower end R2, of the light intensity distribution of the image 72A in the YA-axis direction. In equations (1) to (3), W means the pixel width of the pupil-split image 70. The pixel width means the width, in the measurement target region 24, of one pixel of the imaging range of the pupil-split image 70 in the X-axis or Y-axis direction. In the present embodiment, the description assumes that the widths corresponding to one pixel in the X-axis and Y-axis directions of the imaging range of the pupil-split image 70 are the same.
 In equations (1) to (3), Ablack is the black-level average pixel value of the region of the pupil-split image 70 other than the light-receiving region of the pupil-split image 72, and Adif is the noise level of the region of the pupil-split image 70 other than the light-receiving region of the pupil-split image 72.
 The second calculation unit 60E calculates the range R of the image 72A using equation (1) above. Similarly, the first calculation unit 60D calculates the range R of the image 72B in the same manner as for the image 72A.
 Next, the second calculation unit 60E calculates the line centroid distribution g of the image 72A using equation (2). In equation (2), Ytc denotes the line centroid distribution g of the image 72A. The second calculation unit 60E calculates the line centroid distribution g of the image 72B in the same manner as for the image 72A.
 Then, the second calculation unit 60E calculates the distance L between the line centroid distribution g of the image 72A and the line centroid distribution g of the image 72B using equation (3). In equation (3), Yphase denotes the distance L between the line centroid distribution g of the image 72A and that of the image 72B, Ybc denotes the line centroid distribution g of the image 72B, and Ytc denotes the line centroid distribution g of the image 72A.
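Because equations (1) to (3) appear only as images in this publication, their exact forms are not available here; the sketch below assumes conventional definitions consistent with the variable descriptions above. The range [Ytt, Ytb] is taken where the pixel values exceed the black level Ablack plus the noise level Adif, the centroids Ytc and Ybc are intensity-weighted mean rows within that range, and Yphase is their separation scaled by the pixel width W. All function names are illustrative, not the patent's implementation.

```python
import numpy as np

def column_range(col, a_black, a_dif):
    """Eq. (1) analogue (assumed form): rows [Ytt, Ytb] of one XA-axis column
    where the pupil-split image exceeds the black level plus the noise level."""
    rows = np.where(col > a_black + a_dif)[0]
    if rows.size == 0:
        return None
    return rows[0], rows[-1]          # (Ytt, Ytb)

def line_centroid(col, a_black, a_dif):
    """Eq. (2) analogue (assumed form): intensity-weighted centroid row Ytc
    of one XA-axis column, computed over the range [Ytt, Ytb]."""
    r = column_range(col, a_black, a_dif)
    if r is None:
        return np.nan
    ytt, ytb = r
    y = np.arange(ytt, ytb + 1)
    w = col[ytt:ytb + 1] - a_black    # background-subtracted intensities
    return float(np.sum(y * w) / np.sum(w))

def centroid_separation(img_a, img_b, a_black, a_dif, pixel_width):
    """Eq. (3) analogue (assumed form): per-column separation Yphase between
    the line centroid distributions of image 72A and image 72B, scaled by the
    pixel width W of the measurement target region."""
    ytc = np.array([line_centroid(img_a[:, x], a_black, a_dif)
                    for x in range(img_a.shape[1])])
    ybc = np.array([line_centroid(img_b[:, x], a_black, a_dif)
                    for x in range(img_b.shape[1])])
    return (ybc - ytc) * pixel_width  # distance L per XA-axis position
```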
In the present embodiment, the second calculation unit 60E calculates the distance between the line centroid distribution g of the image 72A and the line centroid distribution g of the image 72B for each divided region obtained by dividing the pupil-split image 72 along its longitudinal direction (XA-axis direction).
 FIG. 10 is an explanatory diagram of the calculation of the distance L for each divided region. FIG. 10 shows a pupil-split image 70G as an example. The pupil-split image 70G is an example of the pupil-split image 70.
 For example, assume that the first calculation unit 60D has calculated "3" as the Z stack number. In this case, the second calculation unit 60E divides the pupil-split image 72 included in the pupil-split image 70 into three divided regions E (divided regions E1 to E3) along the XA-axis direction, which is the longitudinal direction of the pupil-split image 72. Then, for each divided region E, the second calculation unit 60E calculates the distance L between the images 72A and 72B constituting the pupil-split image 72, using equations (1) to (3) above.
 Therefore, in the example shown in FIG. 10, the second calculation unit 60E calculates, as the distance L, the distance L1 between the line centroid distributions ga1 and gb1 for the divided region E1, the distance L2 between the line centroid distributions ga2 and gb2 for the divided region E2, and the distance L3 between the line centroid distributions ga3 and gb3 for the divided region E3.
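A minimal continuation of the previous sketch splits the XA axis into the divided regions E1 to E3 and averages the per-column separation in each region. Splitting into equal parts (here, the Z stack number of parts) is an assumption; the patent only states that the division is along the longitudinal direction.

```python
import numpy as np

def per_region_separation(y_phase, n_regions):
    """Average centroid separation L for each divided region E1..En, obtained
    by splitting the per-column separation into n_regions equal parts."""
    return [float(np.nanmean(chunk))
            for chunk in np.array_split(y_phase, n_regions)]

# Example: with a Z stack number of 3, this yields [L1, L2, L3]
# corresponding to the divided regions E1 to E3 of FIG. 10.
```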
Returning to FIG. 3, the description continues. The second calculation unit 60E calculates the Z stack interval based on the distance L.
 For example, the second calculation unit 60E calculates, as the Z stack interval, the focus adjustment amount corresponding to the difference between the distance L and a reference distance. The reference distance is the distance L between the images 72A and 72B when the focal point of the objective lens 22 is focused on the sample T, and may be measured in advance using a known contrast method or the like. The second calculation unit 60E calculates in advance relationship information representing the relationship between the difference between the distance L and the reference distance and the focus adjustment amount required to bring the sample T into focus when the images 72A and 72B at that distance L are obtained, and stores it in the storage unit 62. The second calculation unit 60E then calculates the focus adjustment amount using the difference between the distance L and the reference distance together with this relationship information, and may use the calculated focus adjustment amount as the Z stack interval.
 When the second calculation unit 60E has calculated the distance L for each divided region E, it may calculate the Z stack interval for each divided region E based on the respective distance L of that divided region E.
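One simple way to realize the stored relationship information is a pre-measured calibration table interpolated linearly; this representation is an assumption, since the patent does not specify how the relationship is stored. calib_diffs must be sorted in increasing order, and L_ref stands for the reference distance measured in advance.

```python
import numpy as np

def z_stack_interval(L, L_ref, calib_diffs, calib_adjustments):
    """Convert the difference between the measured separation L and the
    reference separation L_ref (objective focused on the sample T) into a
    focus adjustment amount, used here as the Z stack interval.

    calib_diffs, calib_adjustments : pre-measured relationship information
    mapping (L - L_ref) to the focus adjustment that restores focus.
    """
    return float(np.interp(L - L_ref, calib_diffs, calib_adjustments))

def per_region_intervals(L_per_region, L_ref, calib_diffs, calib_adjustments):
    """One Z stack interval per divided region, e.g. [dZ1, dZ2, dZ3]."""
    return [z_stack_interval(L, L_ref, calib_diffs, calib_adjustments)
            for L in L_per_region]
```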
The focal position control unit 60G drives at least one of the objective lens 22 and the optical system including it, according to the Z stack number and the Z stack interval calculated by the calculation unit 60C.
 For example, the focal position control unit 60G drive-controls the second drive unit 46, which drives the objective lens 22, a focus lens that focuses the line illumination LA onto the measurement target region 24. Through this drive control, the focal position control unit 60G moves the focal point of the objective lens 22 in the Z-axis direction.
 For example, by drive-controlling the second drive unit 46, the focal position control unit 60G moves the Z-axis position of the objective lens 22 stepwise toward the stage 26 by the number of steps represented by the Z stack number. The focal position control unit 60G may also move the Z-axis position of the objective lens 22 by drive-controlling at least one of the second drive unit 46 and the first drive unit 44.
 The Z stack interval per step may be a predetermined amount, or the focal position control unit 60G may use the Z stack interval calculated by the calculation unit 60C as the Z stack interval per step. In the present embodiment, a mode in which the focal position control unit 60G uses the Z stack interval calculated by the calculation unit 60C is described as an example.
 When the calculation unit 60C has calculated a Z stack interval for each divided region E, the focal position control unit 60G may use the Z stack interval corresponding to each divided region E as the Z stack interval per step.
The captured image acquisition unit 60H acquires, from the imaging unit 34, a captured image of the light emitted from the sample T at each wobbling step performed by the focal position control unit 60G.
 The processing by the focal position control unit 60G and the captured image acquisition unit 60H will now be described in detail.
 FIG. 11 is an explanatory diagram of an example of wobbling according to the present embodiment. FIG. 11 shows an example of an XZ cross section of the measurement target region 24 placed on the stage 26, for the case where the Z stack number is "3". FIG. 11 also shows a mode in which mutually different Z stack intervals (ΔZ1 to ΔZ3) are calculated as the Z stack intervals corresponding to the three divided regions E.
For example, the focal position control unit 60G moves the focal position FP in the thickness direction (Z-axis direction) of the measurement target region 24 three times, the Z stack number (see focal positions FP1 to FP3). Then, each time it has moved the focal position FP the Z stack number of times, i.e., three times, the focal position control unit 60G moves the imaging range S in the X-axis direction, which is the longitudinal direction of the irradiation region of the line illumination LA in the measurement target region 24.
 The imaging range S means the imaging range of the imaging unit 34. The focal position control unit 60G moves the imaging range S in the X-axis direction; for example, by drive-controlling the first drive unit 44, it moves the imaging range S of the imaging unit 34 along the X-axis direction by the width W of the imaging range S. The width W has the same meaning as the pixel width described above.
 In this case, the focal position control unit 60G moves the focal position FP in the thickness direction of the measurement target region 24 the Z stack number of times, i.e., three times. By moving the focal position FP, the focal position control unit 60G moves the imaging range S of the imaging unit 34 in the thickness direction of the measurement target region 24. Then, each time it has moved the focal position FP three times in the thickness direction, the focal position control unit 60G moves the imaging range S in the X-axis direction by the width W of the imaging range S. The focal position control unit 60G repeatedly executes this series of processes.
 For the divided region E1, the focal position control unit 60G moves the focal position FP of the objective lens 22 by the Z stack interval ΔZ1 at a time; for the divided region E2, by the Z stack interval ΔZ2 at a time; and for the divided region E3, by the Z stack interval ΔZ3 at a time.
 Through these processes, the focal position control unit 60G moves the imaging range S of the imaging unit 34 in order to the positions of the imaging ranges S1 to S9 in FIG. 11 while moving the focal position FP to each of the focal positions FP1 to FP9.
 Then, the captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. The captured image acquisition unit 60H therefore acquires an in-focus captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
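The scan pattern of FIG. 11 can be summarized as the nested loop below: at each lateral position the focal position is stepped the Z stack number of times with the interval of the divided region containing that position, an image is acquired at every step, and the imaging range then advances by its width W. This is an illustrative sketch, not the patent's implementation; the callbacks stand in for the first drive unit 44, the second drive unit 46, and the imaging unit 34, and returning to the starting depth before the next column is one plausible realization. It also assumes one divided region per lateral position, as in FIG. 11 where three columns correspond to the three divided regions.

```python
def scan_block_pattern(n_z, intervals_per_region, width_w,
                       move_focus_z, move_range_x, acquire_image):
    """Block-wise wobbling (FIG. 11): N Z-steps at each lateral position,
    then advance the imaging range by its width W.

    intervals_per_region : Z stack interval for each lateral position / region
                           (e.g. [dZ1, dZ2, dZ3]).
    move_focus_z(dz), move_range_x(dx), acquire_image() : hardware callbacks
    standing in for the drive units and the imaging unit.
    """
    images = []
    for dz in intervals_per_region:          # one column per divided region
        for _ in range(n_z):                 # Z stack number of focal steps
            move_focus_z(dz)                 # e.g. second drive unit 46
            images.append(acquire_image())   # imaging unit 34
        move_focus_z(-n_z * dz)              # return to the starting depth
        move_range_x(width_w)                # e.g. first drive unit 44
    return images
```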
The focal position control unit 60G may instead move the imaging range S along the X-axis direction by a movement amount (W/N), obtained by dividing the width W of the imaging range S by the Z stack number, each time it moves the focal position FP once in the thickness direction (Z-axis direction) of the measurement target region 24. Here, W denotes the width W described above, and N denotes the Z stack number. In this case, the first calculation unit 60D may calculate the movement amount of the focal point, that is, the movement amount of the imaging range S, by dividing the width W by the Z stack number calculated by the above process.
 FIG. 12A is an explanatory diagram of an example of wobbling according to the present embodiment. FIG. 12A shows an example of an XZ cross section of the measurement target region 24 placed on the stage 26, and shows a mode in which the Z stack number is "3" and the imaging range S is moved in the X-axis direction by one third of its width W at a time.
 In this case, the focal position control unit 60G moves the imaging range S by the distance W/N along the X-axis direction each time it moves the focal position FP once in the thickness direction. The focal position control unit 60G executes this series of processes the Z stack number of times, i.e., three times, from the surface of the measurement target region 24 toward the stage 26. Then, each time it has moved the focal position FP three times in the thickness direction, the focal position control unit 60G returns the imaging range S to the surface side of the measurement target region 24 and moves the imaging range S by the distance W/N along the X-axis direction. The focal position control unit 60G repeatedly executes this series of processes.
 Through these processes, the focal position control unit 60G moves the imaging range S of the imaging unit 34 in order to the positions of the imaging ranges S1 to S9 in FIG. 12A while moving the focal position FP to each of the focal positions FP1 to FP9.
 Then, the captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. The captured image acquisition unit 60H therefore acquires an in-focus captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
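For comparison, the FIG. 12A variant interleaves the lateral motion with every focal step: after each single Z-step the imaging range advances by W/N, and after N Z-steps the focus returns to the surface side. The compact sketch below, using the same placeholder callbacks as above, shows one plausible ordering of these steps consistent with the description; it is not the patent's implementation.

```python
def scan_staircase_pattern(n_z, dz, n_columns, width_w,
                           move_focus_z, move_range_x, acquire_image):
    """Staircase wobbling (FIG. 12A): one Z-step, acquire, lateral step of
    W/N; after N Z-steps the focus returns to the surface side."""
    images = []
    step_x = width_w / n_z                  # W / N
    for _ in range(n_columns):
        for _ in range(n_z):
            move_focus_z(dz)                # one step toward the stage 26
            images.append(acquire_image())
            move_range_x(step_x)            # advance by W / N
        move_focus_z(-n_z * dz)             # back to the surface side
    return images
```

The variants of FIGS. 12B, 13A, and 13B differ only in the order of the lateral and depth loops or in the lateral axis (X or Y), so the same structure applies with the axes or loop nesting exchanged.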
The focal position control unit 60G may also move the focal position FP in the thickness direction (Z-axis direction) of the measurement target region 24 each time it has moved the focal point of the objective lens 22 from one end to the other end along the X-axis direction, which is the longitudinal direction of the irradiation region of the line illumination LA in the measurement target region 24.
 FIG. 12B is an explanatory diagram of an example of wobbling according to the present embodiment. FIG. 12B shows an example of an XZ cross section of the measurement target region 24 placed on the stage 26, and shows a mode in which the Z stack number is "3" and the imaging range S is moved in the X-axis direction by one third of its width W at a time. W and N are as described above.
 In this case, the focal position control unit 60G moves the imaging range S by the width W at a time along the X-axis direction, which is the longitudinal direction of the irradiation region of the line illumination LA in the measurement target region 24. Then, each time it has moved the imaging range S from one end to the other end of the measurement target region 24 in the X-axis direction, the focal position control unit 60G moves the focal position FP in the thickness direction of the measurement target region 24 by the Z stack interval and moves the imaging range S by the distance W/N in the X-axis direction; this series of processes is executed the Z stack number of times, i.e., three times.
 In this case, the focal position control unit 60G moves the imaging range S of the imaging unit 34 in order to the positions of the imaging ranges S1 to S9 in FIG. 12B while moving the focal position FP to each of the focal positions FP1 to FP9.
 Then, the captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. The captured image acquisition unit 60H therefore acquires an in-focus captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
The focal position control unit 60G may also move the imaging range S, each time it moves the focal position FP once in the thickness direction (Z-axis direction) of the measurement target region 24, by a distance (W/N), obtained by dividing the width W of the imaging range S by the Z stack number, along the Y-axis direction intersecting the X-axis direction. The width W in this case may be the width of the imaging range S in the Y-axis direction. As described above, the Y-axis direction corresponds to the scanning direction of the line illumination LA.
 FIG. 13A is an explanatory diagram of an example of wobbling according to the present embodiment. FIG. 13A shows an example of a YZ cross section of the measurement target region 24 placed on the stage 26, and shows a mode in which the Z stack number is "3" and the imaging range S is moved in the Y-axis direction by one third of its width W at a time.
 In this case, the focal position control unit 60G moves the imaging range S by the distance W/N along the Y-axis direction each time it moves the focal position FP once in the thickness direction. The focal position control unit 60G executes this series of processes the Z stack number of times, i.e., three times, from the surface of the measurement target region 24 toward the stage 26. Then, each time it has moved the focal position FP three times in the thickness direction, the focal position control unit 60G returns the imaging range S to the surface side of the measurement target region 24 and moves the imaging range S by the distance W/N along the Y-axis direction. The focal position control unit 60G repeatedly executes this series of processes.
 Through these processes, the focal position control unit 60G moves the imaging range S of the imaging unit 34 in order to the positions of the imaging ranges S1 to S9 in FIG. 13A while moving the focal position FP to each of the focal positions FP1 to FP9.
 Then, the captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. The captured image acquisition unit 60H therefore acquires an in-focus captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
The focal position control unit 60G may also move the focal position FP in the thickness direction (Z-axis direction) of the measurement target region 24 each time it has moved the focal position of the objective lens 22 from one end to the other end along the Y-axis direction, which is the scanning direction of the line illumination LA in the measurement target region 24.
 FIG. 13B is an explanatory diagram of an example of wobbling according to the present embodiment. FIG. 13B shows an example of a YZ cross section of the measurement target region 24 placed on the stage 26, and shows a mode in which the Z stack number is "3" and the imaging range S is moved in the Y-axis direction by one third of its width W at a time. W and N are as described above.
 In this case, the focal position control unit 60G moves the imaging range S by the width W at a time along the Y-axis direction, which is the scanning direction of the line illumination LA in the measurement target region 24. Then, each time it has moved the imaging range S from one end to the other end of the measurement target region 24 in the Y-axis direction, the focal position control unit 60G moves the focal position FP in the thickness direction of the measurement target region 24 by the Z stack interval and moves the imaging range S by the distance W/N in the Y-axis direction; this series of processes is executed the Z stack number of times, i.e., three times.
 In this case, the focal position control unit 60G moves the imaging range S of the imaging unit 34 in order to the positions of the imaging ranges S1 to S9 in FIG. 13B while moving the focal position FP to each of the focal positions FP1 to FP9.
 Then, the captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. The captured image acquisition unit 60H therefore acquires an in-focus captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
Returning to FIG. 3, the description continues. Through the above processing, the captured image acquisition unit 60H acquires, for one measurement target region 24, captured images of a plurality of imaging ranges S (imaging ranges S1 to S9) having different focal positions FP.
 The imaging ranges S1 to S9 are imaging ranges S that do not overlap one another in at least part of at least one of the thickness direction (Z-axis direction) of the measurement target region 24 and the direction along the XY plane. In addition, each of the captured images of the imaging ranges S1 to S9 is an image captured with the respective imaging range S as the focal position FP. Therefore, as shown in FIGS. 11 to 13B, even when the sample T is arranged at an angle with respect to the XY plane, which is the orthogonal plane perpendicular to the optical axis A2, the captured image acquisition unit 60H can obtain captured images focused on each position of the sample T.
 The output control unit 60I outputs the captured images acquired by the captured image acquisition unit 60H to an external device such as the server device 10 via the communication unit 64. The output control unit 60I may store the captured images acquired by the captured image acquisition unit 60H in the storage unit 62, or may output the captured images to a display connected to the control unit 60.
 The output control unit 60I may also analyze the captured images acquired by the captured image acquisition unit 60H by a known method to determine, for example, the type of the sample T, and output the analysis result to the server device 10 or the like.
Next, an example of the flow of information processing executed by the control device 16 of the present embodiment will be described.
 FIG. 14 is a flowchart showing an example of the flow of information processing executed by the control device 16.
 The light source control unit 60A controls the light source 18B so as to emit the line illumination LA (step S100). Under the control of step S100, the line illumination LA is emitted from the light source 18B and irradiates the sample T.
 The phase difference acquisition unit 60B acquires, from the pupil-split image imaging unit 42, the pupil-split image 72 of the light emitted from the sample T irradiated with the line illumination LA (step S102). Through the processing of step S102, the phase difference acquisition unit 60B acquires the images 72A and 72B, which constitute the pupil-split image 72, as the phase difference.
The first calculation unit 60D calculates the Z stack number based on the line centroid distribution g of the pupil-split image 72 acquired in step S102 (step S104).
 The second calculation unit 60E calculates the distance L between the images 72A and 72B constituting the pupil-split image 72 acquired in step S102 (step S106).
 The second calculation unit 60E calculates the Z stack interval based on the distance L calculated in step S106 (step S108).
 The focal position control unit 60G executes focal position control (step S110). Specifically, the focal position control unit 60G performs drive control so as to move the focal position FP once in the thickness direction (Z-axis direction) of the measurement target region 24 or in a direction along the XY plane. When moving the focal position FP in the thickness direction of the measurement target region 24, the focal position control unit 60G moves the focal position FP in the thickness direction by the Z stack interval calculated in step S108. As described with reference to FIGS. 11 to 13B, the focal position control unit 60G moves the focal position FP (imaging range S) once in the thickness direction (Z-axis direction) of the measurement target region 24 or in a direction along the XY plane.
 For example, using the Z stack intervals corresponding to the respective divided regions E calculated in step S108, the focal position control unit 60G may move the focal position FP of the objective lens 22 in the Z-axis direction by the Z stack interval of the corresponding divided region E. As for the Z stack number, the focal position control unit 60G may use the Z stack number calculated in step S104 for all the divided regions E of the measurement target region 24.
 The captured image acquisition unit 60H acquires, from the imaging unit 34, a captured image of the light emitted from the sample T each time the focal position control unit 60G moves the focal position FP (imaging range S) (step S112). The output control unit 60I stores the captured image acquired in step S112 in the storage unit 62 (step S114).
 The control unit 60 determines whether or not to end the processing (step S116). For example, the control unit 60 determines whether captured images of the imaging ranges S1 to S9 at the focal positions FP1 to FP9 described with reference to FIGS. 11 to 13B have been acquired for the measurement target region 24. When the control unit 60 determines that all of these captured images have been acquired, it makes an affirmative determination (step S116: Yes) and ends this routine. When the control unit 60 makes a negative determination (step S116: No), it returns to step S110.
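Steps S104 to S108 of FIG. 14 can be tied together with the hypothetical helpers sketched earlier, as shown below. Treating the deviation of each column centroid from the mean centroid as the distance from the line centroid distribution g is an assumed concretization of FIGS. 7A to 7C, and all names are illustrative rather than the patent's implementation; steps S110 to S116 then repeat the move-acquire-store cycle with the resulting values, as in the scan sketch following the description of FIG. 11.

```python
import numpy as np

def imaging_conditions_from_pupil_split(img_a, img_b, a_black, a_dif,
                                        pixel_w, L_ref, calib_diffs,
                                        calib_adjustments):
    """Steps S104-S108: derive the Z stack number and the per-region Z stack
    intervals from one pupil-split acquisition (step S102), reusing the
    hypothetical helpers defined in the earlier sketches."""
    # Per-column centroid of image 72A; its deviation from the mean centroid
    # plays the role of the distance from the line centroid distribution g.
    ytc = np.array([line_centroid(img_a[:, x], a_black, a_dif)
                    for x in range(img_a.shape[1])])
    deviation = ytc - np.nanmean(ytc)
    deviation = deviation[~np.isnan(deviation)]
    n_z = z_stack_count(deviation)                        # S104

    y_phase = centroid_separation(img_a, img_b, a_black, a_dif, pixel_w)
    L_per_region = per_region_separation(y_phase, n_z)    # S106
    intervals = per_region_intervals(L_per_region, L_ref,
                                     calib_diffs, calib_adjustments)  # S108
    return n_z, intervals
```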
As described above, the microscope system 1 of the present embodiment includes the imaging unit 34, the irradiation unit 18, the stage 26, the phase difference acquisition unit 60B, the focal position control unit 60G, and the calculation unit 60C.
 The irradiation unit 18 emits the line illumination LA parallel to the first direction (X-axis direction). The stage 26 supports the sample T and is movable in the second direction (Y-axis direction) perpendicular to the first direction. The phase difference acquisition unit 60B acquires the phase difference of the images (image 72A and image 72B) of the light emitted from the sample T when it is irradiated with the line illumination LA. The focal position control unit 60G vibrates the optical system including the objective lens 22 or the stage 26 to move the focal point of the objective lens 22 in the third direction (Z-axis direction), which is perpendicular to both the first direction and the second direction. The calculation unit 60C calculates at least one of the amplitude of the vibration and the imaging conditions of the imaging unit 34 based on the phase difference.
 Here, the sample T may be arranged at an angle with respect to the XY plane, which is the orthogonal plane perpendicular to the optical axis A2, and the thickness of the sample T may not be constant.
 As conventional techniques, for example, a technique of identifying the optimum focal length from a captured image captured by an imager arranged at an angle has been disclosed, as has a technique of obtaining Z-stack images while maintaining a constant offset from the in-focus depth for prefocus.
 しかし、従来技術では、検体Tの各位置に合焦させた撮像が行われておらず、すなわち、検体TのX軸方向およびY軸方向における各位置に合焦させた撮像が行われておらず、検体Tの各位置に焦点を合わせた撮像画像を得る事は困難であった。また、従来では、可視光全体または青色でフォーカシングをして赤外光で観察する等の広帯域の顕微鏡撮影において、フォーカシングに用いる光の波長と観察波長が異なる場合に、レンズの色収差により撮影波長で焦点が合っていない場合があった。 However, in the prior art, imaging focused on each position of the sample T has not been performed, that is, imaging focused on each position of the sample T in the X-axis direction and the Y-axis direction has been performed. However, it was difficult to obtain an image captured by focusing on each position of the sample T. In addition, conventionally, in wideband microscopic photography such as focusing on the entire visible light or blue and observing with infrared light, when the wavelength of the light used for focusing and the observation wavelength are different, the shooting wavelength is determined by the chromatic aberration of the lens. Sometimes it was out of focus.
In contrast, in the microscope system 1 of the present embodiment, at least one of the amplitude of the vibration of the optical system including the objective lens 22 or of the stage 26, and the imaging conditions of the imaging unit 34, is calculated based on the phase difference of the image of the light emitted from the sample T irradiated with the line illumination LA.
Therefore, in the microscope system 1 of the present embodiment, a captured image focused at each position of the sample T in the thickness direction can be obtained.
Accordingly, the microscope system 1 of the present embodiment can obtain a highly accurate captured image focused on each position of the sample T.
Here, when the wavelength of the light used for focusing differs from the wavelength of the light used for observing the measurement target region 24, the chromatic aberration of the lenses included in the measurement unit 14 may yield a captured image that is out of focus at the time of observation.
In contrast, in the microscope system 1 of the present embodiment, the imaging conditions including the Z-stack count, which is the sampling number in the third direction (the Z-axis direction), are calculated based on the line centroid distribution g of the pupil-split image 72. The microscope system 1 then moves the focal position FP according to the calculated imaging conditions and acquires a captured image of the light emitted from the sample T each time the focal position FP is moved. Focusing and imaging can therefore be performed alternately, and a captured image in which the chromatic aberration of the lens is canceled can be obtained.
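As a minimal illustration, the Z-stack count can be derived from the pupil-split image 72 and its line centroid distribution g, after which the focal position FP is stepped through that many Z positions, refocusing before each capture. In the Python sketch below, segment_by_centroid is a hypothetical callable standing in for the segmentation of the pupil-split image by the line centroid distribution; the only relationship taken from the text is that the Z-stack count equals the number of peak regions plus one.

    # Hypothetical sketch: deriving the Z-stack count (sampling number in the
    # third, Z-axis direction) from the pupil-split image.
    def z_stack_count(pupil_split_image, segment_by_centroid):
        """Return the number of peak regions separated by the line centroid
        distribution, plus one, as the Z-direction sampling number."""
        regions = segment_by_centroid(pupil_split_image)  # regions whose peaks lie at
        # different distances from the line centroid distribution
        return len(regions) + 1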
Therefore, in addition to the above effects, the microscope system 1 of the present embodiment can obtain a sharp captured image in which chromatic aberration is suppressed.
Further, the microscope system 1 of the present embodiment uses the line illumination LA as the illumination applied during wobbling. The irradiation time of light on the measurement target region 24 can therefore be shortened compared with the case where line-shaped illumination is not used. Accordingly, in addition to the above effects, the microscope system 1 of the present embodiment can suppress fading of the sample T contained in the measurement target region 24.
(Modification example)
In the above embodiment, the focal position control unit 60G moves the focal position FP by drive-controlling at least one of the first drive unit 44 and the second drive unit 46, which move the relative position between the objective lens 22 and the measurement target region 24.
However, the focal position control unit 60G may instead move the focal position FP by drive-controlling a member that changes the optical path length between the objective lens 22 and the measurement target region 24.
FIG. 15 is a schematic view showing an example of a microscope system 1B according to this modification.
The microscope system 1B has the same configuration as the microscope system 1 described above, except that it further includes a third drive unit 48 and a variable member 50.
The variable member 50 is a member for changing the optical path length between the measurement target region 24 and the objective lens 22, which focuses the line illumination LA on the measurement target region 24. The variable member 50 is arranged, for example, between the objective lens 22 and the measurement target region 24.
FIGS. 16A to 16C are schematic views showing an example of the variable member 50. FIG. 16A is an example of a side view of the variable member 50, FIG. 16B is an example of a front view, and FIG. 16C is an example of a perspective view.
The variable member 50 is a disk-shaped member rotatably supported about the rotation center C as a rotation axis. The variable member 50 only needs to be made of a material that transmits the line illumination LA; it is made of, for example, optical glass.
The variable member 50 includes, along its circumferential direction, a plurality of regions 52 (regions 52A to 52C) having mutually different thicknesses. The rotation center C of the variable member 50 is connected to the third drive unit 48. When driven by the third drive unit 48, the variable member 50 rotates in the direction of arrow Q about the rotation center C.
The rotation of the variable member 50 changes which region 52 (regions 52A to 52C) the line illumination LA passes through, thereby changing the optical path length.
The third drive unit 48 is electrically connected to the control device 16. The focal position control unit 60G of the control device 16 rotates the variable member 50 by drive-controlling the third drive unit 48, thereby changing the optical path length between the objective lens 22 and the measurement target region 24. Through this drive control, the focal position control unit 60G may move the focal position FP.
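For intuition about how switching between regions of different thickness moves the focal position, the paraxial focal shift introduced by a plane-parallel transparent region of thickness t and refractive index n is approximately t * (1 - 1/n). The Python sketch below uses this textbook relation with made-up thickness values and a typical optical-glass index; none of these numbers are parameters of the disclosed variable member 50.

    # Paraxial focal shift from a plane-parallel glass region:
    # dz = t * (1 - 1/n), with t the region thickness and n the refractive index.
    def focal_shift_um(thickness_um, refractive_index=1.52):
        return thickness_um * (1.0 - 1.0 / refractive_index)

    # Example (made-up thicknesses for three regions such as 52A to 52C):
    # 100, 300 and 500 micrometres give shifts of about 34, 103 and 171 micrometres.
    for t in (100.0, 300.0, 500.0):
        print(f"t = {t:5.1f} um -> focal shift = {focal_shift_um(t):6.1f} um")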
Although the embodiment and modification of the present disclosure have been described above, the processes according to them may be implemented in various forms other than those described. The embodiment and modification described above can also be combined as appropriate as long as the processing contents do not contradict each other.
The effects described in the present specification are merely examples and are not limiting; other effects may also be obtained.
(Hardware configuration)
FIG. 17 is a hardware configuration diagram showing an example of a computer 1000 that realizes the functions of the control device 16 according to the above embodiment and modification.
The computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600. The units of the computer 1000 are connected by a bus 1050.
The CPU 1100 operates based on programs stored in the ROM 1300 or the HDD 1400 and controls each unit. For example, the CPU 1100 loads a program stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes processing corresponding to the program.
The ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 starts up, programs that depend on the hardware of the computer 1000, and the like.
The HDD 1400 is a computer-readable recording medium that non-temporarily records programs executed by the CPU 1100 and the data used by such programs. Specifically, the HDD 1400 is a recording medium that records the focus adjustment program according to the present disclosure, which is an example of the program data 1450.
The communication interface 1500 is an interface for connecting the computer 1000 to an external network 1550 (for example, the Internet). For example, the CPU 1100 receives data from other devices and transmits data generated by the CPU 1100 to other devices via the communication interface 1500.
The input/output interface 1600 is an interface for connecting an input/output device 1650 to the computer 1000. For example, the CPU 1100 receives data from an input device such as a keyboard or a mouse via the input/output interface 1600. The CPU 1100 also transmits data to an output device such as a display, a speaker, or a printer via the input/output interface 1600. The input/output interface 1600 may also function as a media interface that reads a program or the like recorded on a predetermined recording medium. The media are, for example, optical recording media such as a DVD (Digital Versatile Disc) or a PD (Phase change rewritable Disk), magneto-optical recording media such as an MO (Magneto-Optical disk), tape media, magnetic recording media, or semiconductor memories.
For example, when the computer 1000 functions as the control device 16 according to the above embodiment, the CPU 1100 of the computer 1000 executes the program loaded into the RAM 1200 to realize the functions of the light source control unit 60A, the phase difference acquisition unit 60B, the calculation unit 60C, the first calculation unit 60D, the second calculation unit 60E, the focal position control unit 60G, the captured image acquisition unit 60H, the output control unit 60I, and the like. The HDD 1400 also stores the program and data according to the present disclosure. The CPU 1100 reads the program data 1450 from the HDD 1400 and executes it, but as another example, these programs may be acquired from another device via the external network 1550.
The present technology can also have the following configurations.
(1)
A microscope system comprising:
an imaging unit;
an irradiation unit that emits line illumination parallel to a first direction;
a stage that supports a sample and is movable in a second direction perpendicular to the first direction;
a phase difference acquisition unit that acquires a phase difference of an image of light emitted from the sample irradiated with the line illumination;
a focal position control unit that vibrates an optical system including an objective lens or the stage to move a focal point of the objective lens in a third direction perpendicular to both the first direction and the second direction; and
a calculation unit that calculates, based on the phase difference, at least one of an amplitude of the vibration and an imaging condition of the imaging unit.
(2)
The microscope system according to (1), wherein
the calculation unit calculates a line centroid distribution based on the phase difference and calculates at least one of the amplitude of the vibration and the imaging condition of the imaging unit.
(3)
The microscope system according to (1) or (2), wherein
the phase difference acquisition unit acquires, as the phase difference, a pupil-split image of the image of the light emitted from the sample.
(4)
The microscope system according to (2), wherein
the calculation unit calculates, based on the phase difference, the imaging condition including a sampling number in the third direction.
(5)
The microscope system according to (4), wherein
the calculation unit derives, as the sampling number, the number obtained by adding one to the number of regions of the image that are separated by the line centroid distribution and have peaks at mutually different distances from the line centroid distribution.
(6)
The microscope system according to (4), wherein
the calculation unit calculates the amplitude for each divided region based on the phase difference of each divided region obtained by dividing the pupil-split image of the image of the light emitted from the sample into the sampling number of regions along the longitudinal direction.
(7)
The microscope system according to any one of (1) to (6), wherein
the focal position control unit moves at least one of the optical system including the objective lens and the stage to move the focal point of the objective lens in the first direction or the second direction, and
the calculation unit calculates, based on the phase difference, the imaging condition including an amount of movement of the focal point in the first direction or the second direction.
(8)
The microscope system according to any one of (1) to (7), wherein
the focal position control unit moves the focal point of the objective lens in the first direction or the second direction each time it moves at least one of the optical system and the stage to move the focal point of the objective lens in the third direction.
(9)
The microscope system according to any one of (1) to (7), wherein
the focal position control unit moves the focal point of the objective lens in the third direction each time it moves at least one of the optical system and the stage to move the focal point of the objective lens in the first direction or the second direction from one end side of the sample to the other end side.
(10)
The microscope system according to any one of (1) to (7), wherein
the focal position control unit moves the focal point of the objective lens by drive-controlling a variable member that changes the optical path length between the objective lens and the sample.
(11)
An imaging method in which a computer that controls a measurement unit including an imaging unit, an irradiation unit that emits line illumination parallel to a first direction, and a stage that supports a sample and is movable in a second direction perpendicular to the first direction performs:
acquiring a phase difference of an image of light emitted from the sample irradiated with the line illumination;
vibrating an optical system including an objective lens or the stage to move a focal point of the objective lens in a third direction perpendicular to both the first direction and the second direction; and
calculating, based on the phase difference, at least one of an amplitude of the vibration and an imaging condition of the imaging unit.
(12)
An imaging device comprising a measurement unit and software used to control operation of the measurement unit, wherein
the software is installed in the imaging device,
the measurement unit includes an imaging unit, an irradiation unit that emits line illumination parallel to a first direction, and a stage that supports a sample and is movable in a second direction perpendicular to the first direction, and
the software acquires a phase difference of an image of light emitted from the sample irradiated with the line illumination, vibrates an optical system including an objective lens or the stage to move a focal point of the objective lens in a third direction perpendicular to both the first direction and the second direction, and calculates, based on the phase difference, at least one of an amplitude of the vibration and an imaging condition of the imaging unit.
1 Microscope system
12 Imaging device
18 Irradiation unit
22 Objective lens
24 Measurement target region
34 Imaging unit
42 Pupil-split image imaging unit
44 First drive unit
46 Second drive unit
48 Third drive unit
50 Variable member
60B Phase difference acquisition unit
60C Calculation unit
60D First calculation unit
60E Second calculation unit
60G Focal position control unit
60H Captured image acquisition unit
T Sample

Claims (12)

1. A microscope system comprising:
    an imaging unit;
    an irradiation unit that emits line illumination parallel to a first direction;
    a stage that supports a sample and is movable in a second direction perpendicular to the first direction;
    a phase difference acquisition unit that acquires a phase difference of an image of light emitted from the sample irradiated with the line illumination;
    a focal position control unit that vibrates an optical system including an objective lens or the stage to move a focal point of the objective lens in a third direction perpendicular to both the first direction and the second direction; and
    a calculation unit that calculates, based on the phase difference, at least one of an amplitude of the vibration and an imaging condition of the imaging unit.
2. The microscope system according to claim 1, wherein
    the calculation unit calculates a line centroid distribution based on the phase difference and calculates at least one of the amplitude of the vibration and the imaging condition of the imaging unit.
3. The microscope system according to claim 1 or claim 2, wherein
    the phase difference acquisition unit acquires, as the phase difference, a pupil-split image of the image of the light emitted from the sample.
4. The microscope system according to claim 2, wherein
    the calculation unit calculates, based on the phase difference, the imaging condition including a sampling number in the third direction.
5. The microscope system according to claim 4, wherein
    the calculation unit derives, as the sampling number, the number obtained by adding one to the number of regions of the image that are separated by the line centroid distribution and have peaks at mutually different distances from the line centroid distribution.
6. The microscope system according to claim 4, wherein
    the calculation unit calculates the amplitude for each divided region based on the phase difference of each divided region obtained by dividing the pupil-split image of the image of the light emitted from the sample into the sampling number of regions along the longitudinal direction.
7. The microscope system according to any one of claims 1 to 6, wherein
    the focal position control unit moves at least one of the optical system including the objective lens and the stage to move the focal point of the objective lens in the first direction or the second direction, and
    the calculation unit calculates, based on the phase difference, the imaging condition including an amount of movement of the focal point in the first direction or the second direction.
8. The microscope system according to any one of claims 1 to 7, wherein
    the focal position control unit moves the focal point of the objective lens in the first direction or the second direction each time it moves at least one of the optical system and the stage to move the focal point of the objective lens in the third direction.
9. The microscope system according to any one of claims 1 to 7, wherein
    the focal position control unit moves the focal point of the objective lens in the third direction each time it moves at least one of the optical system and the stage to move the focal point of the objective lens in the first direction or the second direction from one end side of the sample to the other end side.
10. The microscope system according to any one of claims 1 to 7, wherein
    the focal position control unit moves the focal point of the objective lens by drive-controlling a variable member that changes the optical path length between the objective lens and the sample.
11. An imaging method in which a computer that controls a measurement unit including an imaging unit, an irradiation unit that emits line illumination parallel to a first direction, and a stage that supports a sample and is movable in a second direction perpendicular to the first direction performs:
    acquiring a phase difference of an image of light emitted from the sample irradiated with the line illumination;
    vibrating an optical system including an objective lens or the stage to move a focal point of the objective lens in a third direction perpendicular to both the first direction and the second direction; and
    calculating, based on the phase difference, at least one of an amplitude of the vibration and an imaging condition of the imaging unit.
12. An imaging device comprising a measurement unit and software used to control operation of the measurement unit, wherein
    the software is installed in the imaging device,
    the measurement unit includes an imaging unit, an irradiation unit that emits line illumination parallel to a first direction, and a stage that supports a sample and is movable in a second direction perpendicular to the first direction, and
    the software acquires a phase difference of an image of light emitted from the sample irradiated with the line illumination, vibrates an optical system including an objective lens or the stage to move a focal point of the objective lens in a third direction perpendicular to both the first direction and the second direction, and calculates, based on the phase difference, at least one of an amplitude of the vibration and an imaging condition of the imaging unit.
PCT/JP2021/010981 2020-03-27 2021-03-18 Microscope system, imaging method, and imaging device WO2021193325A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-057778 2020-03-27
JP2020057778 2020-03-27

Publications (1)

Publication Number Publication Date
WO2021193325A1 true WO2021193325A1 (en) 2021-09-30

Family

ID=77891710

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/010981 WO2021193325A1 (en) 2020-03-27 2021-03-18 Microscope system, imaging method, and imaging device

Country Status (1)

Country Link
WO (1) WO2021193325A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002323316A (en) * 2001-04-25 2002-11-08 Nikon Corp Focal point position detector
JP2006011446A (en) * 2004-06-24 2006-01-12 Fujifilm Electronic Imaging Ltd Method and apparatus for forming multiple focus stack image
US20070069106A1 (en) * 2005-06-22 2007-03-29 Tripath Imaging, Inc. Apparatus and Method for Rapid Microscope Image Focusing
JP2010191000A (en) * 2009-02-16 2010-09-02 Nikon Corp Microscope and image acquiring method for microscope
JP2016051167A (en) * 2014-08-29 2016-04-11 キヤノン株式会社 Image acquisition device and control method therefor
WO2019230878A1 (en) * 2018-05-30 2019-12-05 ソニー株式会社 Fluorescence observation device and fluorescence observation method


Similar Documents

Publication Publication Date Title
US9832365B2 (en) Autofocus based on differential measurements
JP3090679B2 (en) Method and apparatus for measuring a plurality of optical properties of a biological specimen
US8027548B2 (en) Microscope system
US10365468B2 (en) Autofocus imaging
US7232980B2 (en) Microscope system
JP5577885B2 (en) Microscope and focusing method
US8760756B2 (en) Automated scanning cytometry using chromatic aberration for multiplanar image acquisition
JP6408543B2 (en) Imaging a light field using a scanning optical unit
JP6662529B2 (en) System and method for continuous asynchronous autofocus of optics
CN112703440B (en) Microscope system
JPS6394156A (en) Method and instrument for analyzing cell in fluid
JPH08320285A (en) Particle analyzing device
WO2021167044A1 (en) Microscope system, imaging method, and imaging device
EP3660572A1 (en) Sample observation device and sample observation method
WO2021193325A1 (en) Microscope system, imaging method, and imaging device
WO2016024429A1 (en) Fine particle detection device
WO2021193177A1 (en) Microscope system, imaging method, and imaging device
US20220146805A1 (en) Microscope system, focus adjustment program, and focus adjustment system
JP2001304844A (en) Method of setting position of object to be measured in measurement of layer thickness by x ray fluorescence
JPH08145872A (en) Method and device for analyzing flow type particle picture
EP2390706A1 (en) Autofocus imaging.
JP7427291B2 (en) imaging flow cytometer
JP2004177732A (en) Optical measuring device
JP2003042720A (en) Height measuring apparatus
JPH0552897B2 (en)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21775192

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21775192

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP