WO2021193325A1 - Microscope system, imaging method, and imaging device - Google Patents


Info

Publication number
WO2021193325A1
WO2021193325A1 (PCT application PCT/JP2021/010981)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
image
imaging
phase difference
objective lens
Prior art date
Application number
PCT/JP2021/010981
Other languages
English (en)
Japanese (ja)
Inventor
和田 成司
植田 充紀
健 松井
寛和 辰田
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation (ソニーグループ株式会社)
Publication of WO2021193325A1


Classifications

    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 21/00: Microscopes
    • G02B 7/00: Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B 7/28: Systems for automatic generation of focusing signals
    • G02B 7/34: Systems for automatic generation of focusing signals using different areas in a pupil plane
    • G03: PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B: APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 13/00: Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B 13/32: Means for focusing
    • G03B 13/34: Power focusing
    • G03B 13/36: Autofocus systems
    • G03B 15/00: Special procedures for taking photographs; Apparatus therefor
    • G03B 15/02: Illuminating scene

Definitions

  • the present disclosure relates to a microscope system, an imaging method, and an imaging device.
  • a technique for obtaining an image of a sample by irradiating the sample with light and receiving the light emitted from the sample is disclosed.
  • a technique for obtaining a captured image focused on a sample by combining with an autofocus system is disclosed.
  • a technique of identifying an optimum focal length from a captured image captured by an imager arranged at an angle, a technique of obtaining a Z-stack image for prefocus, and the like are disclosed.
  • The sample to be measured may be tilted with respect to the plane orthogonal to the optical axis.
  • the present disclosure proposes a microscope system, an imaging method, and an imaging device capable of obtaining a highly accurate image captured at each position of the sample.
  • One form of the microscope system includes an imaging unit; an irradiation unit that emits line illumination parallel to a first direction; a stage that supports the sample and can move in a second direction perpendicular to the first direction; a phase difference acquisition unit that acquires the phase difference of the image of the light emitted from the sample irradiated with the line illumination; a focal position control unit that vibrates an optical system including the objective lens, or the stage, to move the focal point of the objective lens in a third direction perpendicular to each of the first direction and the second direction; and a calculation unit that calculates, based on the phase difference, at least one of the amplitude of the vibration and the imaging conditions of the imaging unit.
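The claimed units amount to a single control cycle: phase difference in, vibration amplitude and imaging conditions out, then a wobbling capture. A schematic sketch only; the function names and the two lambda rules are hypothetical stand-ins, not the patent's implementation:

```python
# Schematic data flow of the claimed system (all names hypothetical).

def autofocus_cycle(phase_difference, calc_amplitude, calc_n_stacks,
                    wobble_and_capture):
    """One cycle: phase difference -> amplitude and sampling count -> capture."""
    amplitude = calc_amplitude(phase_difference)   # Z-stack spacing
    n_stacks = calc_n_stacks(phase_difference)     # samplings per cycle
    return wobble_and_capture(amplitude, n_stacks)

# Minimal stand-ins to illustrate the flow; the rules are invented:
images = autofocus_cycle(
    phase_difference=0.8,
    calc_amplitude=lambda pd: 2.0 * abs(pd),            # hypothetical rule
    calc_n_stacks=lambda pd: max(2, int(abs(pd) * 5)),  # hypothetical rule
    wobble_and_capture=lambda amp, n: [("z", amp * k / (n - 1))
                                       for k in range(n)],
)
```

The point is only the data flow: the measured phase difference drives both the wobbling amplitude and the number of images captured per vibration cycle.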
  • FIG. 1 is a schematic view showing an example of the microscope system 1 of the present embodiment.
  • the microscope system 1 is a system that irradiates the sample T with line illumination LA and receives the light emitted from the sample T. Details of the line illumination LA and the sample T will be described later.
  • the microscope system 1 includes an imaging device 12.
  • the image pickup device 12 is communicably connected to the server device 10 via, for example, a wireless communication network such as network N or a wired communication network.
  • the server device 10 may be a computer.
  • The direction along which the objective lens 22 and the sample T (described later) approach and recede from each other is referred to as the Z-axis direction.
  • the Z-axis direction will be described as being coincident with the thickness direction of the sample T.
  • the case where the Z-axis direction and the optical axis A2 of the objective lens 22 are parallel will be described.
  • the stage 26 described later is assumed to be a two-dimensional plane represented by two axes (X-axis direction and Y-axis direction) orthogonal to the Z-axis direction.
  • a plane parallel to the two-dimensional plane of the stage 26 may be referred to as an XY plane. Details of each of these parts will be described later.
  • the imaging device 12 includes a measuring unit 14 and a control device 16.
  • the measuring unit 14 and the control device 16 are connected so as to be able to exchange data or signals.
  • the measuring unit 14 has an optical mechanism for measuring the light emitted from the measurement target region 24.
  • the measuring unit 14 is applied to, for example, an optical microscope.
  • The measurement unit 14 includes an irradiation unit 18, a split mirror 20, an objective lens 22, a stage 26, a half mirror 28, an imaging optical unit 30, a focus detection unit 36, a first drive unit 44, and a second drive unit 46.
  • the irradiation unit 18 irradiates the line illumination LA parallel to the first direction.
  • Line illumination LA is light having a long line shape in the first direction.
  • The line illumination LA is light whose luminous flux, in the two-dimensional plane orthogonal to the optical axis, is several times (for example, 100 times or more) longer in the first direction than in the direction orthogonal to the first direction.
  • The case where the first direction, which is the longitudinal direction of the line illumination LA, coincides with the X-axis direction in FIG. 1 will be described as an example.
  • the irradiation unit 18 includes a light source unit 18A and an imaging optical system 18D.
  • the light source unit 18A includes a light source 18B and a collimator lens 18C.
  • the light source 18B is a light source that emits a line-shaped line illumination LA.
  • the light source 18B irradiates the line illumination LA by condensing light in only one direction using a cylindrical lens (not shown) arranged in the optical path.
  • the light source 18B may irradiate the sample T with the line illumination LA by irradiating the sample T with light through a slit long in the X-axis direction.
  • the line illumination LA emitted from the light source 18B reaches the split mirror 20 via the imaging optical system 18D after being made into substantially parallel light by the collimator lens 18C.
  • The line shape refers to the shape of the illumination light with which the line illumination LA emitted from the light source 18B irradiates the sample T. Specifically, it is the shape of the cross section of the line illumination LA, orthogonal to the optical axis A1.
  • the optical axis A1 indicates an optical axis from the light source 18B to the split mirror 20. In other words, the optical axis A1 is the optical axis of the collimator lens 18C and the imaging optical system 18D.
  • The light source 18B may be one that selectively emits light in a wavelength region in which the sample T fluoresces. Further, the irradiation unit 18 may be provided with a filter that selectively transmits light in that wavelength region. In the present embodiment, the mode in which the light source 18B emits the line illumination LA in the wavelength region in which the sample T fluoresces will be described as an example.
  • the split mirror 20 reflects the line illumination LA and transmits light in a wavelength region other than the line illumination LA.
  • A half mirror or a dichroic mirror is selected as the split mirror 20 according to the measurement target. In the present embodiment, the split mirror 20 transmits the light emitted from the measurement target region 24.
  • the line illumination LA is reflected by the split mirror 20 and reaches the objective lens 22.
  • the objective lens 22 is a focus lens that focuses the line illumination LA on the measurement target area 24.
  • the objective lens 22 is a lens for irradiating the measurement target area 24 with the line illumination LA by condensing the line illumination LA on the measurement target area 24.
  • the objective lens 22 is provided with a second drive unit 46.
  • the second drive unit 46 moves the objective lens 22 in the Z-axis direction.
  • the Z-axis direction corresponds to the third direction and coincides with the optical axis direction.
  • the first drive unit 44 is provided on the stage 26 on which the measurement target region 24 is placed.
  • the stage 26 supports the sample T and can move in the second direction (Y-axis direction) perpendicular to the first direction.
  • the first drive unit 44 moves the stage 26 in the Z-axis direction.
  • the measurement target area 24 placed on the stage 26 moves in the direction toward or away from the objective lens 22.
  • the focus of the objective lens 22 is adjusted (details will be described later).
  • the first drive unit 44 moves the stage 26 in at least one of the Y-axis direction and the X-axis direction. With the movement of the stage 26, the measurement target area 24 placed on the stage 26 is moved relative to the objective lens 22 in the Y-axis direction or the X-axis direction.
  • the Y-axis direction and the X-axis direction are directions orthogonal to the Z-axis direction.
  • the Y-axis direction and the X-axis direction are directions orthogonal to each other.
  • the measuring unit 14 may be configured to include at least one of the first driving unit 44 and the second driving unit 46, and is not limited to the configuration including both of them.
  • the Z-axis direction will be described as being coincident with the thickness direction of the measurement target region 24. That is, the Z-axis direction will be described as being coincident with the thickness direction of the sample T supported on the stage 26. Further, in the present embodiment, the case where the Z-axis direction and the optical axis A2 of the objective lens 22 are parallel will be described. Further, it is assumed that the stage 26 is a two-dimensional plane represented by two axes (X-axis direction and Y-axis direction) orthogonal to the Z-axis direction. A plane parallel to the two-dimensional plane of the stage 26 may be referred to as an XY plane.
  • the longitudinal direction of the line illumination LA when emitted from the light source 18B coincides with the X-axis direction.
  • the longitudinal direction of the line illumination LA may not match the X-axis direction.
  • the longitudinal direction of the line illumination LA when emitted from the light source 18B may be a direction that coincides with the Y-axis direction.
  • the measurement target area 24 includes the sample T.
  • the measurement target region 24 is a region between a pair of glass members and is a region including a sample T.
  • the glass member is, for example, a slide glass.
  • the glass member is sometimes referred to as a cover glass.
  • the glass member may be any member as long as the sample T can be placed on the glass member, and the glass member is not limited to the member made of glass.
  • the glass member may be a member that transmits light emitted from the line illumination LA and the sample T.
  • Specimen T is a measurement target in the microscope system 1. That is, the sample T is an object for which an image captured by the microscope system 1 is obtained. In this embodiment, the sample T will be described as an example in which fluorescence is emitted by irradiation with line illumination LA.
  • Specimen T is, for example, microorganisms, cells, liposomes, erythrocytes, leukocytes, platelets, vascular endothelial cells in blood, microcell fragments of epithelial tissue, and histopathological tissue sections of various organs.
  • the sample T may be an object such as a cell labeled with a fluorescent dye that fluoresces when irradiated with line illumination LA.
  • the sample T is not limited to a substance that fluoresces when irradiated with line illumination LA.
  • the sample T may be one that emits light in a wavelength region other than fluorescence by irradiation with line illumination LA, or may be one that scatters and reflects the illumination light.
  • the fluorescence emitted by the sample T due to the irradiation of the line illumination LA may be simply referred to as light.
  • the sample T in a state of being enclosed by the encapsulant may be placed in the measurement target area 24.
  • As the encapsulating material, a known material that transmits both the line illumination LA incident on the measurement target region 24 and the light emitted by the sample T may be used.
  • the encapsulant may be either liquid or solid.
  • the measurement target region 24 may have a configuration in which the sample T is placed on the glass member, and is not limited to the configuration in which the sample T is placed between the pair of glass members. Further, the sample T may be placed in the measurement target area 24 in a state of being sealed by the sealing material.
  • The light emitted from the sample T by the line illumination LA passes through the objective lens 22 and the split mirror 20 (in the present embodiment, a dichroic mirror) and reaches the half mirror 28.
  • the light emitted from the sample T is, for example, the fluorescence emitted by the sample T by irradiation with the line illumination LA. Fluorescence includes scattered fluorescent components.
  • the half mirror 28 distributes a part of the light emitted from the sample T to the imaging optical unit 30, and distributes the rest to the focus detection unit 36.
  • The distribution ratio of light to the imaging optical unit 30 and the focus detection unit 36 by the half mirror 28 may be equal (for example, 50% each) or may be different. A dichroic mirror or a polarizing mirror may therefore be used instead of the half mirror 28.
  • the light transmitted through the half mirror 28 reaches the imaging optical unit 30.
  • the light reflected by the half mirror 28 reaches the focus detection unit 36.
  • The line illumination LA created by the irradiation unit 18 and the measurement target area 24 are optically conjugate. Further, it is assumed that the line illumination LA, the measurement target area 24, the imaging unit 34 of the imaging optical unit 30, and the pupil split image imaging unit 42 of the focus detection unit 36 are optically conjugate.
  • the imaging optical unit 30 includes an imaging lens 32 and an imaging unit 34.
  • the light transmitted through the half mirror 28 is focused on the imaging unit 34 by the imaging lens 32.
  • the imaging unit 34 receives the light emitted from the sample T and outputs the captured image of the received light to the control device 16.
  • the captured image is used for analysis of the type of sample T and the like.
  • the imaging unit 34 may be, for example, a CMOS (Complementary Metal-Oxide Semiconductor) image sensor, a CCD (Charge Coupled Device) image sensor, or the like.
  • the imaging unit 34 may include, for example, a plurality of light receiving units 33. In that case, each light receiving unit 33 may have, for example, a configuration in which photodiodes are arranged two-dimensionally or one-dimensionally.
  • the focus detection unit 36 is an optical unit for obtaining the phase difference of the image of the light emitted from the sample T irradiated with the line illumination LA.
  • A case where the focus detection unit 36 is an optical unit that obtains the phase difference of an image whose pupil is divided into two using two separator lenses will be described as an example.
  • the focus detection unit 36 includes a field lens 38, an aperture mask 39, a separator lens 40 including a separator lens 40A and a separator lens 40B, and a pupil split image imaging unit 42.
  • the separator lens 40 includes a separator lens 40A and a separator lens 40B.
  • The aperture mask 39 has a pair of openings 39A and 39B at positions symmetric about the optical axis of the field lens 38. The size of the pair of openings 39A and 39B is adjusted so that the depth of field of the separator lens 40A and the separator lens 40B is wider than the depth of field of the objective lens 22.
  • the diaphragm mask 39 divides the light incident from the field lens 38 into two luminous fluxes by a pair of openings 39A and 39B.
  • the separator lens 40A and the separator lens 40B collect the light flux transmitted through each of the openings 39A and 39B of the aperture mask 39 on the pupil split image imaging unit 42, respectively. Therefore, the pupil-divided image imaging unit 42 receives the two divided light fluxes.
  • the focus detection unit 36 may not be provided with the aperture mask 39.
  • the light that reaches the separator lens 40 via the field lens 38 is divided into two light fluxes by the separator lens 40A and the separator lens 40B, and is focused on the pupil-divided image imaging unit 42.
  • the pupil split image imaging unit 42 includes a plurality of light receiving units 41.
  • the pupil division image imaging unit 42 has a configuration in which a plurality of light receiving units 41 are two-dimensionally arranged along the light receiving surface.
  • the light receiving surface of the light receiving unit 41 is a two-dimensional plane orthogonal to the optical axis of the light incident on the pupil split image imaging unit 42 via the field lens 38, the aperture mask 39, and the separator lens 40.
  • the pupil division image imaging unit 42 is, for example, a CMOS image sensor, a CCD image sensor, or the like.
  • the pupil-divided image imaging unit 42 receives two light fluxes divided by the two pupils (separator lens 40A and separator lens 40B). By receiving two light fluxes, the pupil division image imaging unit 42 captures an image composed of a set of light flux images and outputs the image to the control device 16. There is a phase difference between the pair of images formed through the divided pupil (separator lens 40A) and the pupil (separator lens 40B) because the optical paths are different.
  • this set of images will be referred to as a pupil split image.
  • FIG. 2 is a schematic view showing an example of the pupil division image 70 acquired by the pupil division image imaging unit 42.
  • The pupil split image 70 includes the pupil split image 72, which is a set of an image 72A and an image 72B.
  • the pupil-divided image 70 is an image corresponding to the position and brightness of the light received by each of the plurality of light-receiving units 41 provided in the pupil-divided image imaging unit 42.
  • the brightness of the light received by the light receiving unit may be described as a light intensity value.
  • the light receiving unit 41 is provided for each one or a plurality of pixels.
  • the pupil division image 70 is an image in which the light intensity value is defined for each pixel corresponding to each of the plurality of light receiving units 41.
  • the light intensity value corresponds to the pixel value.
  • The image 72A and the image 72B included in the pupil split image 70 are light receiving regions, that is, regions having larger light intensity values than the other regions. As described above, the irradiation unit 18 irradiates the sample T with the line illumination LA, so the light emitted from the sample T is line-shaped. The images 72A and 72B constituting the pupil split image 72 are therefore line-shaped images long in a predetermined direction, which optically corresponds to the X-axis direction, the longitudinal direction of the line illumination LA.
  • the vertical axis direction (YA axis direction) of the pupil split image 70 shown in FIG. 2 optically corresponds to the Y axis direction in the measurement target region 24.
  • the horizontal axis direction (XA axis direction) of the pupil division image 70 shown in FIG. 2 optically corresponds to the X axis direction in the measurement target region 24.
  • the X-axis direction is the longitudinal direction of the line illumination LA.
  • the depth direction (ZA-axis direction) of the pupil-divided image 70 shown in FIG. 2 optically corresponds to the Z-axis direction, which is the thickness direction of the measurement target region 24.
  • the focus detection unit 36 outputs the pupil division image 70 to the control device 16.
  • The focus detection unit 36 may be any optical unit for obtaining the displacement of the pupil split images 72 (image 72A, image 72B) having a phase difference, and is not limited to one that obtains a binocular pupil split image.
  • the focus detection unit 36 may be, for example, an optical unit that obtains a pupil division image of three or more eyes that divides the light emitted from the sample T into three or more light fluxes and receives the light.
  • The measuring unit 14 drives the stage 26, on which the sample T is placed, by the first driving unit 44, and irradiates the sample T with the line illumination LA while moving the measurement target region 24 relative to the line illumination LA in the Y-axis direction. That is, in the present embodiment, the Y-axis direction is the scanning direction of the measurement target region 24.
  • the scanning method of the line illumination LA is not limited.
  • Any scanning method that scans along the direction (Y-axis direction) orthogonal to the longitudinal direction (X-axis direction) of the line illumination LA may be used; for example, at least a part of the configuration of the measurement unit 14 other than the measurement target area 24 may be moved relative to the measurement target area 24.
  • a deflection mirror may be arranged between the split mirror 20 and the objective lens 22, and the line illumination LA may be scanned by the deflection mirror.
  • control device 16 will be described.
  • the control device 16 is an example of an information processing device.
  • the control device 16 controls at least one of the vibration amplitude and the imaging condition by the imaging unit 34 based on the phase difference (details will be described later).
  • the vibration means the vibration of the optical system including the objective lens 22 or the stage 26 in the measuring unit 14.
  • the vibration means, in detail, wobbling due to the vibration of the optical system or the stage 26.
  • Wobbling means moving the focal point of the objective lens 22 along the optical axis A2 direction (third direction).
  • The control device 16 acquires a captured image by performing imaging while wobbling.
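Wobbling, as described above, sweeps the focal point of the objective lens 22 along the optical axis while images are captured. A minimal sketch, assuming a sinusoidal vibration with evenly spaced capture times per cycle (the waveform and the sampling scheme are assumptions; the patent only states that the optical system or stage is vibrated):

```python
import math

def wobble_positions(center_z, amplitude, n_samples):
    """Focal positions sampled over one sinusoidal wobbling cycle.
    center_z: nominal focal position; amplitude: peak Z excursion;
    n_samples: images captured per vibration cycle (hypothetical model)."""
    return [center_z + amplitude * math.sin(2 * math.pi * k / n_samples)
            for k in range(n_samples)]

# Four captures per cycle, at phases 0, 90, 180, and 270 degrees:
zs = wobble_positions(center_z=10.0, amplitude=0.5, n_samples=4)
```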
  • control device 16 and each of the light source 18B, the image pickup unit 34, the focus detection unit 36, the first drive unit 44, and the second drive unit 46 are connected so as to be able to exchange data or signals.
  • FIG. 3 is a diagram showing an example of the functional configuration of the control device 16. Note that FIG. 3 also shows a light source 18B, a pupil split image imaging unit 42, an imaging unit 34, a first drive unit 44, and a second drive unit 46 for the sake of explanation.
  • the control device 16 includes a control unit 60, a storage unit 62, and a communication unit 64.
  • the control unit 60, the storage unit 62, and the communication unit 64 are connected to each other so that data or signals can be exchanged.
  • the storage unit 62 is a storage medium for storing various types of data.
  • the storage unit 62 is, for example, a hard disk drive or an external memory.
  • the communication unit 64 communicates with an external device such as the server device 10 via the network N or the like.
  • the control unit 60 includes a light source control unit 60A, a phase difference acquisition unit 60B, a calculation unit 60C, a focus position control unit 60G, an image capture image acquisition unit 60H, and an output control unit 60I.
  • the calculation unit 60C includes a first calculation unit 60D and a second calculation unit 60E.
  • Each of the above units may be realized by software, that is, by a program executed by a processing device such as a CPU (Central Processing Unit); by hardware such as an IC (Integrated Circuit); or by a combination of software and hardware.
  • the light source control unit 60A controls the light source 18B so as to irradiate the line illumination LA.
  • the line illumination LA is irradiated from the light source 18B under the control of the light source control unit 60A.
  • The phase difference acquisition unit 60B acquires the phase difference of the light images (image 72A, image 72B) emitted from the sample T irradiated with the line illumination LA. Specifically, the phase difference acquisition unit 60B acquires the pupil split image 70 from the pupil split image imaging unit 42, and acquires, as the phase difference, the displacement between the light images (image 72A and image 72B) constituting the pupil split image 72.
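One common way to turn a pair of pupil-split images into a scalar displacement is a one-dimensional cross-correlation of their intensity profiles. The patent does not prescribe an estimation method, so the following is only an illustrative sketch with toy data:

```python
def best_shift(profile_a, profile_b, max_shift):
    """Estimate the relative displacement (phase difference) between two
    1-D intensity profiles by maximizing their overlap correlation.
    A generic illustration; not the patent's prescribed method."""
    best, best_score = 0, float("-inf")
    n = len(profile_a)
    for s in range(-max_shift, max_shift + 1):
        # Correlate profile_a with profile_b displaced by s samples.
        score = sum(profile_a[i] * profile_b[i + s]
                    for i in range(n) if 0 <= i + s < n)
        if score > best_score:
            best, best_score = s, score
    return best

a = [0, 0, 1, 5, 1, 0, 0, 0]   # toy line image A, peak at index 3
b = [0, 0, 0, 0, 1, 5, 1, 0]   # toy line image B, peak at index 5
shift = best_shift(a, b, max_shift=3)   # -> 2
```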
  • The focal position control unit 60G drives at least one of the optical systems included in the measurement unit 14 by controlling at least one of the drive units that drive those optical systems.
  • The focal position control unit 60G vibrates (that is, wobbles) the optical system including the objective lens 22, or the stage 26, to move the focal point of the objective lens 22 in the Z-axis direction.
  • a mode in which the focal position control unit 60G moves the focus of the objective lens 22 in the Z-axis direction by controlling the second drive unit 46 will be described as an example.
  • the focal position control unit 60G moves at least one of the optical system and the stage 26 to move the focal point of the objective lens 22 in the Y-axis direction or the X-axis direction.
  • In the present embodiment, a mode in which the focal position control unit 60G moves the stage 26 in the Y-axis or X-axis direction by controlling the first drive unit 44, thereby moving the focal point of the objective lens 22 in the Y-axis or X-axis direction, will be described as an example.
  • The sample T may be tilted with respect to the XY plane, the plane orthogonal to the optical axis A2.
  • FIG. 4 is a schematic view showing an example of a case where the sample T is arranged at an angle with respect to the XY plane.
  • the sample T may be arranged at an angle with respect to the XY plane in the measurement target region 24.
  • the thickness of the sample T may not be constant.
  • The distance along the optical axis A2 between the objective lens 22 and each position of the sample T specified by the X-axis and Y-axis directions may differ from position to position rather than being the same.
  • the position specified by the X-axis direction and the Y-axis direction is a position on the XY plane specified by the two-dimensional coordinates in the X-axis direction and the Y-axis direction.
  • control unit 60 of the present embodiment includes a calculation unit 60C.
  • the calculation unit 60C calculates at least one of the amplitude of the vibration of the optical system or the stage 26 and the image pickup condition by the image pickup unit 34 based on the phase difference acquired by the phase difference acquisition unit 60B.
  • the imaging condition by the imaging unit 34 includes the number of samplings in the Z-axis direction.
  • The number of samplings in the Z-axis direction means the number of images captured per cycle of the vibration, that is, the wobbling.
  • the number of samples means the number of divisions when the measurement target area 24 is divided in the thickness direction (Z-axis direction) of the sample T included in the measurement target area 24 and imaged.
  • the number of samplings in the Z-axis direction may be referred to as the number of Z stacks. In the present embodiment, the number of samples may be referred to as the number of Z stacks.
  • Vibration amplitude means wobbling amplitude.
  • the amplitude of vibration corresponds to the Z stack spacing.
  • the amplitude may be referred to as a Z stack interval.
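Under this reading, the vibration amplitude fixes the Z stack interval and the sampling number fixes how many focal planes are captured per wobbling cycle. A minimal sketch of the resulting focal positions (the even spacing centered on a nominal focus is an assumption for illustration):

```python
def z_stack_positions(center_z, interval, n_stacks):
    """Evenly spaced focal positions around center_z.
    interval: the Z stack spacing (the vibration amplitude per the text);
    n_stacks: the number of samplings per wobbling cycle.
    Hypothetical formulation, not the patent's exact rule."""
    start = center_z - interval * (n_stacks - 1) / 2
    return [start + interval * k for k in range(n_stacks)]

zs = z_stack_positions(0.0, 0.5, 5)   # -> [-1.0, -0.5, 0.0, 0.5, 1.0]
```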
  • the imaging condition may include the amount of movement of the focal point in the X-axis direction which is the first direction or the Y-axis direction which is the second direction. This amount of movement corresponds to the amount of movement of the imaging range by the imaging unit 34 in the Y-axis direction or the X-axis direction each time.
  • Based on the phase difference acquired by the phase difference acquisition unit 60B, the calculation unit 60C calculates the line center of gravity distribution, and from it calculates at least one of the Z stack interval, which is the amplitude of the vibration, and the Z stack number, which is the imaging condition of the imaging unit 34.
  • the line center of gravity distribution means the distribution of the center of gravity of the light intensity distribution in the line direction of each of the images 72A and 72B constituting the pupil division image 72.
  • FIG. 5 is an explanatory diagram of the line center of gravity distribution g.
  • the image 72A is shown as an example.
  • the line center of gravity distribution g is the distribution in the line direction of the center of gravity of the light distribution in the YA axis direction in the image 72A, which is a line-shaped light long in the XA axis direction.
  • the image 72A has a long line shape in the XA axis direction. Therefore, in the pupil division image 70, the line center of gravity distribution g is represented by a line along the XA axis direction, which is the longitudinal direction of the image 72A.
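The line center of gravity distribution g can be computed per column of the pupil-split image: for each position along the line (XA axis), take the intensity-weighted mean row along the YA axis. A minimal sketch with plain lists (a generic centroid computation, not code from the patent):

```python
def line_centroids(image):
    """Per-column centroid (center of gravity) of intensity along the
    vertical (YA) axis: the 'line center of gravity distribution' g.
    image: 2-D list indexed as image[row][col]; columns with no light
    yield None."""
    n_rows, n_cols = len(image), len(image[0])
    g = []
    for col in range(n_cols):
        weights = [image[row][col] for row in range(n_rows)]
        total = sum(weights)
        g.append(sum(row * w for row, w in enumerate(weights)) / total
                 if total else None)
    return g

img = [
    [0, 0, 0],
    [5, 1, 0],
    [1, 5, 0],
    [0, 0, 0],
]
g = line_centroids(img)   # column 0 centroid near row 1, column 1 near row 2
```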
  • the calculation unit 60C may specify the line center of gravity distribution g of either one of the image 72A and the image 72B constituting the pupil division image 72.
  • the calculation unit 60C may preset which of the image 72A and the image 72B is to be used, and specify the line center of gravity distribution g of the preset image 72A or 72B.
  • FIG. 6 is a schematic view showing an example of the pupil split image 70. It is assumed that the sample T is arranged at an angle with respect to the XY plane. In this case, as shown in FIG. 6, the light intensity distributions of the images 72A and 72B constituting the pupil division image 72 of the sample T (see FIG. 4) inclined with respect to the XY plane show different distributions in the YA axis direction along the XA axis direction with respect to their respective line center of gravity distributions g (line center of gravity distribution ga, line center of gravity distribution gb).
  • the first calculation unit 60D of the calculation unit 60C calculates the number of Z stacks based on the line center of gravity distribution of the image 72A or the image 72B.
  • a mode in which the first calculation unit 60D calculates the number of Z stacks based on the light intensity distribution with respect to the line center of gravity distribution ga, which is the line center of gravity distribution g of the image 72A will be described as an example.
  • the first calculation unit 60D may calculate the number of Z stacks based on the light intensity distribution with respect to the line center of gravity distribution gb, which is the line center of gravity distribution g of the image 72B.
  • Specifically, the first calculation unit 60D calculates, as the number of Z stacks, the number obtained by adding 1 to the number of regions in the light intensity distribution of the image 72A that are divided by the line center of gravity distribution g and whose peaks differ from each other in distance from the line center of gravity distribution g.
  • FIGS. 7A to 7C are explanatory views of the calculation of the number of Z stacks.
  • FIGS. 7A to 7C show the pupil division image 70C to the pupil division image 70E, respectively.
  • the pupil-divided image 70C to the pupil-divided image 70E are examples of the pupil-divided image 70.
  • In FIG. 7A, the image 72A constituting the pupil division image 72 is represented by a straight line intersecting the line center of gravity distribution g of the image 72A at one intersection I.
  • the light intensity distribution of the image 72A is divided into two regions EA and EB by the line center of gravity distribution g.
  • the peak P in the light intensity distribution means a point at which the distance from the line center of gravity distribution g at each position of the light intensity distribution in the XA axis direction changes from rising to falling or falling to rising in the YA axis direction.
  • In this case, the first calculation unit 60D calculates, as the number of Z stacks, “3”, which is obtained by adding “1” to “2”, the number of regions (regions EA and EB) having mutually different peaks P (peaks P1 and P2).
  • In FIG. 7B, the image 72A constituting the pupil division image 72 is represented by a waveform that intersects the line center of gravity distribution g of the image 72A at three intersections I.
  • the light intensity distribution of the image 72A is divided into four regions EC, region ED, region EE, and region EF by the line center of gravity distribution g.
  • the peak P3 of the region EC and the peak P5 of the region EF have the same distance from the line center of gravity distribution g.
  • the peak P3 of the region EC, the peak P4 of the region ED, and the peak P6 of the region EF are different in distance from the line center of gravity distribution g.
  • Therefore, the first calculation unit 60D calculates, as the number of Z stacks, “4”, which is obtained by adding “1” to “3”, the number of regions (region EC, region ED, and region EF) having mutually different peaks (peak P3, peak P4, and peak P6).
  • In FIG. 7C, the image 72A constituting the pupil division image 72 is represented by a waveform that returns to the line center of gravity distribution g after intersecting the line center of gravity distribution g of the image 72A at one intersection I.
  • the light intensity distribution of the image 72A is divided into two regions EG and EH by the line center of gravity distribution g.
  • the peak P7 of the region EG and the peak P8 of the region EH are different in distance from the line center of gravity distribution g.
  • Therefore, the first calculation unit 60D calculates, as the number of Z stacks, “3”, which is obtained by adding “1” to “2”, the number of regions (regions EG and EH) having mutually different peaks (peaks P7 and P8).
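The counting rule illustrated in FIGS. 7A to 7C can be sketched in code. The helper below is a hypothetical illustration, not the patent's implementation: it assumes the signed YA-axis deviation of the image 72A from the line center of gravity distribution g is available as a 1-D array, splits the trace into regions at the crossings of g, and adds 1 to the number of mutually distinct peak distances.

```python
import numpy as np

def count_z_stacks(dev, tol=1e-6):
    """Count Z stacks from the deviation of the image trace from the
    line center of gravity distribution g (hypothetical helper).

    dev: 1-D array of signed YA-axis distances from g along the XA axis.
    tol: tolerance for treating two peak distances as equal.
    """
    dev = np.asarray(dev, dtype=float)
    side = np.sign(dev)          # which side of g each sample lies on
    peaks = []
    start = 0
    for i in range(1, len(dev) + 1):
        # A region ends where the trace crosses g (sign change) or at the end.
        if i == len(dev) or side[i] != side[start]:
            if side[start] != 0:              # ignore samples lying exactly on g
                peaks.append(np.max(np.abs(dev[start:i])))
            start = i
    # Regions whose peak distances from g coincide (within tol) count once,
    # as with peaks P3 and P5 in FIG. 7B.
    distinct = []
    for p in peaks:
        if all(abs(p - q) > tol for q in distinct):
            distinct.append(p)
    return len(distinct) + 1     # number of Z stacks
```

A tilted straight line crossing g once with unequal peak distances (the FIG. 7A case) yields 3; four regions with one repeated peak distance (the FIG. 7B case) yield 4.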
  • the second calculation unit 60E of the calculation unit 60C calculates the Z stack interval.
  • the second calculation unit 60E calculates the Z stack interval using the pupil division image 72 included in the pupil division image 70.
  • FIG. 8 is a diagram 76 showing an example of the relationship between the focal length and the distance between the image 72A and the image 72B.
  • FIG. 9 is a schematic view of an example of the pupil split image 70F.
  • the pupil division image 70F is an example of the pupil division image 70.
  • the distance L between the image 72A and the image 72B constituting the pupil division image 72 included in the pupil division image 70F is proportional to the focal length between the objective lens 22 and the sample T.
  • the distance L between the image 72A and the image 72B is represented by the distance between the line center of gravity distribution ga which is the line center of gravity distribution g of the image 72A and the line center of gravity distribution gb which is the line center of gravity distribution g of the image 72B.
  • the second calculation unit 60E calculates the distance L between the image 72A and the image 72B constituting the pupil division image 72.
  • the second calculation unit 60E calculates the interval L using the following equations (1) to (3).
  • [Ytt, Ytb] means the range R of the light intensity distribution of the image 72A in the YA axis direction, as shown in FIG. 9.
  • Ytt means the upper end R1 of the light intensity distribution of the image 72A in the YA axis direction.
  • Ytb means the lower end R2 of the light intensity distribution of the image 72A in the YA axis direction.
  • W means the pixel width of the pupil-divided image 70.
  • the pixel width means the width of one pixel in the X-axis direction or the Y-axis direction of the imaging range of the pupil-divided image 70 in the measurement target area 24. In the present embodiment, it is assumed that the width of one pixel in the X-axis direction and the Y-axis direction of the imaging range of the pupil-divided image 70 is the same.
  • a_black is the black-level average pixel value of the region other than the light receiving region of the pupil division image 72 in the pupil division image 70.
  • a_def is the noise level in the region other than the light receiving region of the pupil division image 72 in the pupil division image 70.
  • The second calculation unit 60E calculates the range R of the image 72A using the above equation (1). Further, the second calculation unit 60E calculates the range R of the image 72B in the same manner as for the image 72A.
  • the second calculation unit 60E calculates the line center of gravity distribution g of the image 72A using the equation (2).
  • Ytc shows the line center of gravity distribution g of image 72A.
  • the second calculation unit 60E calculates the line center of gravity distribution g of the image 72B in the same manner as in the image 72A.
  • the second calculation unit 60E calculates the interval L between the line center of gravity distribution g of the image 72A and the line center of gravity distribution g of the image 72B using the equation (3).
  • Y_phase indicates the interval L between the line center of gravity distribution g of the image 72A and the line center of gravity distribution g of the image 72B.
  • Ybc shows the line center of gravity distribution g of the image 72B.
  • Ytc shows the line center of gravity distribution g of image 72A.
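Equations (1) to (3) themselves are not reproduced in this excerpt, but the textual definitions above (signal bounded by the black level a_black plus the noise level a_def, per-line centroids Ytc and Ybc, and Y_phase as their separation) can be sketched as follows. This is a hypothetical NumPy sketch under those assumptions, not the patent's exact formulas.

```python
import numpy as np

def line_centroid(img, a_black, a_def):
    """Per-column (XA axis) centroid of a line image (rows = YA axis).

    Sketch of the idea behind equations (1) and (2): pixels count as
    signal only where they exceed a_black + a_def; the centroid Ytc is
    the intensity-weighted mean YA position, one value per XA column.
    """
    img = np.asarray(img, dtype=float)
    signal = np.clip(img - (a_black + a_def), 0.0, None)
    y = np.arange(img.shape[0], dtype=float)[:, None]
    weight = signal.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        return (signal * y).sum(axis=0) / weight  # NaN where no signal

def interval_L(img_a, img_b, a_black, a_def):
    """Sketch of equation (3): Y_phase = Ybc - Ytc, averaged along XA."""
    ytc = line_centroid(img_a, a_black, a_def)
    ybc = line_centroid(img_b, a_black, a_def)
    return float(np.nanmean(ybc - ytc))
```

For two synthetic line images whose bright rows sit at YA = 1 and YA = 3, the computed interval is 2 pixels.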
  • In the present embodiment, the second calculation unit 60E calculates the interval L between the line center of gravity distribution g of the image 72A and the line center of gravity distribution g of the image 72B for each divided region obtained by dividing the pupil division image 72 along its longitudinal direction (XA axis direction).
  • FIG. 10 is an explanatory diagram for calculating the interval L for each divided region.
  • FIG. 10 shows a pupil split image 70G as an example.
  • the pupil division image 70G is an example of the pupil division image 70.
  • the first calculation unit 60D calculates "3" as the number of Z stacks.
  • In this case, the second calculation unit 60E divides the pupil division image 72 included in the pupil division image 70 into three divided regions E (divided region E1 to divided region E3) along the XA axis direction, which is the longitudinal direction of the pupil division image 72. Then, the second calculation unit 60E calculates the interval L between the image 72A and the image 72B constituting the pupil division image 72 for each divided region E by using the above equations (1) to (3).
  • Specifically, the second calculation unit 60E calculates, as the intervals L, the interval L1 between the line center of gravity distribution ga1 and the line center of gravity distribution gb1 for the divided region E1, the interval L2 between the line center of gravity distribution ga2 and the line center of gravity distribution gb2 for the divided region E2, and the interval L3 between the line center of gravity distribution ga3 and the line center of gravity distribution gb3 for the divided region E3.
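The per-region intervals L1 to L3 can be sketched with a small helper (hypothetical names; it assumes the per-column centroids of the images 72A and 72B along the XA axis are already available, e.g. from equations (1) and (2)):

```python
import numpy as np

def intervals_per_region(ytc, ybc, n_regions):
    """Mean interval L for each divided region E1..En (hypothetical sketch).

    ytc, ybc: per-column line centroids of the images 72A and 72B along
    the XA axis. The XA axis is split into n_regions equal slices and
    the mean centroid separation is returned for each slice.
    """
    diff = np.asarray(ybc, dtype=float) - np.asarray(ytc, dtype=float)
    return [float(np.nanmean(s)) for s in np.array_split(diff, n_regions)]
```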
  • the second calculation unit 60E calculates the Z stack interval based on the interval L.
  • the second calculation unit 60E calculates the focus adjustment amount corresponding to the difference between the interval L and the reference interval as the Z stack interval.
  • The reference interval is the interval L between the image 72A and the image 72B when the focal point of the objective lens 22 is focused on the sample T.
  • the reference interval may be measured in advance using a known contrast method or the like.
  • Relational information representing the relationship between the difference between the interval L and the reference interval and the focus adjustment amount for focusing on the sample T when the images 72A and 72B of that interval L are obtained is calculated in advance by the second calculation unit 60E and stored in the storage unit 62.
  • the second calculation unit 60E calculates the focus adjustment amount by using the difference between the interval L and the reference interval and the relational information.
  • the second calculation unit 60E may calculate the calculated focus adjustment amount as the Z stack interval.
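The mapping from interval error to focus travel can be sketched as a one-line helper. The patent only states that precomputed relational information in the storage unit 62 links the two quantities; modeling it as a single linear gain is an assumption made here purely for illustration.

```python
def z_stack_interval(interval, reference_interval, gain):
    """Focus adjustment amount (used as the Z stack interval) from the
    deviation of the measured interval L from the in-focus reference
    interval. `gain` stands in for the stored relational information
    (e.g. micrometres of focus travel per pixel of interval error) and
    is a hypothetical linear model, not the patent's stored table.
    """
    return gain * (interval - reference_interval)
```

Where the relationship is not linear, a lookup table or interpolated calibration curve would play the same role as `gain`.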
  • the second calculation unit 60E may calculate the Z stack interval for each division area E based on each interval L of the division area E.
  • The focal position control unit 60G drives at least one of the optical system including the objective lens 22 and the stage 26 according to the number of Z stacks and the Z stack interval calculated by the calculation unit 60C.
  • the focus position control unit 60G drives and controls the second drive unit 46 that drives the objective lens 22, which is a focus lens that focuses the line illumination LA on the measurement target area 24.
  • the focal position control unit 60G moves the focal point of the objective lens 22 in the Z-axis direction.
  • Specifically, the focal position control unit 60G drives and controls the second drive unit 46 to move the position of the objective lens 22 in the Z-axis direction toward the stage 26 in stages, by the number of steps represented by the number of Z stacks.
  • the focus position control unit 60G may move the position of the objective lens 22 in the Z-axis direction by driving and controlling at least one of the second drive unit 46 and the first drive unit 44.
  • the Z stack interval per step may be a predetermined amount.
  • the focus position control unit 60G may use the Z stack interval calculated by the calculation unit 60C as the Z stack interval per step.
  • the form in which the focus position control unit 60G uses the Z stack interval calculated by the calculation unit 60C will be described as an example.
  • When the Z stack interval is calculated for each divided region E, the focus position control unit 60G may use the Z stack interval corresponding to each divided region E as the Z stack interval per step.
  • The captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T from the imaging unit 34 for each wobbling step performed by the focus position control unit 60G.
  • the processing by the focus position control unit 60G and the captured image acquisition unit 60H will be described in detail.
  • FIG. 11 is an explanatory diagram of an example of wobbling according to the present embodiment.
  • FIG. 11 shows an example of the XZ cross section of the measurement target region 24 mounted on the stage 26.
  • FIG. 11 shows a case where the number of Z stacks is “3” as an example.
  • FIG. 11 shows a form in which different Z stack intervals ( ⁇ Z1 to ⁇ Z3) are calculated as Z stack intervals corresponding to each of the three divided regions E.
  • The focal position control unit 60G moves the focal position FP three times, which is the number of Z stacks, in the thickness direction (Z-axis direction) of the measurement target region 24 (see focal position FP1 to focal position FP3). Then, each time the focal position FP is moved three times, which is the number of Z stacks, the focal position control unit 60G moves the imaging range S in the X-axis direction, which is the longitudinal direction of the irradiation region of the line illumination LA in the measurement target region 24.
  • the imaging range S means the imaging range of the imaging unit 34.
  • the focal position control unit 60G moves the imaging range S in the X-axis direction.
  • the focal position control unit 60G drives and controls the first drive unit 44 to move the image pickup range S of the image pickup unit 34 along the X-axis direction by the width W of the image pickup range S.
  • the width W has the same meaning as the pixel width described above.
  • That is, the focal position control unit 60G moves the focal position FP, that is, the imaging range S of the imaging unit 34, three times, which is the number of Z stacks, in the thickness direction of the measurement target region 24. Each time the focal position FP is moved three times in the thickness direction, the focal position control unit 60G moves the imaging range S in the X-axis direction by the width W of the imaging range S. The focal position control unit 60G repeatedly executes this series of processes.
  • the focal position control unit 60G moves the focal position FP of the objective lens 22 by the Z stack interval ⁇ Z1 for the divided region E1. Further, the focal position control unit 60G moves the focal position FP of the objective lens 22 by the Z stack interval ⁇ Z2 for the divided region E2. Further, the focal position control unit 60G moves the focal position FP of the objective lens 22 by the Z stack interval ⁇ Z3 for the divided region E3.
  • By these processes, the focal position control unit 60G moves the focal position FP to each of the focal position FP1 to the focal position FP9, and moves the imaging range S of the imaging unit 34 in order to the positions of the imaging range S1 to the imaging range S9 in FIG. 11.
  • The captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. Therefore, the captured image acquisition unit 60H acquires a focused captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
  • Note that the focal position control unit 60G may move the imaging range S along the X-axis direction by the movement amount (W/N), obtained by dividing the width W of the imaging range S by the number of Z stacks, each time the focal position FP is moved once in the thickness direction (Z-axis direction) of the measurement target region 24. Here, W indicates the width W, and N indicates the number of Z stacks.
  • the first calculation unit 60D may calculate the movement amount of the focal point, that is, the movement amount of the imaging range S by dividing the width W by the number of Z stacks calculated by the above process.
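The two scan orders described above (FIG. 11: shift the imaging range by the full width W after the N Z-stack moves; FIG. 12A: shift by W/N at every Z move) can be sketched as a position generator. The function and its argument names are illustrative, not from the patent:

```python
def wobble_positions(n_z, n_columns, width, shift_per_z=False):
    """Visit order of (x_offset, z_index) pairs for the wobbling scan.

    shift_per_z=False: advance the imaging range by the full width W
    only after all n_z Z-stack moves (the FIG. 11 pattern).
    shift_per_z=True: advance by W / n_z at every Z move (the FIG. 12A
    staircase pattern).
    """
    positions = []
    x = 0.0
    for _ in range(n_columns):
        for z in range(n_z):
            positions.append((x, z))
            if shift_per_z:
                x += width / n_z
        if not shift_per_z:
            x += width
    return positions
```

With n_z = 3 and n_columns = 3 either mode yields nine positions, matching the imaging ranges S1 to S9.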
  • FIG. 12A is an explanatory diagram of an example of wobbling according to the present embodiment.
  • FIG. 12A shows an example of the XZ cross section of the measurement target region 24 mounted on the stage 26. Further, FIG. 12A shows a mode in which the number of Z stacks is “3” and the imaging range S is moved by 1/3 of the width W in the X-axis direction.
  • the focal position control unit 60G moves the imaging range S by a distance W / N along the X-axis direction each time the focal position FP is moved in the thickness direction once.
  • The focal position control unit 60G executes this series of processes three times, which is the number of Z stacks, from the surface of the measurement target region 24 toward the stage 26. Then, each time the focal position FP is moved three times in the thickness direction, the focal position control unit 60G returns the imaging range S to the surface side of the measurement target region 24 and moves the imaging range S by a distance W/N along the X-axis direction. The focal position control unit 60G repeatedly executes this series of processes.
  • By these processes, the focal position control unit 60G moves the focal position FP to each of the focal position FP1 to the focal position FP9, and moves the imaging range S of the imaging unit 34 in order to the positions of the imaging range S1 to the imaging range S9 in FIG. 12A.
  • The captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. Therefore, the captured image acquisition unit 60H acquires a focused captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
  • Alternatively, each time the focal position control unit 60G moves the focal point of the objective lens 22 from one end side to the other end side along the X-axis direction, which is the longitudinal direction of the irradiation region of the line illumination LA in the measurement target region 24, the focal position FP may be moved in the thickness direction (Z-axis direction) of the measurement target region 24.
  • FIG. 12B is an explanatory diagram of an example of wobbling according to the present embodiment.
  • FIG. 12B shows an example of the XZ cross section of the measurement target region 24 mounted on the stage 26. Further, FIG. 12B shows a mode in which the number of Z stacks is “3” and the imaging range S is moved by 1/3 of the width W in the X-axis direction. W and N are the same as above.
  • In this case, the focal position control unit 60G moves the imaging range S by the width W along the X-axis direction, which is the longitudinal direction of the irradiation region of the line illumination LA in the measurement target region 24. Then, each time the imaging range S is moved from one end to the other end of the measurement target region 24 in the X-axis direction, the focal position control unit 60G executes, three times, which is the number of Z stacks, a series of processes of moving the focal position FP by the Z stack interval in the thickness direction of the measurement target region 24 and moving the imaging range S by a distance W/N in the X-axis direction.
  • By these processes, the focal position control unit 60G moves the focal position FP to each of the focal position FP1 to the focal position FP9, and moves the imaging range S of the imaging unit 34 in order to the positions of the imaging range S1 to the imaging range S9 in FIG. 12B.
  • The captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. Therefore, the captured image acquisition unit 60H acquires a focused captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
  • Note that each time the focal position control unit 60G moves the focal position FP once in the thickness direction (Z-axis direction) of the measurement target region 24, the imaging range S may be moved along the Y-axis direction, which intersects the X-axis direction, by a distance (W/N) obtained by dividing the width W of the imaging range S by the number of Z stacks.
  • the width W in this case may be the width in the Y-axis direction in the imaging range S.
  • the Y-axis direction corresponds to the scanning direction of the line illumination LA.
  • FIG. 13A is an explanatory diagram of an example of wobbling according to the present embodiment.
  • FIG. 13A shows an example of a YZ cross section of the measurement target region 24 mounted on the stage 26. Further, FIG. 13A shows a mode in which the number of Z stacks is “3” and the imaging range S is moved by 1/3 of the width W in the Y-axis direction.
  • the focal position control unit 60G moves the imaging range S by a distance W / N along the Y-axis direction each time the focal position FP is moved in the thickness direction once.
  • The focal position control unit 60G executes this series of processes three times, which is the number of Z stacks, from the surface of the measurement target region 24 toward the stage 26. Then, each time the focal position FP is moved three times in the thickness direction, the focal position control unit 60G returns the imaging range S to the surface side of the measurement target region 24 and moves the imaging range S by a distance W/N along the Y-axis direction. The focal position control unit 60G repeatedly executes this series of processes.
  • By these processes, the focal position control unit 60G moves the focal position FP to each of the focal position FP1 to the focal position FP9, and moves the imaging range S of the imaging unit 34 in order to the positions of the imaging range S1 to the imaging range S9 in FIG. 13A.
  • The captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. Therefore, the captured image acquisition unit 60H acquires a focused captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
  • Alternatively, each time the focal position control unit 60G moves the focal point of the objective lens 22 from one end side to the other end side along the Y-axis direction, which is the scanning direction of the line illumination LA, in the measurement target region 24, the focal position FP may be moved in the thickness direction (Z-axis direction) of the measurement target region 24.
  • FIG. 13B is an explanatory diagram of an example of wobbling according to the present embodiment.
  • FIG. 13B shows an example of a YZ cross section of the measurement target region 24 mounted on the stage 26. Further, FIG. 13B shows a mode in which the number of Z stacks is “3” and the imaging range S is moved by 1/3 of the width W in the Y-axis direction. W and N are the same as above.
  • In this case, the focal position control unit 60G moves the imaging range S by the width W along the Y-axis direction, which is the scanning direction of the line illumination LA in the measurement target region 24. Then, each time the imaging range S is moved from one end to the other end of the measurement target region 24 in the Y-axis direction, the focal position control unit 60G executes, three times, which is the number of Z stacks, a series of processes of moving the focal position FP by the Z stack interval in the thickness direction of the measurement target region 24 and moving the imaging range S by a distance W/N in the Y-axis direction.
  • By these processes, the focal position control unit 60G moves the focal position FP to each of the focal position FP1 to the focal position FP9, and moves the imaging range S of the imaging unit 34 in order to the positions of the imaging range S1 to the imaging range S9 in FIG. 13B.
  • The captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T each time the focal position FP, that is, the imaging range S, is moved. Therefore, the captured image acquisition unit 60H acquires a focused captured image for each of the imaging ranges S1 to S9 in the measurement target region 24.
  • the captured image acquisition unit 60H acquires captured images of a plurality of imaging ranges S (imaging range S1 to imaging range S9) having different focal positions FP for one measurement target region 24.
  • The imaging ranges S1 to S9 are imaging ranges S that are at least partially non-overlapping with each other in at least one of the thickness direction (Z-axis direction) of the measurement target region 24 and the direction along the XY plane. Further, each of the captured images of the imaging ranges S1 to S9 is captured with the corresponding imaging range S at the focal position FP. Therefore, as shown in FIGS. 11 to 13B, even when the sample T is arranged at an angle with respect to the XY plane, which is the orthogonal plane orthogonal to the optical axis A2, the captured image acquisition unit 60H can obtain images captured in focus at each position of the sample T.
  • the output control unit 60I outputs the captured image acquired by the captured image acquisition unit 60H to an external device such as the server device 10 via the communication unit 64.
  • the output control unit 60I may store the captured image acquired by the captured image acquisition unit 60H in the storage unit 62. Further, the output control unit 60I may output the captured image to the display connected to the control unit 60.
  • Note that the output control unit 60I may analyze the type of the sample T and the like by analyzing the captured image acquired by the captured image acquisition unit 60H with a known method, and output the analysis result to the server device 10 or the like.
  • FIG. 14 is a flowchart showing an example of the flow of information processing executed by the control device 16.
  • the light source control unit 60A controls the light source 18B so as to irradiate the line illumination LA (step S100).
  • the line illumination LA is irradiated from the light source 18B, and the line illumination LA is irradiated to the sample T.
  • the phase difference acquisition unit 60B acquires the pupil division image 72 of the light emitted from the sample T irradiated with the line illumination LA from the pupil division image imaging unit 42 (step S102). By the process of step S102, the phase difference acquisition unit 60B acquires the image 72A and the image 72B, which are the pupil division images 72, as the phase difference.
  • the first calculation unit 60D calculates the number of Z stacks based on the line center of gravity distribution g of the pupil division image 72 acquired in step S102 (step S104).
  • the second calculation unit 60E calculates the distance L between the image 72A and the image 72B constituting the pupil division image 72 acquired in step S102 (step S106).
  • the second calculation unit 60E calculates the Z stack interval based on the interval L calculated in step S106 (step S108).
  • Next, the focus position control unit 60G executes the focus position control (step S110). Specifically, the focus position control unit 60G moves the focal position FP once in the thickness direction (Z-axis direction) of the measurement target region 24 or in the direction along the XY plane. When moving the focal position FP in the thickness direction of the measurement target region 24, the focus position control unit 60G moves the focal position FP in the thickness direction by the Z stack interval calculated in step S108. Further, as described with reference to FIGS. 11 to 13B, the focal position control unit 60G moves the focal position FP (imaging range S) once in the thickness direction (Z-axis direction) of the measurement target region 24 or in the direction along the XY plane.
  • When the Z stack intervals corresponding to the respective divided regions E are calculated in step S108, the focal position control unit 60G may move the focal position FP of the objective lens 22 in the Z-axis direction using the Z stack interval of the corresponding divided region E.
  • the focus position control unit 60G may use the number of Z stacks calculated in step S104 for all the divided regions E of the measurement target region 24.
  • The captured image acquisition unit 60H acquires a captured image of the light emitted from the sample T from the imaging unit 34 each time the focal position control unit 60G moves the focal position FP (imaging range S) (step S112).
  • the output control unit 60I stores the captured image acquired in step S112 in the storage unit 62 (step S114).
  • The control unit 60 determines whether or not to end the process (step S116). For example, the control unit 60 determines whether or not the captured images of the imaging ranges S1 to S9 at the focal positions FP1 to FP9 described with reference to FIGS. 11 to 13B have been acquired for the measurement target region 24. When the control unit 60 determines that all of these captured images have been acquired, it makes an affirmative determination (step S116: Yes) and ends this routine. On the other hand, when the determination is negative (step S116: No), the control unit 60 returns to step S110.
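The flow of FIG. 14 can be summarized as a driver loop. Everything below is a hypothetical facade: none of the method names come from the patent, and the real units (60A to 60I) are hardware-backed; the sketch only mirrors the order of steps S100 to S116.

```python
def run_acquisition(system):
    """Mirror of steps S100-S116 over an assumed `system` facade object."""
    system.irradiate_line_illumination()                # S100
    pupil_image = system.acquire_pupil_split_image()    # S102 (phase difference)
    n_z = system.calc_z_stack_count(pupil_image)        # S104
    interval = system.calc_interval_L(pupil_image)      # S106
    dz = system.calc_z_stack_interval(interval)         # S108
    images = []
    while not system.all_ranges_covered():              # S116
        system.move_focal_position(dz, n_z)             # S110
        images.append(system.capture_image())           # S112 (then stored, S114)
    return images
```

In the real system the loop terminates once captured images for all imaging ranges S1 to S9 have been acquired.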
  • As described above, the microscope system 1 of the present embodiment includes an imaging unit 34, an irradiation unit 18, a stage 26, a phase difference acquisition unit 60B, a focal position control unit 60G, and a calculation unit 60C.
  • the irradiation unit 18 irradiates the line illumination LA parallel to the first direction (X-axis direction).
  • the stage 26 supports the sample T and can move in the second direction (Y-axis direction) perpendicular to the first direction.
  • the phase difference acquisition unit 60B acquires the phase difference of the light images (image 72A, image 72B) emitted from the sample T by being irradiated with the line illumination LA.
  • The focal position control unit 60G vibrates the optical system including the objective lens 22 or the stage 26 to move the focal point of the objective lens 22 in the third direction (Z-axis direction), which is perpendicular to each of the first direction and the second direction.
  • the calculation unit 60C calculates at least one of the vibration amplitude and the image pickup condition by the image pickup unit 34 based on the phase difference.
  • the sample T may be arranged at an angle with respect to the XY plane, which is an orthogonal plane orthogonal to the optical axis A2.
  • the thickness of the sample T may not be constant.
  • As a conventional technique, a technique of identifying an optimum focal length from an image captured by an imager arranged at an angle is disclosed. Further, as a conventional technique, for example, a technique for obtaining a Z-stack image while maintaining a constant difference from the focusing depth for prefocus is disclosed.
  • However, in these conventional techniques, imaging focused on each position of the sample T, that is, imaging focused on each position of the sample T in the X-axis direction and the Y-axis direction, has not been performed.
  • conventionally, in wideband microscopic photography such as focusing on the entire visible light or blue and observing with infrared light, when the wavelength of the light used for focusing and the observation wavelength are different, the shooting wavelength is determined by the chromatic aberration of the lens. Sometimes it was out of focus.
  • In the microscope system 1 of the present embodiment, at least one of the amplitude of the vibration of the optical system including the objective lens 22, or of the stage 26, and the imaging conditions used by the imaging unit 34 is calculated based on the phase difference of the image of the light emitted from the sample T when it is irradiated with the line illumination LA.
  • Therefore, with the microscope system 1 of the present embodiment, it is possible to obtain a captured image focused at each position in the thickness direction of the sample T.
  • the microscope system 1 of the present embodiment can obtain a highly accurate captured image focused on each position of the sample T.
  • The imaging conditions, including the number of Z stacks, which is the number of samplings in the third direction (Z-axis direction), are calculated based on the line center-of-gravity distribution g of the pupil division image 72. Then, in the microscope system 1, the focal position FP is moved according to the calculated imaging conditions, and a captured image of the light emitted from the sample T is acquired each time the focal position FP is moved. Therefore, in the present embodiment, focusing and imaging can be performed alternately, and a captured image in which the chromatic aberration of the lens is canceled can be obtained.
  • the microscope system 1 of the present embodiment can obtain a clear captured image in which chromatic aberration is suppressed.
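Configuration (5) below describes the number of Z samplings as the number of peak regions at mutually different distances in the line center-of-gravity distribution, plus 1. The following is a minimal sketch of that counting rule under stated assumptions: a simple strict local-maximum peak detector and a grouping tolerance `tol`, neither of which is specified in the patent.

```python
import numpy as np

def z_stack_count(line_cog, tol=0.5):
    """Derive a Z-sampling count from a line center-of-gravity
    distribution g: count peak regions whose distance values differ
    by more than `tol`, then add 1 (rule of configuration (5))."""
    g = np.asarray(line_cog, dtype=float)
    # Strict local maxima (greater than both neighbours).
    peaks = [i for i in range(1, len(g) - 1)
             if g[i] > g[i - 1] and g[i] > g[i + 1]]
    # Keep one representative per group of similar distance values.
    distinct = []
    for i in peaks:
        if all(abs(g[i] - d) > tol for d in distinct):
            distinct.append(g[i])
    return len(distinct) + 1

# Peaks at two different distance values -> 2 + 1 = 3 Z samplings.
g = [0, 1, 0, 1, 0, 3, 0, 3, 0]
print(z_stack_count(g))  # 3
```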
  • The microscope system 1 of the present embodiment uses the line illumination LA as the illumination applied during wobbling. The irradiation time of light on the measurement target region 24 can therefore be shortened compared with the case where line-shaped illumination is not used. Accordingly, in addition to the above effects, the microscope system 1 of the present embodiment can suppress fading of the sample T contained in the measurement target region 24.
  • In the above embodiment, the mode has been described in which the focal position control unit 60G moves the focal position FP by driving and controlling at least one of the first drive unit 44 and the second drive unit 46, which change the relative position of the objective lens 22 and the measurement target region 24.
  • However, the focal position control unit 60G may instead move the focal position FP by driving and controlling a member that changes the optical path length between the objective lens 22 and the measurement target region 24.
  • FIG. 15 is a schematic view showing an example of the microscope system 1B of this modified example.
  • the microscope system 1B has the same configuration as the microscope system 1 except that it further includes a third drive unit 48 and a variable member 50.
  • The variable member 50 is a member for changing the optical path length between the measurement target region 24 and the objective lens 22, which concentrates the line illumination LA on the measurement target region 24.
  • the variable member 50 is arranged, for example, between the objective lens 22 and the measurement target region 24.
  • FIGS. 16A to 16C are schematic views showing an example of the variable member 50.
  • FIG. 16A is an example of a side view of the variable member 50.
  • FIG. 16B is an example of a front view of the variable member 50.
  • FIG. 16C is an example of a perspective view of the variable member 50.
  • the variable member 50 is a disk-shaped member rotatably supported with the rotation center C as the rotation axis.
  • the variable member 50 may be made of a material that transmits line illumination LA.
  • the variable member 50 is made of, for example, optical glass.
  • the variable member 50 includes a plurality of regions 52 (regions 52A to 52C) having different thicknesses along the circumferential direction.
  • the rotation center C of the variable member 50 is connected to the third drive unit 48. By driving the third drive unit 48, the variable member 50 rotates in the direction of arrow Q with the rotation center C as the rotation axis.
  • the rotation of the variable member 50 changes the region 52 (regions 52A to 52C) through which the line illumination LA passes, so that the optical path length changes.
  • the third drive unit 48 is electrically connected to the control device 16.
  • The focal position control unit 60G of the control device 16 may move the focal position FP by driving and controlling the third drive unit 48 so as to rotate the variable member 50, thereby changing the optical path length between the objective lens 22 and the measurement target region 24.
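The focus shift produced by inserting a glass region of a given thickness can be approximated from its optical path length: in the paraxial approximation, a parallel plate of refractive index n and thickness t displaces the focal point by roughly t·(n − 1)/n, so switching between regions 52A to 52C of different thickness steps the focal position FP. This is a back-of-the-envelope sketch; the refractive index value and the paraxial formula are assumptions for illustration, not figures from the patent.

```python
def focal_shift_mm(thickness_mm, n=1.52):
    """Approximate axial focus shift introduced by a parallel glass
    plate (paraxial approximation): shift ~= t * (n - 1) / n."""
    return thickness_mm * (n - 1.0) / n

# Example thicknesses for the regions of the variable member.
for t in (1.0, 2.0, 3.0):
    print(f"{t} mm plate -> {focal_shift_mm(t):.3f} mm focus shift")
```

Because the shift is linear in thickness, evenly stepped region thicknesses give evenly stepped focal positions.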
  • FIG. 17 is a hardware configuration diagram showing an example of a computer 1000 that realizes the functions of the control device 16 according to the above embodiment and the modified example.
  • The computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600. The parts of the computer 1000 are connected by a bus 1050.
  • the CPU 1100 operates based on the program stored in the ROM 1300 or the HDD 1400, and controls each part. For example, the CPU 1100 expands the program stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes a process corresponding to the program.
  • the ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 is started, a program depending on the hardware of the computer 1000, and the like.
  • The HDD 1400 is a computer-readable recording medium that non-transitorily records a program executed by the CPU 1100 and data used by the program.
  • the HDD 1400 is a recording medium for recording the focus adjustment program according to the present disclosure, which is an example of the program data 1450.
  • the communication interface 1500 is an interface for the computer 1000 to connect to an external network 1550 (for example, the Internet).
  • the CPU 1100 receives data from another device or transmits data generated by the CPU 1100 to another device via the communication interface 1500.
  • The input/output interface 1600 is an interface for connecting the input/output device 1650 and the computer 1000.
  • The CPU 1100 receives data from an input device such as a keyboard or mouse via the input/output interface 1600. The CPU 1100 also transmits data to an output device such as a display, a speaker, or a printer via the input/output interface 1600. Further, the input/output interface 1600 may function as a media interface for reading a program or the like recorded on a predetermined recording medium (media).
  • The media include, for example, an optical recording medium such as a DVD (Digital Versatile Disc) or a PD (Phase change rewritable Disc), a magneto-optical recording medium such as an MO (Magneto-Optical disc), a tape medium, a magnetic recording medium, and a semiconductor memory.
  • The CPU 1100 of the computer 1000 executes the program loaded into the RAM 1200, thereby realizing the functions of the light source control unit 60A, the phase difference acquisition unit 60B, the calculation unit 60C, the first calculation unit 60D, the second calculation unit 60E, the focal position control unit 60G, the captured image acquisition unit 60H, and the output control unit 60I.
  • the program and data related to the present disclosure are stored in the HDD 1400.
  • the CPU 1100 reads the program data 1450 from the HDD 1400 and executes the program, but as another example, these programs may be acquired from another device via the external network 1550.
  • the present technology can also have the following configurations.
  • (1) A microscope system including: an imaging unit; an irradiation unit that irradiates line illumination parallel to a first direction; a stage that supports a sample and can move in a second direction perpendicular to the first direction; a phase difference acquisition unit that acquires a phase difference of an image of light emitted from the sample when it is irradiated with the line illumination; a focal position control unit that vibrates an optical system including an objective lens, or the stage, to move the focal point of the objective lens in a third direction perpendicular to each of the first direction and the second direction; and a calculation unit that calculates, based on the phase difference, at least one of the amplitude of the vibration and an imaging condition of the imaging unit.
  • (2) The calculation unit calculates a line center-of-gravity distribution based on the phase difference, and calculates at least one of the amplitude of the vibration and the imaging condition of the imaging unit.
  • (3) The microscope system according to (1) or (2), wherein the phase difference acquisition unit acquires, as the phase difference, a pupil-divided image of the image of light emitted from the sample.
  • (4) The calculation unit calculates, based on the phase difference, the imaging condition including the number of samplings in the third direction.
  • (5) The calculation unit derives, as the number of samplings, the number obtained by adding 1 to the number of regions in the image that have peaks in the line center-of-gravity distribution and differ from one another in the distance indicated by the line center-of-gravity distribution.
  • (6) The calculation unit calculates the amplitude for each divided region, based on the phase difference of each divided region obtained by dividing the pupil-divided image of the image of light emitted from the sample into the number of samplings along the longitudinal direction. The microscope system according to (4).
  • (7) The focal position control unit moves at least one of the optical system including the objective lens and the stage to move the focal point of the objective lens in the first direction or the second direction, and the calculation unit calculates, based on the phase difference, the imaging condition including the amount of movement of the focal point in the first direction or the second direction. The microscope system according to any one of (1) to (6).
  • (8) The focal position control unit moves the focal point of the objective lens in the first direction or the second direction each time it moves at least one of the optical system and the stage to move the focal point of the objective lens in the third direction. The microscope system according to any one of (1) to (7).
  • (9) The focal position control unit moves the focal point of the objective lens in the third direction each time it moves at least one of the optical system and the stage to move the focal point of the objective lens in the first direction or the second direction from one end side to the other end side of the sample.
  • (10) The focal position control unit moves the focal point of the objective lens by driving and controlling a variable member that changes the optical path length between the objective lens and the sample. The microscope system according to any one of (1) to (7).
  • (11) An imaging method in which a computer that controls an imaging unit, an irradiation unit that irradiates line illumination parallel to a first direction, and a stage that supports a sample and can move in a second direction perpendicular to the first direction executes: a step of acquiring a phase difference of an image of light emitted from the sample when it is irradiated with the line illumination; a step of vibrating an optical system including an objective lens, or the stage, to move the focal point of the objective lens in a third direction perpendicular to each of the first direction and the second direction; and a step of calculating, based on the phase difference, at least one of the amplitude of the vibration and an imaging condition of the imaging unit.
  • (12) An imaging device including a measuring unit and software used to control the operation of the measuring unit, wherein the software is installed in the imaging device; the measuring unit includes an imaging unit, an irradiation unit that irradiates line illumination parallel to a first direction, and a stage that supports a sample and can move in a second direction perpendicular to the first direction; and the software causes the imaging device to acquire a phase difference of an image of light emitted from the sample when it is irradiated with the line illumination, to vibrate an optical system including an objective lens, or the stage, to move the focal point of the objective lens in a third direction perpendicular to each of the first direction and the second direction, and to calculate, based on the phase difference, at least one of the amplitude of the vibration and an imaging condition of the imaging unit.
  • 1 Microscope system, 12 Imaging device, 18 Irradiation unit, 22 Objective lens, 24 Measurement target region, 34 Imaging unit, 42 Pupil-divided image imaging unit, 44 First drive unit, 46 Second drive unit, 48 Third drive unit, 50 Variable member, 60B Phase difference acquisition unit, 60C Calculation unit, 60D First calculation unit, 60E Second calculation unit, 60G Focal position control unit, 60H Captured image acquisition unit, T Sample

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Microscopes, Condensers (AREA)

Abstract

The present invention relates to a microscope system that includes an imaging unit, an irradiation unit, a stage, a phase difference acquisition unit, a focal position control unit, and a calculation unit. The irradiation unit emits line illumination parallel to a first direction. The stage supports a sample and can move in a second direction perpendicular to the first direction. The phase difference acquisition unit acquires a phase difference of an image of light emitted from the sample irradiated with the line illumination. The focal position control unit causes an optical system including an objective lens, or the stage, to vibrate so as to move the focal point of the objective lens in a third direction perpendicular to both the first direction and the second direction. The calculation unit calculates an amplitude of the vibration and/or an imaging condition of the imaging unit based on the phase difference.
PCT/JP2021/010981 2020-03-27 2021-03-18 Microscope system, imaging method, and imaging device WO2021193325A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-057778 2020-03-27
JP2020057778 2020-03-27

Publications (1)

Publication Number Publication Date
WO2021193325A1 true WO2021193325A1 (fr) 2021-09-30

Family

ID=77891710

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/010981 WO2021193325A1 (fr) Microscope system, imaging method, and imaging device

Country Status (1)

Country Link
WO (1) WO2021193325A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002323316A * 2001-04-25 2002-11-08 Nikon Corp Focus position detection device
JP2006011446A * 2004-06-24 2006-01-12 Fujifilm Electronic Imaging Ltd Method and apparatus for forming a multiple-focus stack image
US20070069106A1 * 2005-06-22 2007-03-29 Tripath Imaging, Inc. Apparatus and Method for Rapid Microscope Image Focusing
JP2010191000A * 2009-02-16 2010-09-02 Nikon Corp Microscope and image acquisition method in a microscope
JP2016051167A * 2014-08-29 2016-04-11 Canon Inc Image acquisition device and control method thereof
WO2019230878A1 * 2018-05-30 2019-12-05 Sony Corporation Fluorescence observation device and fluorescence observation method


Similar Documents

Publication Publication Date Title
US9832365B2 Autofocus based on differential measurements
JP3090679B2 Method and apparatus for measuring a plurality of optical characteristics of a biological specimen
US8027548B2 Microscope system
US10365468B2 Autofocus imaging
US7232980B2 Microscope system
JP5577885B2 Microscope and focusing method
US8760756B2 Automated scanning cytometry using chromatic aberration for multiplanar image acquisition
JP6408543B2 Imaging of a light field using a scanning optical unit
JP6662529B2 System and method for continuous asynchronous autofocusing of optical instruments
JPS6394156A Apparatus for analyzing cells in a fluid
JPH08320285A Particle analyzer
CN112703440A Microscope system
WO2010106928A1 Image creation device and image creation method
WO2021167044A1 Microscope system, imaging method, and imaging device
EP3660572A1 Sample observation device and sample observation method
WO2021193325A1 Microscope system, imaging method, and imaging device
US20220146805A1 Microscope system, focus adjustment program, and focus adjustment system
WO2016024429A1 Fine particle detection device
WO2021193177A1 Microscope system, imaging method, and imaging device
JP2002062251A Flow-type particle image analysis method and apparatus
JP2001304844A Method for setting the position of a measurement object in layer-thickness measurement by X-ray fluorescence
WO2024106237A1 Sample observation system, sample observation device, and sample observation method
JPH08145872A Flow-type particle image analysis method and apparatus
JP7427291B2 Imaging flow cytometer
JP2004177732A Optical measurement device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21775192

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21775192

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP