JP2014240888A - Image acquisition apparatus and focusing method for the image acquisition apparatus

Publication number
JP2014240888A
Authority
JP
Japan
Prior art keywords
image
optical path
sample
imaging
optical
Prior art date
Legal status
Granted
Application number
JP2013122960A
Other languages
Japanese (ja)
Other versions
JP6010506B2 (en)
Inventor
英資 大石 (Hideyori Oishi)
Original Assignee
浜松ホトニクス株式会社 (Hamamatsu Photonics K.K.)
Priority date
Filing date
Publication date
Application filed by 浜松ホトニクス株式会社 (Hamamatsu Photonics K.K.)
Priority to JP2013122960A
Priority claimed from PCT/JP2014/055988 (WO2014174920A1)
Publication of JP2014240888A
Application granted
Publication of JP6010506B2
Application status: Active
Anticipated expiration


Abstract

PROBLEM TO BE SOLVED: To provide an image acquisition apparatus capable of securing the amount of light during imaging while accurately detecting the focal position of a sample, and a focusing method for the same.

SOLUTION: An image acquisition apparatus M uses an optical path difference generating member 21 to create an optical path length difference in a second optical image without branching the light in a second optical path L2. The amount of light diverted to the optical path L2, needed to acquire focal position information, is thereby kept small, securing the amount of light for imaging with a first imaging device 18. The optical path difference generating member 21 forms, on an imaging surface 20a of a second imaging device 20, a first incident region 24A on which the second optical image with an optical path difference is incident and a second incident region 24B on which the second optical image without an optical path difference is incident. A focal point calculation unit 37 can therefore select, depending on the situation, which second image to use for the analysis of focal point information, and can calculate the focal point information with high accuracy.

Description

  The present invention relates to an image acquisition apparatus and a focusing method for the image acquisition apparatus.

  As a conventional image acquisition apparatus, there is, for example, the apparatus described in Patent Document 1. In this apparatus, light from a test object is split by a half prism and received by a photoelectric conversion element formed of a two-dimensional imaging element such as a CCD area sensor. The control circuit of the photoelectric conversion element has a scanning area setting unit that can arbitrarily set two scanning areas for two-dimensionally scanning the light receiving surface. Focusing control is then executed based on the focusing error signals of the light received in the two scanning areas set by the scanning area setting unit.

JP-A-8-320430

  In the conventional apparatus described above, the light from the test object is split using a half prism. This makes it difficult to secure the amount of light at the photoelectric conversion element, and the detection accuracy of the focal position of the sample may deteriorate. Conversely, if the amount of light for focal position detection is increased, the amount of imaging light from the test object decreases, and it may be difficult to secure the amount of light at the time of imaging.

  The present invention has been made to solve the above problems, and an object of the present invention is to provide an image acquisition apparatus that can secure the amount of light during imaging and accurately detect the focal position of a sample, and a focusing method for the apparatus.

  In order to solve the above problems, an image acquisition apparatus according to the present invention comprises: a stage on which a sample is placed; a light source that irradiates light toward the sample; a light guide optical system including an objective lens disposed so as to face the sample on the stage and a light branching means that branches the optical image of the sample into a first optical path for image acquisition and a second optical path for focus control; a first imaging means for acquiring a first image from the first optical image branched into the first optical path; a second imaging means for acquiring a second image from the second optical image branched into the second optical path; a focal point calculation means that analyzes the second image and calculates focal point information of the sample based on the analysis result; and an optical path difference generating member that generates an optical path difference in the second optical image along the in-plane direction of the imaging surface of the second imaging means. The optical path difference generating member is arranged in the second optical path so as to form, on the imaging surface of the second imaging means, a first incident region on which the second optical image with the optical path difference is incident and a second incident region on which the second optical image without the optical path difference is incident. The focal point calculation means selects, as the second image used for the analysis, one of the second image acquired in the first incident region and the second image acquired in the second incident region.

  In this image acquisition apparatus, arranging the optical path difference generating member makes it possible to form an optical path length difference in the second optical image without branching the light in the second optical path for focus control. The amount of light diverted to the second optical path to obtain focal position information can therefore be kept small, and the amount of light for imaging by the first imaging means can be secured. Furthermore, the arrangement of the optical path difference generating member forms, on the imaging surface of the second imaging means, a first incident region on which the second optical image with the optical path difference is incident and a second incident region on which the second optical image without the optical path difference is incident. The focal point calculation means can thus select whether the second image incident on the first incident region or the second image incident on the second incident region is used for the analysis. Since an optimum focal point information calculation method can be selected according to, for example, the degree of unevenness of the sample, the focal point information can be calculated with high accuracy.

  It is preferable that the optical path difference generating member and the second imaging means be arranged in the second optical path so that light from the imaging position of the first imaging means in the sample enters the second incident region. In this case, the calculation process of the focal point information by the focal point calculation means can be simplified.

  It is also preferable that the apparatus further include an image evaluation means that evaluates the first image acquired by the first imaging means, and that the focal point calculation means select one of the second image acquired in the first incident region and the second image acquired in the second incident region based on the evaluation result of the first image by the image evaluation means. Depending on the calculation method of the focal point information, the calculation accuracy may decrease due to, for example, the degree of unevenness of the sample. In such a case, selecting which of the two second images is used for the analysis based on the evaluation result of the first image secures the accuracy of the focal point information.

  It is also preferable that the apparatus further include a visual field driving means that moves the visual field position of the objective lens with respect to the sample, and that the optical path difference generating member have a first portion whose thickness changes continuously along the moving direction of the second optical image on the imaging surface of the second imaging means as the visual field driving means moves the visual field position, and a second portion having a constant thickness, with the first portion forming the first incident region on the imaging surface and the second portion forming the second incident region on the imaging surface. In this case, the first incident region and the second incident region can be formed on the imaging surface of the second imaging means with a simple configuration of the optical path difference generating member.

  It is also preferable that the second imaging means have a two-dimensional image sensor that has a plurality of pixel rows and is capable of rolling readout, and acquire the second image by performing rolling readout of each pixel row in synchronization with the movement of the visual field position of the objective lens by the visual field driving means. In this case, since the optical path difference generating member is arranged in the second optical path, the image data from each pixel row includes contrast information equivalent to that obtained by changing the focal position of the objective lens for the same part of the sample. The focal point information can therefore be calculated quickly and accurately based on the contrast information.

  It is also preferable that, when the second image acquired in the first incident region is used for the analysis, the focal point calculation means calculate the focal point information of the sample based on the difference between the contrast values of the image data read out by at least two of the pixel rows of the two-dimensional image sensor.
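
  The difference-of-contrast decision described above can be sketched in code. The following Python fragment is purely illustrative (the patent specifies neither a contrast metric nor a sign convention; all names are hypothetical): it compares the contrast of a "front focus" pixel row and a "rear focus" pixel row and returns the direction in which the focal position should be moved.

```python
def contrast(values):
    """Simple contrast metric: sum of squared differences between neighboring pixels."""
    return sum((b - a) ** 2 for a, b in zip(values, values[1:]))

def focus_direction(front_row, rear_row, threshold=1e-6):
    """Compare the contrast of a front-focus and a rear-focus pixel row.

    Returns +1 or -1 to indicate the direction in which to move the focal
    position, or 0 when the two contrasts balance (in focus). The sign
    convention here is illustrative, not taken from the patent.
    """
    d = contrast(front_row) - contrast(rear_row)
    if abs(d) < threshold:
        return 0
    return 1 if d > 0 else -1
```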

  It is also preferable that, when the second image acquired in the second incident region is used for the analysis, the focal point calculation means create a focus map based on the calculated focal point information. In this case, the focus map can be created with high accuracy.
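
  The patent does not specify how the focus map is represented. As one illustrative possibility, focal point information gathered at several (x, y) stage positions can be fitted with a least-squares focal plane, which can then be evaluated at any position; the function below is a hypothetical sketch under that assumption.

```python
def fit_focus_plane(points):
    """Least-squares plane z = a*x + b*y + c through (x, y, z) focus points.

    Solves the 3x3 normal equations with a tiny Gaussian elimination
    (no pivoting checks; adequate for a sketch).
    """
    sxx = sum(x * x for x, y, z in points)
    sxy = sum(x * y for x, y, z in points)
    sx  = sum(x for x, y, z in points)
    syy = sum(y * y for x, y, z in points)
    sy  = sum(y for x, y, z in points)
    n   = len(points)
    sxz = sum(x * z for x, y, z in points)
    syz = sum(y * z for x, y, z in points)
    sz  = sum(z for x, y, z in points)
    m = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz]]
    for i in range(3):
        piv = m[i][i]
        m[i] = [v / piv for v in m[i]]
        for j in range(3):
            if j != i:
                f = m[j][i]
                m[j] = [vj - f * vi for vj, vi in zip(m[j], m[i])]
    return m[0][3], m[1][3], m[2][3]  # (a, b, c)
```

  With the fitted coefficients, the in-focus Z position at any stage coordinate is simply a*x + b*y + c.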

  A focusing method of an image acquisition apparatus according to the present invention is a focusing method for an apparatus comprising: a stage on which a sample is placed; a light source that irradiates light toward the sample; a light guide optical system including an objective lens disposed so as to face the sample on the stage and a light branching means that branches the optical image of the sample into a first optical path for image acquisition and a second optical path for focus control; a first imaging means for acquiring a first image from the first optical image branched into the first optical path; a second imaging means for acquiring a second image from the second optical image branched into the second optical path; a focal point calculation means that analyzes the second image and calculates focal point information of the sample based on the analysis result; and an optical path difference generating member that generates an optical path difference in the second optical image along the in-plane direction of the imaging surface of the second imaging means. The method comprises arranging the optical path difference generating member in the second optical path so as to form, on the imaging surface of the second imaging means, a first incident region on which the second optical image with the optical path difference is incident and a second incident region on which the second optical image without the optical path difference is incident, and having the focal point calculation means select, as the second image used for the analysis, one of the second image acquired in the first incident region and the second image acquired in the second incident region.

  In this focusing method of the image acquisition apparatus, arranging the optical path difference generating member makes it possible to form an optical path length difference in the second optical image without branching the light in the second optical path for focus control. The amount of light diverted to the second optical path to obtain focal position information can therefore be kept small, and the amount of light for imaging by the first imaging means can be secured. Furthermore, the arrangement of the optical path difference generating member forms, on the imaging surface of the second imaging means, a first incident region on which the second optical image with the optical path difference is incident and a second incident region on which the second optical image without the optical path difference is incident. The focal point calculation means can thus select whether the second image incident on the first incident region or the second image incident on the second incident region is used for the analysis. Since an optimum focal point information calculation method can be selected according to, for example, the degree of unevenness of the sample, the focal point information can be calculated with high accuracy.

  According to the present invention, it is possible to secure the amount of light at the time of imaging and to accurately detect the focal position of the sample.

FIG. 1 is a diagram showing an embodiment of a macro image acquisition device constituting an image acquisition device according to the present invention.
FIG. 2 is a diagram showing an embodiment of a micro image acquisition device constituting the image acquisition device according to the present invention.
FIG. 3 is a diagram showing an example of a second imaging device.
FIG. 4 is a diagram showing an example of the combination of an optical path difference generating member and a second imaging device.
FIG. 5 is a block diagram showing the functional components of the image acquisition device.
FIG. 6 is a diagram showing an example of scanning of the visual field of the objective lens with respect to a sample.
FIG. 7 is a diagram showing how the movement of a predetermined part of the sample within the visual field of the objective lens is synchronized with the rolling readout of the second imaging device: (a) shows the positional relationship between the visual field of the objective lens and a divided region; (b) shows the predetermined part of the sample for each pixel row and the exposure and readout timing of the image sensor.
FIG. 8 is a diagram showing the state subsequent to FIG. 7.
FIG. 9 is a diagram showing the state subsequent to FIG. 8.
FIG. 10 is a diagram showing the analysis result of the contrast value when the distance to the surface of the sample coincides with the focal length of the objective lens.
FIG. 11 is a diagram showing the analysis result of the contrast value when the distance to the surface of the sample is longer than the focal length of the objective lens.
FIG. 12 is a diagram showing the analysis result of the contrast value when the distance to the surface of the sample is shorter than the focal length of the objective lens.
FIG. 13 is a diagram showing an example of a focus map.
FIG. 14 is a diagram showing an example of the relationship between the visual field of the objective lens and focal point information acquisition positions.
FIG. 15 is a diagram showing an example of the contrast information processed by the focal point calculation unit.
FIG. 16 is a diagram showing another example of the relationship between the visual field of the objective lens and focal point information acquisition positions.
FIG. 17 is a flowchart showing the operation when dynamic prefocus is performed in the image acquisition apparatus shown in FIGS. 1 and 2.
FIG. 18 is a flowchart showing the operation when a focus map is created in the image acquisition apparatus shown in FIGS. 1 and 2.
FIG. 19 is a diagram showing a modification of the optical path difference generating member.
FIG. 20 is a diagram showing another modification of the optical path difference generating member.

  DESCRIPTION OF EMBODIMENTS Hereinafter, preferred embodiments of an image acquisition device and a focus method of the image acquisition device according to the present invention will be described in detail with reference to the drawings.

  FIG. 1 is a diagram showing an embodiment of a macro image acquisition device constituting an image acquisition device according to the present invention. FIG. 2 is a diagram showing an embodiment of a micro image acquisition device constituting the image acquisition device according to the present invention. As shown in FIGS. 1 and 2, the image acquisition device M includes a macro image acquisition device M1 that acquires a macro image of the sample S and a micro image acquisition device M2 that acquires a micro image of the sample S. The image acquisition device M is a device that, for example, sets a plurality of line-shaped divided regions 40 (see FIG. 6) on the macro image acquired by the macro image acquisition device M1, acquires an image of each divided region 40 at high magnification with the micro image acquisition device M2, and synthesizes the results to generate a virtual slide image, which is a digital image.

  As shown in FIG. 1, the macro image acquisition apparatus M1 includes a stage 1 on which a sample S is placed. The stage 1 is an XY stage that is driven in the horizontal direction by a motor or actuator such as a stepping motor (pulse motor) or a piezoelectric actuator. The sample S to be observed with the image acquisition device M is a biological sample such as a cell, for example, and is placed on the stage 1 in a state of being sealed with a slide glass. By driving the stage 1 in the XY plane, the imaging position with respect to the sample S can be moved.

  The stage 1 can reciprocate between the macro image acquisition device M1 and the micro image acquisition device M2, and has a function of transporting the sample S between both devices. In macro image acquisition, the entire image of the sample S may be acquired by one imaging, or the sample S may be divided into a plurality of regions and imaged. The stage 1 may be provided in both the macro image acquisition device M1 and the micro image acquisition device M2.

  A light source 2 that irradiates light toward the sample S and a condenser lens 3 that condenses the light from the light source 2 onto the sample S are disposed on the bottom side of the stage 1. The light source 2 may be arranged so as to irradiate light obliquely toward the sample S. A light guide optical system 4 that guides a light image from the sample S and an imaging device 5 that captures a light image of the sample S are disposed on the upper surface side of the stage 1. The light guide optical system 4 includes an imaging lens 6 that forms an optical image from the sample S on the imaging surface of the imaging device 5. The imaging device 5 is an area sensor that can acquire a two-dimensional image, for example. The imaging device 5 acquires the entire image of the light image of the sample S incident on the imaging surface via the light guide optical system 4 and stores it in a virtual slide image storage unit 39 described later.

  As shown in FIG. 2, the micro image acquisition device M2 has, on the bottom surface side of the stage 1, a light source 12 and a condenser lens 13 similar to those of the macro image acquisition device M1. A light guide optical system 14 that guides a light image from the sample S is disposed on the upper surface side of the stage 1. As the optical system that irradiates the sample S with light from the light source 12, an excitation light irradiation optical system for irradiating the sample S with excitation light or a dark field illumination optical system for acquiring a dark field image of the sample S may be employed.

  The light guide optical system 14 includes an objective lens 15 disposed so as to face the sample S, and a beam splitter (light branching means) 16 disposed downstream of the objective lens 15. The objective lens 15 is provided with a motor or actuator, such as a stepping motor (pulse motor) or a piezoelectric actuator, that drives the objective lens 15 in the Z direction along its optical axis. By changing the position of the objective lens 15 in the Z direction with this driving means, the focal position of imaging in the image acquisition of the sample S can be adjusted. The focal position may also be adjusted by changing the position of the stage 1 in the Z direction, or by changing the positions of both the objective lens 15 and the stage 1 in the Z direction. In any of these forms, the distance between the objective lens 15 and the stage 1 is adjusted.

  The beam splitter 16 is a part that branches the optical image of the sample S into a first optical path L1 for image acquisition and a second optical path L2 for focus control. The beam splitter 16 is disposed at an angle of about 45 degrees with respect to the optical axis from the light source 12. In FIG. 2, the optical path passing through the beam splitter 16 is the first optical path L1, and the optical path reflected by the beam splitter 16 is the second optical path L2.

  In the first optical path L1, an imaging lens 17 that forms an optical image (first optical image) of the sample S that has passed through the beam splitter 16, and a first imaging device (first imaging means) 18 whose imaging surface is disposed at the imaging position of the imaging lens 17, are arranged. The first imaging device 18 is a device that can acquire a one-dimensional image (first image) from the first optical image of the sample S; for example, a line scan image sensor such as a two-dimensional CCD image sensor capable of TDI (Time Delay Integration) driving or a line sensor is used. If the images of the sample S are acquired sequentially while the stage 1 is controlled at a constant speed, the first imaging device 18 may instead be a device that can acquire a two-dimensional image, such as a CMOS sensor or a CCD sensor. The first images captured by the first imaging device 18 are sequentially stored in a temporary storage memory such as a line buffer, then compressed and output to an image generation unit 38 described later.

  In the second optical path L2, on the other hand, a field adjustment lens 19 that reduces the optical image (second optical image) of the sample reflected by the beam splitter 16 and a second imaging device (second imaging means) 20 are arranged. In addition, an optical path difference generating member 21 that causes an optical path difference in the second optical image is disposed in front of the second imaging device 20. The field adjustment lens 19 is preferably configured so that the second optical image is formed on the second imaging device 20 with approximately the same size as the first optical image.

  The second imaging device 20 is a device that can acquire a two-dimensional image (second image) from the second optical image of the sample S. The second imaging device 20 has a two-dimensional image sensor that has a plurality of pixel rows and can perform rolling readout; an example of such an image sensor is a CMOS image sensor. The imaging surface 20a of the second imaging device 20 is disposed so as to substantially coincide with the XZ plane orthogonal to the second optical path L2. On the imaging surface 20a, as shown in FIG. 3A, a plurality of pixel rows 20b, each consisting of pixels arranged in a direction perpendicular to the readout direction, are arranged along the readout direction.

  In the second imaging device 20, as shown in FIG. 3B, a reset signal, a readout start signal, and a readout end signal are output based on the drive period of the drive clock, and the exposure and readout of each pixel row 20b are controlled by these signals. The exposure period of one pixel row 20b is the period from the discharge of charge triggered by the reset signal to the readout of charge triggered by the readout start signal. The readout period of one pixel row 20b is the period from the start of charge readout triggered by the readout start signal to the end of charge readout triggered by the readout end signal. The readout start signal for the next pixel row can also serve as the readout end signal.

  In rolling readout, the readout start signals for the pixel rows 20b are output sequentially with a predetermined time difference. The readout speed in rolling readout is controlled by the time interval between the readout start signals for the pixel rows. Shortening the time interval between readout start signals increases the readout speed; lengthening it decreases the readout speed. The readout interval between adjacent pixel rows 20b can be adjusted by, for example, adjusting the frequency of the drive clock, setting a delay period within the readout period, or changing the number of clocks that define the readout start signal.
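
  The timing relationships above can be sketched as follows. This Python fragment is only an illustrative model (the function and parameter names are assumptions, not from the patent): each pixel row is reset, exposed, and read out with a fixed stagger, and shortening the stagger between readout start signals raises the effective readout speed.

```python
def rolling_schedule(n_rows, exposure, start_interval, readout_time):
    """Return (reset_t, read_start_t, read_end_t) for each pixel row.

    Each row's reset is staggered by start_interval from the previous row,
    its exposure runs from reset to readout start, and its readout lasts
    readout_time. Times are in arbitrary units.
    """
    schedule = []
    for row in range(n_rows):
        reset = row * start_interval          # staggered reset signal
        read_start = reset + exposure         # exposure ends at readout start
        read_end = read_start + readout_time  # readout end signal
        schedule.append((reset, read_start, read_end))
    return schedule

# Shortening the start interval raises the readout speed, lengthening it lowers it.
fast = rolling_schedule(4, exposure=10.0, start_interval=1.0, readout_time=0.5)
slow = rolling_schedule(4, exposure=10.0, start_interval=2.0, readout_time=0.5)
```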

  The optical path difference generating member 21 is a glass member that generates an optical path difference in the second optical image along the in-plane direction of the imaging surface 20a. In the example shown in FIG. 4A, the optical path difference generating member 21 is configured by combining a first member 22 having a prism shape with a right-triangle cross section and a flat-plate-shaped second member 23. The first member 22 is arranged so as to correspond to one end side region (the upper half region) of the imaging surface 20a, with its thickness increasing continuously along the in-plane direction of the imaging surface 20a, that is, along the moving direction (Z direction) of the second optical image on the imaging surface 20a accompanying the scanning of the sample S. The second member 23 has a constant thickness and is arranged so as to correspond to the other end side region (the lower half region) of the imaging surface 20a.

  The optical path difference generating member 21 and the second imaging device 20 are arranged so that light from the planned imaging position (the position to be imaged) of the first imaging device 18 in the sample S enters the one end side region (upper half region) of the imaging surface 20a, and so that light from the imaging position of the first imaging device 18 in the sample S enters the other end side region (lower half region) of the imaging surface 20a.

  With this arrangement of the optical path difference generating member 21, as shown in FIG. 4B, the imaging surface 20a of the second imaging device 20 is divided into a first incident region 24A on which the second optical image with the optical path difference is incident and a second incident region 24B on which the second optical image without the optical path difference is incident. In the first incident region 24A, the optical path of the second optical image incident on the imaging surface 20a becomes longer from the upper end side toward the center side. In the second incident region 24B, the second optical image incident on the imaging surface 20a has the same optical path length at every position. The first member 22 and the second member 23 are preferably arranged so that their surfaces facing the second imaging device 20 are parallel to the imaging surface 20a of the second imaging device 20. This reduces the refraction of light at the surfaces facing the second imaging device 20 and secures the amount of light received by the second imaging device 20.

  In the optical path difference generating member 21, the thickness of the second member 23 is preferably equal to the thickness of the first member 22 at the position of the pixel row 20b corresponding to the focus center. The optical path length of the second optical image incident on the second incident region 24B then becomes equal to the optical path length of the first optical image incident on the first imaging device 18, which simplifies the calculation of the focal point information by the focal point calculation unit 37.
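
  As a rough illustration of the geometry described above: the extra optical path introduced by a glass element of thickness t is approximately (n − 1)·t, so the wedge-shaped first member 22 yields a linearly increasing optical path across the first incident region 24A while the flat second member 23 yields a constant one. The sketch below is hypothetical (the patent gives no refractive index or dimensions; all parameters are assumed).

```python
def extra_optical_path(z_frac, wedge_max_t, flat_t, n_glass=1.5):
    """Extra optical path introduced at fractional position z_frac in [0, 1]
    along the imaging surface.

    The upper half (z_frac < 0.5) sits behind the wedge-shaped member, whose
    thickness ramps linearly from 0 at the upper end to wedge_max_t at the
    center; the lower half sits behind a flat member of thickness flat_t.
    """
    if z_frac < 0.5:
        t = wedge_max_t * (z_frac / 0.5)  # linear ramp across the upper half
    else:
        t = flat_t                        # constant thickness in the lower half
    return (n_glass - 1.0) * t            # extra path relative to air
```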

  FIG. 5 is a block diagram showing the functional components of the image acquisition apparatus. As shown in the figure, the image acquisition apparatus M includes a computer system having a CPU, a memory, a communication interface, a storage unit such as a hard disk, an operation unit 31 such as a keyboard, a monitor 32, and the like. As functional components of the control unit 33, the image acquisition apparatus M includes a stage driving unit 34, an objective lens driving unit 35, an operation control unit 36, a focal point calculation unit 37, an image generation unit 38, a virtual slide image storage unit 39, and an image evaluation unit 41.

  The stage driving unit 34 is a part that functions as a visual field driving means that moves the visual field position of the objective lens 15 with respect to the sample S. The stage driving unit 34 is configured by, for example, a motor or actuator such as a stepping motor (pulse motor) or a piezoelectric actuator. Based on control by the operation control unit 36, the stage driving unit 34 moves the stage 1 in the XY directions within a plane that forms a predetermined angle (for example, 90 degrees) with the optical axis of the objective lens 15. As a result, the sample S fixed to the stage 1 moves with respect to the optical axis of the objective lens 15, and the visual field position of the objective lens 15 with respect to the sample S moves.

  More specifically, the stage driving unit 34 scans the stage 1 on which the sample S is placed at a predetermined speed based on control by the operation control unit 36. This scanning of the stage 1 sequentially moves the imaging visual field of the sample S relative to the first imaging device 18 and the second imaging device 20. To image the entire sample S, the operation control unit 36 controls the image acquisition apparatus M so that the visual field position of the objective lens 15 with respect to the sample S moves in the scanning direction along imaging lines Ln (n is a natural number), each composed of a plurality of divided regions 40. For the movement of the visual field position of the objective lens 15 with respect to the sample S between adjacent imaging lines Ln, a unidirectional scan, in which the scanning direction is the same for adjacent imaging lines Ln, is adopted, for example, as shown in FIG. 6.

  The scanning speed of the stage 1 during image acquisition is constant, but in practice there is a period immediately after the start of scanning in which the scanning speed is unstable due to the influence of stage vibration. For this reason, it is preferable to set a scanning width longer than the divided region 40, so that the acceleration period in which the stage 1 accelerates, the stabilization period until the scanning speed of the stage 1 stabilizes, and the deceleration period in which the stage 1 decelerates each occur while scanning outside the divided region 40. An image can then be acquired during the constant speed period in which the scanning speed of the stage 1 is constant. Alternatively, imaging may be started during the stabilization period, and the data portion acquired during the stabilization period may be deleted after image acquisition. Such a method is suitable when using an imaging device that requires idle readout of data.
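
  The layout of one imaging-line scan can be illustrated as follows. The phase lengths below are hypothetical parameters; the point is only that the constant-speed imaging window is preceded by acceleration and stabilization margins and followed by a deceleration margin, all of which fall outside the divided region 40.

```python
def scan_segments(region_len, accel_len, stabilize_len, decel_len):
    """Positions (start, end) of each phase of one imaging-line scan.

    Acceleration, stabilization, and deceleration are kept outside the
    divided region so that imaging happens only at constant speed.
    Units are arbitrary length units along the scanning direction.
    """
    a0, a1 = 0.0, accel_len
    s0, s1 = a1, a1 + stabilize_len
    img0, img1 = s1, s1 + region_len        # constant-speed imaging window
    d0, d1 = img1, img1 + decel_len
    return {"accelerate": (a0, a1), "stabilize": (s0, s1),
            "image": (img0, img1), "decelerate": (d0, d1)}
```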

  Similar to the stage drive unit 34, the objective lens drive unit 35 is configured by a motor or actuator such as a stepping motor (pulse motor) or a piezoelectric actuator. The objective lens driving unit 35 moves the objective lens 15 in the Z direction along the optical axis of the objective lens 15 based on the control by the operation control unit 36. Thereby, the focal position of the objective lens 15 with respect to the sample S moves.

  It is preferable that the objective lens driving unit 35 does not drive the objective lens 15 while the in-focus calculation unit 37 is analyzing the focal position, and drives the objective lens 15 in the Z direction only in one direction until the next analysis of the focal position is started. In this case, during the scanning of the sample S, analysis periods of the focal position and objective lens driving periods based on the analysis results occur alternately. By not changing the positional relationship between the objective lens 15 and the sample S during analysis of the focal position, the analysis accuracy of the focal position can be ensured.

  The operation control unit 36 is a part that controls the operations of the second imaging device 20 and the stage driving unit 34. More specifically, the operation control unit 36 synchronizes the movement of a predetermined portion of the sample S within the visual field V of the objective lens 15 by the stage driving unit 34 with the rolling readout of the second imaging device 20, so that the optical image of the predetermined portion of the sample S is exposed by each pixel row 20b of the second imaging device 20.

  As shown in FIG. 7A, when the visual field V of the objective lens 15 moves within one divided region 40, the operation control unit 36 controls the stage driving unit 34 so that the sample S moves within the visual field V of the objective lens 15 at a constant speed. Further, as shown in FIG. 7B, the operation control unit 36 controls the stage driving unit 34 and the second imaging device 20 so that the moving direction of the image Sb of the optical image of the sample S on the imaging surface 20a of the second imaging device 20 matches the readout direction of each pixel row 20b on the imaging surface 20a. When an imaging device that allows the readout speed of the rolling readout to be set variably is used, the operation control unit 36 may change the readout speed of the rolling readout based on the moving speed of the sample S within the visual field V of the objective lens 15.

  The exposure time in each pixel row 20b is set based on at least the width of the predetermined portion Sa of the sample S in the scanning direction and the moving speed of the predetermined portion Sa of the sample S within the field of view V of the objective lens 15. More preferably, the magnifications of the objective lens 15 and the field adjustment lens 19 are also considered. Thereby, the optical image of the predetermined part Sa of the sample S can be exposed by each pixel row 20b.
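As a hedged illustration of the setting just described (the function and variable names are hypothetical, not from the disclosure), the per-row exposure time can be derived from the scanning-direction width of the portion Sa and its moving speed; the magnification enters both the image-side width and the image-side speed, so the ratio is unchanged:

```python
def row_exposure_time_s(width_sa_um, stage_speed_um_s, magnification=1.0):
    """Exposure time so that each pixel row 20b integrates light from the
    portion Sa for the whole time its image Sb covers the row.

    On the image side, both the width of Sb and its speed across the
    imaging surface 20a scale by the combined magnification of the
    objective lens 15 and the field adjustment lens 19, so the dwell
    time equals the sample-side ratio width / speed.
    """
    image_width = width_sa_um * magnification
    image_speed = stage_speed_um_s * magnification
    return image_width / image_speed  # == width_sa_um / stage_speed_um_s
```

For example, a 5 um wide portion scanned at 1000 um/s yields a 5 ms exposure per row regardless of magnification.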

  At time T1, as shown in FIG. 7B, when the image Sb of light from the predetermined portion Sa of the sample S on the imaging surface 20a of the second imaging device 20 reaches the first pixel row 20b of the imaging region, exposure of the first pixel row 20b is started. At time T2, as shown in FIG. 8A, the position of the predetermined portion Sa of the sample S within the visual field V of the objective lens 15 moves in the scanning direction. At this time, as shown in FIG. 8B, the image Sb of light from the predetermined portion Sa of the sample S reaches the second pixel row 20b of the imaging region, and exposure of the second pixel row 20b is started. Further, at the timing when the image Sb of light from the predetermined portion Sa of the sample S passes the first pixel row 20b, readout of the first pixel row 20b is started.

  At time T3, as shown in FIG. 9A, the position of the predetermined portion Sa of the sample S within the visual field V of the objective lens 15 moves further in the scanning direction. At this time, as shown in FIG. 9B, the image Sb of light from the predetermined portion Sa of the sample S reaches the third pixel row 20b of the imaging region, and exposure of the third pixel row 20b is started. Further, at the timing when the image Sb of light from the predetermined portion Sa of the sample S passes the second pixel row 20b, readout of the second pixel row 20b is started. Readout of the first pixel row 20b is completed at the same time as readout of the second pixel row 20b.

  Thereafter, the movement of the predetermined portion Sa of the sample S within the visual field V of the objective lens 15 and the rolling readout of the pixel rows 20b are performed by the same procedure until a predetermined number of pixel rows is reached. The image data read from each pixel row 20b are all image data for the same portion of the sample S. Further, since the optical path difference generating member 21 is arranged in the second optical path L2, the image data read from each pixel row 20b contain contrast information equivalent to that obtained when the focal position of the objective lens 15 is changed for the same portion of the sample S. The image data read by each pixel row 20b are sequentially output to the in-focus calculation unit 37.
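The synchronization between the moving image Sb and the rolling readout can be sketched as a timing schedule. This minimal Python sketch is an assumption-laden illustration (names and the uniform-motion model are not from the disclosure): each row's exposure begins exactly when Sb, moving at constant image-side speed, reaches that row.

```python
def row_start_times(n_rows, row_pitch_um, image_speed_um_s):
    """Start-of-exposure time (s) for each pixel row, chosen so that row j
    begins exposing exactly when the moving image Sb reaches it, i.e. the
    rolling readout is synchronized with the stage scan.

    n_rows           : number of pixel rows read out
    row_pitch_um     : row-to-row pitch on the imaging surface (um)
    image_speed_um_s : speed of the image Sb on the imaging surface (um/s)
    """
    dt = row_pitch_um / image_speed_um_s  # time for Sb to advance one row
    return [j * dt for j in range(n_rows)]
```

With a 10 um row pitch and an image speed of 1000 um/s, successive rows start 10 ms apart, matching the progression from T1 to T3 in FIGS. 7 to 9.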

  When the objective lens driving unit 35 can move the light guide optical system 14 including the objective lens 15 in the X and Y directions, the operation control unit 36 may synchronize the movement of the predetermined portion of the sample S within the visual field V of the objective lens 15 by the objective lens driving unit 35 with the rolling readout of the second imaging device 20, so that the optical image of the predetermined portion of the sample S is exposed by each pixel row 20b of the second imaging device 20. In this case, the objective lens driving unit 35 functions as a visual field driving unit that moves the visual field position of the objective lens 15 with respect to the sample S.

  The in-focus calculation unit 37 is a part that analyzes the second image acquired by the second imaging device 20 and calculates the in-focus information of the sample S based on the analysis result. Based on instruction information from the image evaluation unit 41, the in-focus calculation unit 37 selects, as the second image used for analysis, one of the image data read from the pixel rows 20b belonging to the first incident region 24A on the imaging surface 20a and the image data read from the pixel rows 20b belonging to the second incident region 24B.

  When the image data read from the pixel rows 20b belonging to the first incident region 24A are used for analysis, the image acquisition apparatus M executes dynamic prefocus, in which the in-focus information of the sample S in the divided region 40 to be imaged next is acquired immediately before imaging by the first imaging device 18. In this case, the in-focus calculation unit 37 uses, for example, a front pin / rear pin method as the method for calculating the in-focus information.

  When the front pin / rear pin method is used, the in-focus calculation unit 37 selects at least two of the pixel rows 20b of the second imaging device 20. As described above, the optical path difference generating member 21, whose thickness continuously increases along the moving direction (Z direction) of the second optical image on the imaging surface 20a accompanying the scanning of the sample S, is arranged in the second optical path L2, and the second optical image in which the optical path difference has occurred enters the first incident region 24A of the imaging surface 20a of the second imaging device 20. Therefore, in the first incident region 24A, depending on the positions of the two pixel rows 20b to be selected, an optical image focused in front of the first optical image incident on the first imaging device 18 (front pin) and an optical image focused behind it (rear pin) can be obtained. The in-focus calculation unit 37 obtains the difference between the contrast values of the image data read by the selected pixel rows 20b.

  As shown in FIG. 10, when the focal position of the objective lens 15 coincides with the surface of the sample S, the image contrast value of the front pin and the image contrast value of the rear pin substantially coincide, and the difference between these values is nearly zero. On the other hand, as shown in FIG. 11, when the distance to the surface of the sample S is longer than the focal length of the objective lens 15, the image contrast value of the rear pin becomes larger than that of the front pin, and the difference value is positive. In this case, the in-focus calculation unit 37 outputs, to the objective lens driving unit 35, instruction information to drive the objective lens 15 in a direction approaching the sample S. Conversely, as shown in FIG. 12, when the distance to the surface of the sample S is shorter than the focal length of the objective lens 15, the image contrast value of the rear pin is smaller than that of the front pin, and the difference value is negative. In this case, the in-focus calculation unit 37 outputs, to the objective lens driving unit 35, instruction information to drive the objective lens 15 in a direction away from the sample S.
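The decision logic of FIGS. 10 to 12 can be summarized in a few lines. The following Python sketch is illustrative only (function name, return labels, and the tolerance are assumptions): the sign of the rear-pin minus front-pin contrast difference selects the drive direction for the objective lens 15.

```python
def drive_direction(front_contrast, rear_contrast, tol=1e-6):
    """Decide the objective-drive direction from the front pin / rear pin
    contrast difference.

    difference > 0 (rear sharper)  -> sample surface farther than the focal
                                      length: move the objective toward the sample
    difference < 0 (front sharper) -> sample surface nearer: move it away
    |difference| ~ 0               -> in focus: hold the current position
    """
    diff = rear_contrast - front_contrast
    if diff > tol:
        return "toward_sample"
    if diff < -tol:
        return "away_from_sample"
    return "hold"
```

For example, a rear-pin contrast of 0.7 against a front-pin contrast of 0.4 (FIG. 11's situation) yields `"toward_sample"`.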

  When the front pin / rear pin method is used, the in-focus calculation unit 37 selects the pixel row 20b corresponding to the front pin and the pixel row 20b corresponding to the rear pin so as to be symmetrical with respect to the pixel row 20b corresponding to the focus center. The pixel row 20b corresponding to the focus center refers to the pixel row 20b on which the optical image of the sample S is incident after passing through the second optical path L2 and the optical path difference generating member 21 with an optical path length that matches the optical path length of the optical image of the sample S captured by the first imaging device 18. For example, when the pixel row 20b corresponding to the focus center is the k-th pixel row 20b, the in-focus calculation unit 37 selects, for example, the (k−m)-th pixel row 20b and the (k+m)-th pixel row 20b. By setting m according to the degree of unevenness of the sample S, the accuracy of the in-focus information can be improved.
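The symmetric selection about the focus-center row can be written directly. This is a minimal sketch (0-based row indices and the bounds check are assumptions added for illustration):

```python
def select_pin_rows(k, m, n_rows):
    """Pick the front pin and rear pin rows symmetrically about the
    focus-center row k, as the (k - m)-th and (k + m)-th pixel rows.

    k      : index of the focus-center pixel row (0-based here)
    m      : half-spacing, set according to the unevenness of the sample S
    n_rows : total number of pixel rows on the sensor (bounds check)
    """
    front, rear = k - m, k + m
    if front < 0 or rear >= n_rows:
        raise ValueError("m too large for this focus-center row / sensor")
    return front, rear
```

With the focus center at row 10 and m = 3, rows 7 and 13 are selected; a larger m widens the front/rear focus offsets probed.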

  When the image data read from the pixel rows 20b belonging to the second incident region 24B are used for analysis, the image acquisition apparatus M obtains in-focus information at a plurality of locations of the sample S before imaging the sample S, creating a focus map in advance. In this case, the image acquisition apparatus M sets the image acquisition region P so that the entire sample S is included, based on the macro image of the sample S acquired by the macro image acquisition device M1. After the image acquisition region is set, a plurality of focus information acquisition positions Q spaced at equal intervals are set in a grid pattern within the image acquisition region P, for example, as shown in FIG. 13. The interval between the focus information acquisition positions Q is appropriately set according to the size of the sample S and the like; the interval need not be equal and may be set at random.
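A grid of positions Q over the region P, as in FIG. 13, can be generated as follows. The function name and the coordinate convention are illustrative assumptions, not part of the disclosure:

```python
def focus_positions(region_origin, region_size, pitch):
    """Equally spaced focus information acquisition positions Q laid out
    in a grid over the image acquisition region P.

    region_origin : (x0, y0) corner of the region P
    region_size   : (width, height) of the region P
    pitch         : spacing between adjacent positions Q
    """
    x0, y0 = region_origin
    w, h = region_size
    return [(x0 + i * pitch, y0 + j * pitch)
            for j in range(int(h // pitch) + 1)
            for i in range(int(w // pitch) + 1)]
```

A 2 x 2 region with unit pitch yields a 3 x 3 grid of nine positions; a random layout, as the text permits, would simply replace the comprehension with sampled coordinates.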

  In creating the focus map, the in-focus calculation unit 37 uses, for example, a contrast distribution method as the in-focus information calculation method. In this case, for example, as shown in FIG. 14, the visual field V of the objective lens 15 is set in the vicinity of the focus information acquisition position Q. Next, the stage 1 is driven by the stage drive unit 34, and the visual field V of the objective lens 15 is moved toward the focus information acquisition position Q at a constant speed.

  When the contrast distribution method is used, the in-focus calculation unit 37 acquires the contrast information of the image data from the plurality of pixel rows 20b of the second imaging device 20. The example illustrated in FIG. 15 shows the contrast values of the image data from the first pixel row 20b to the n-th pixel row 20b of the second imaging device 20, in which the contrast value of the image data in the i-th pixel row 20b is the peak value. In this case, the in-focus calculation unit 37 generates in-focus information taking, as the in-focus position, the focal position of the objective lens 15 at the time the predetermined portion Sa of the sample S was exposed in the i-th pixel row 20b. The contrast value may be the contrast value of a specific pixel among the pixels included in each pixel row 20b, or may be the average of the contrast values of all or some of the pixels included in each pixel row 20b.
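The peak search at the heart of the contrast distribution method reduces to an argmax over the per-row contrast values. A minimal sketch (the function name is an illustrative assumption):

```python
def peak_row(contrast_by_row):
    """Return the index i of the pixel row whose image data has the peak
    contrast value. In the contrast distribution method, the focal position
    of the objective when row i was exposed is taken as the in-focus position.
    """
    return max(range(len(contrast_by_row)), key=contrast_by_row.__getitem__)
```

For the FIG. 15-style distribution `[0.1, 0.3, 0.9, 0.4]`, the peak is at row index 2, so the focal position at the moment row 2 was exposed becomes the in-focus position.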

  In creating the focus map, the visual field V of the objective lens 15 may be set so as to include the focus information acquisition position Q, for example, as shown in FIG. 16. In this case, the stage 1 is not driven by the stage driving unit 34, and the second imaging device 20 acquires the second image while the objective lens driving unit 35 moves the focal position of the objective lens 15 by the depth of focus. The in-focus calculation unit 37 then generates in-focus information taking, as the in-focus position, the focal position of the objective lens 15 at the time the image data having the largest contrast value among the acquired image data was acquired.

  The image generation unit 38 is a part that combines the acquired images to generate a virtual slide image. The image generation unit 38 sequentially receives the first images output from the first imaging device 18, that is, the images of the respective divided regions 40, and combines them to synthesize the entire image of the sample S. Then, an image having a lower resolution than the composite image is created from it, and the high-resolution image and the low-resolution image are associated with each other and stored in the virtual slide image storage unit 39. The image acquired by the macro image acquisition device M1 may further be associated in the virtual slide image storage unit 39. The virtual slide image may be stored as a single image, or may be stored as a plurality of divided images.

  The image evaluation unit 41 is a part that evaluates the first image acquired by the first imaging device 18. When the image acquisition apparatus M calculates the in-focus information using the second image acquired in the first incident region 24A, the image evaluation unit 41 acquires the contrast value of the first image acquired by the first imaging device 18 based on that in-focus information. When the contrast value of the first image is equal to or greater than a predetermined threshold, the image evaluation unit 41 outputs, to the in-focus calculation unit 37, instruction information directing that the image data read from the pixel rows 20b belonging to the first incident region 24A be used for analysis. When the contrast value of the first image is less than the predetermined threshold, the image evaluation unit 41 outputs, to the in-focus calculation unit 37, instruction information directing that the image data read from the pixel rows 20b belonging to the second incident region 24B be used for analysis.
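The switching rule of the image evaluation unit 41 is a simple threshold test. A one-function sketch (function name and region labels are illustrative):

```python
def choose_incident_region(first_image_contrast, threshold):
    """Image-evaluation logic: keep using the first incident region 24A
    (dynamic prefocus) while the first image retains enough contrast;
    otherwise fall back to the second incident region 24B (focus-map
    creation before imaging)."""
    return "24A" if first_image_contrast >= threshold else "24B"
```

A sample with low inherent contrast thus automatically routes the apparatus into the focus-map mode described below.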

  Subsequently, a focusing operation in the above-described image acquisition apparatus M will be described.

  In the image acquisition apparatus M, normally, the in-focus information is calculated using the second image acquired in the first incident region 24A, and dynamic prefocus is performed in which the in-focus information of the sample S in the divided region 40 to be imaged next is acquired immediately before imaging by the first imaging device 18. In this case, as shown in FIG. 17, when movement of the stage 1 by the stage driving unit 34 is started, the visual field V of the objective lens 15 moves along one imaging line Ln (step S11). Further, the movement of the predetermined portion Sa of the sample S within the visual field V of the objective lens 15 is synchronized with the rolling readout of the second imaging device 20, so that the image Sb of the optical image from the predetermined portion Sa of the sample S is exposed in each pixel row 20b of the second imaging device 20 (step S12).

  Then, the in-focus information in the divided region 40 is calculated based on the contrast values of the image data acquired by the pixel rows 20b corresponding to the front pin and the rear pin (step S13), the focal position of the objective lens 15 is adjusted based on the calculated in-focus information, and the divided region 40 is imaged (step S14). Thereafter, it is determined whether or not the calculation of the in-focus information has been completed for the desired imaging lines Ln (step S15). If the acquisition of the in-focus information has not been completed, the visual field V of the objective lens 15 is moved to the next imaging line Ln (step S16), and the processing of steps S11 to S15 is repeatedly executed.

  Further, in the image acquisition apparatus M, when the image evaluation unit 41 evaluates that the contrast value of the first image captured by the first imaging device 18 is less than the predetermined threshold, the processing is switched so that the image data read from the pixel rows 20b belonging to the second incident region 24B are used for analysis, and a focus map of the sample S is created before the sample S is imaged. In this case, as shown in FIG. 18, when movement of the stage 1 by the stage driving unit 34 is started, the visual field V of the objective lens 15 moves along one imaging line Ln (step S21). Further, the movement of the predetermined portion Sa of the sample S within the visual field V of the objective lens 15 is synchronized with the rolling readout of the second imaging device 20, so that the image Sb of the optical image from the predetermined portion Sa of the sample S is exposed in each pixel row 20b of the second imaging device 20 (step S22). Then, in-focus information at the focus information acquisition position Q is calculated based on the peak of the contrast values of the image data acquired at each pixel row 20b (step S23).

  After the in-focus information is calculated, it is determined whether or not the calculation of the in-focus information has been completed for the desired imaging lines Ln (step S24). If the acquisition of the in-focus information has not been completed, the visual field V of the objective lens 15 moves to the next imaging line Ln (step S25), and the processing of steps S21 to S24 is repeatedly executed. When the acquisition of the in-focus information is completed, a focus map is created based on the in-focus information at each focus information acquisition position Q (step S26).
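Once built, a focus map is consulted during imaging to set the focal position at each field. How the map is interpolated is not specified in the text; the sketch below assumes a simple nearest-neighbour lookup over the positions Q (the data layout and function name are illustrative assumptions):

```python
def focus_map_lookup(focus_map, x, y):
    """Nearest-neighbour lookup into a focus map.

    focus_map : dict mapping a position Q (xq, yq) to the in-focus
                z position calculated at that Q (illustrative layout)
    x, y      : field position whose focus is wanted
    """
    qx, qy = min(focus_map, key=lambda q: (q[0] - x) ** 2 + (q[1] - y) ** 2)
    return focus_map[(qx, qy)]
```

In practice a smoother scheme (e.g. bilinear or plane fitting over neighbouring Q positions) could replace the nearest-neighbour rule without changing the map itself.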

  As described above, in the image acquisition apparatus M, the optical path length difference of the second optical image can be formed by the arrangement of the optical path difference generating member 21, without branching the light in the second optical path L2 for focus control. Therefore, the amount of light directed to the second optical path L2 that is necessary for obtaining the focal position information is suppressed, and the amount of light available when the first imaging device 18 performs imaging can be secured. Further, in the image acquisition apparatus M, the arrangement of the optical path difference generating member 21 forms, on the imaging surface 20a of the second imaging device 20, a first incident region 24A on which the second optical image in which the optical path difference has occurred is incident and a second incident region 24B on which the second optical image in which no optical path difference occurs is incident. Accordingly, the in-focus calculation unit 37 can select which of the second image incident on the first incident region 24A and the second image incident on the second incident region 24B is used for the analysis. Thus, for example, an optimum method for calculating the in-focus information can be selected according to the degree of unevenness of the sample S, so that the in-focus information can be calculated with high accuracy.

  The image acquisition apparatus M further includes the image evaluation unit 41 that evaluates the first image acquired by the first imaging device 18. The in-focus calculation unit 37 performs dynamic prefocus by analyzing the second image acquired in the first incident region 24A when the contrast value of the first image evaluated by the image evaluation unit 41 is equal to or greater than the predetermined threshold, and creates a focus map by analyzing the second image acquired in the second incident region 24B when the contrast value of the first image evaluated by the image evaluation unit 41 is less than the predetermined threshold. By selecting, based on the evaluation result of the first image, which of the second image acquired in the first incident region 24A and the second image acquired in the second incident region 24B is used for the analysis, the accuracy of the in-focus information can be secured.

  The image acquisition apparatus M further includes the stage driving unit 34 that moves the visual field position of the objective lens 15 with respect to the sample S. The optical path difference generating member 21 includes a first member 22 whose thickness continuously changes along the moving direction of the second optical image on the imaging surface 20a of the second imaging device 20 accompanying the movement of the visual field position of the objective lens 15 by the stage driving unit 34, and a second member 23 having a constant thickness. The first member 22 forms the first incident region 24A on the imaging surface 20a, and the second member 23 forms the second incident region 24B on the imaging surface 20a. Accordingly, the first incident region 24A and the second incident region 24B can be formed on the imaging surface 20a of the second imaging device 20 with a simple configuration of the optical path difference generating member 21.
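The wedge-shaped first member 22 works because glass of thickness t adds an optical path of (n − 1)·t relative to air, so a continuously varying thickness yields a continuously varying optical path difference across the rows of the first incident region 24A. A one-line sketch (the refractive index value is an assumption for illustration; the patent does not specify the glass):

```python
def optical_path_difference(thickness_um, refractive_index=1.5):
    """Extra optical path introduced by glass of the given thickness
    compared with the same distance in air: OPD = (n - 1) * t.
    In the first member 22 the thickness, and hence the OPD, varies
    continuously along the image moving direction."""
    return (refractive_index - 1.0) * thickness_um
```

For example, 100 um of n = 1.5 glass adds 50 um of optical path, while the constant-thickness second member 23 adds the same fixed amount at every row of the second incident region 24B.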

  Further, in the image acquisition apparatus M, the second imaging device 20 has a two-dimensional image sensor that has a plurality of pixel rows 20b and is capable of rolling readout, and acquires the second image by performing rolling readout of each pixel row 20b in synchronization with the movement of the visual field position of the objective lens 15 by the stage driving unit 34. Since the optical path difference generating member 21 is arranged in the second optical path L2, the image data from each pixel row 20b contain contrast information equivalent to that obtained when the focal position of the objective lens 15 is changed for the same portion of the sample S. Therefore, the in-focus information can be calculated quickly and accurately based on the contrast information. Even when the focus map is created, it is not necessary to move the focal position of the objective lens 15, so that the in-focus information at the focus information acquisition positions Q can be quickly calculated.

  The present invention is not limited to the above embodiment. For example, in the above embodiment, the optical path difference generating member 21 is formed by combining the first member 22, whose thickness continuously changes along the moving direction of the second optical image, and the second member 23, which has a constant thickness. However, the optical path difference generating member may take other forms as long as it forms, on the imaging surface 20a of the second imaging device 20, a first incident region 24A on which the second optical image in which the optical path difference has occurred is incident and a second incident region 24B on which the second optical image in which no optical path difference occurs is incident. For example, as shown in FIG. 19A, an optical path difference generating member 51 composed only of the first member 22 may be used. In this case as well, as shown in FIG. 19B, the first incident region 24A is formed on the imaging surface 20a of the second imaging device 20 corresponding to the first member 22, and the second incident region 24B is formed corresponding to the portion where the optical path difference generating member 51 is not disposed.

  In the above embodiment, the first incident region 24A is formed in the upper half region of the imaging surface 20a and the second incident region 24B is formed in the lower half region, along the moving direction (Z direction) of the second optical image on the imaging surface 20a accompanying the scanning of the sample S. However, the positions where the first incident region 24A and the second incident region 24B are formed on the imaging surface 20a are not limited to this. For example, as in the optical path difference generating member 61 shown in FIG. 20A, a first member 62 having a prism shape with a right-triangular cross section and a flat plate-like second member 63 may be combined in the longitudinal direction of the pixel rows 20b. In this case, as shown in FIG. 20B, the first incident region 24A is formed in the left half region of the imaging surface 20a, and the second incident region 24B is formed in the right half region of the imaging surface 20a. Even with such a configuration, the same effects as those of the above embodiment can be obtained.

  DESCRIPTION OF SYMBOLS: 1 ... Stage, 12 ... Light source, 14 ... Light guide optical system, 15 ... Objective lens, 16 ... Beam splitter (light branching means), 18 ... First imaging device (first imaging means), 20 ... Second imaging device (second imaging means), 20a ... Imaging surface, 20b ... Pixel row, 21, 51, 61 ... Optical path difference generating member, 22, 62 ... First member (first portion), 23, 63 ... Second member (second portion), 24A ... First incident region, 24B ... Second incident region, 34 ... Stage driving unit (visual field driving means), 36 ... Operation control unit (operation control means), 37 ... In-focus calculation unit (in-focus calculation means), 40 ... Divided region, 41 ... Image evaluation unit (image evaluation means), L1 ... First optical path, L2 ... Second optical path, M ... Image acquisition device, M1 ... Macro image acquisition device, M2 ... Micro image acquisition device, S ... Sample, Sa ... Predetermined portion, V ... Field of view of objective lens

Claims (8)

  1. A stage on which the sample is placed;
    A light source that emits light toward the sample;
    A light guide optical system including an objective lens disposed so as to face the sample on the stage, and light branching means for branching the optical image of the sample into a first optical path for image acquisition and a second optical path for focus control;
    First imaging means for acquiring a first image by a first optical image branched into the first optical path;
    A second imaging means for acquiring a second image by a second optical image branched into the second optical path;
    In-focus calculation means for analyzing the second image and calculating in-focus information of the sample based on the analysis result;
    An optical path difference generating member that generates an optical path difference in the second optical image along the in-plane direction of the imaging surface of the second imaging means,
    The optical path difference generating member being disposed in the second optical path so as to form, on the imaging surface of the second imaging means, a first incident region on which the second optical image in which an optical path difference has occurred is incident and a second incident region on which the second optical image in which no optical path difference occurs is incident,
    An image acquisition apparatus, wherein the in-focus calculation means selects, as the second image used for the analysis, one of the second image acquired in the first incident region and the second image acquired in the second incident region.
  2.   The image acquisition apparatus according to claim 1, wherein the optical path difference generating member and the second imaging means are arranged in the second optical path so that light from the imaging position of the first imaging means in the sample is incident on the second incident region.
  3. The image acquisition apparatus according to claim 1 or 2, further comprising image evaluation means for evaluating the first image acquired by the first imaging means,
    wherein the in-focus calculation means selects, based on the evaluation result of the first image by the image evaluation means, one of the second image acquired in the first incident region and the second image acquired in the second incident region.
  4. Visual field driving means for moving the visual field position of the objective lens relative to the sample;
    The optical path difference generating member has a first portion whose thickness continuously changes along the moving direction of the second optical image on the imaging surface of the second imaging means accompanying the movement of the visual field position of the objective lens by the visual field driving means, and a second portion having a constant thickness, and
    The first portion forms the first incident region on the imaging surface, and the second portion forms the second incident region on the imaging surface. The image acquisition apparatus according to any one of claims 1 to 3.
  5.   The image acquisition apparatus according to claim 4, wherein the second imaging means has a two-dimensional image sensor that has a plurality of pixel rows and is capable of rolling readout, and acquires the second image by performing rolling readout of each pixel row in synchronization with the movement of the visual field position of the objective lens by the visual field driving means.
  6.   The image acquisition apparatus according to claim 5, wherein, when the second image acquired in the first incident region is used for analysis, the in-focus calculation means calculates the in-focus information of the sample based on the difference in contrast value between the image data read from at least two of the pixel rows of the two-dimensional image sensor.
  7.   The image acquisition apparatus according to claim 5, wherein, when the second image acquired in the second incident region is used for analysis, the in-focus calculation means creates a focus map based on the calculated in-focus information.
  8. A stage on which the sample is placed;
    A light source that emits light toward the sample;
    A light guide optical system including an objective lens disposed so as to face the sample on the stage, and light branching means for branching the optical image of the sample into a first optical path for image acquisition and a second optical path for focus control;
    First imaging means for acquiring a first image by a first optical image branched into the first optical path;
    A second imaging means for acquiring a second image by a second optical image branched into the second optical path;
    In-focus calculation means for analyzing the second image and calculating in-focus information of the sample based on the analysis result;
    An optical path difference generating member that generates an optical path difference in the second optical image along an in-plane direction of an imaging surface of the second imaging unit,
    A first incident area where the second optical image in which an optical path difference has occurred is incident on an imaging surface of the second imaging means, and a second incident in which the second optical image without an optical path difference is incident. The optical path difference generating member is disposed in the second optical path so that a region is formed,
    One of the second image acquired in the first incident area and the second image acquired in the second incident area is used as the second image used for analysis. A focus method for an image acquisition apparatus, characterized by causing a calculation means to select.
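The selection step that closes claim 8 can be pictured as a simple dispatcher: the region with the optical path difference serves situations such as focus tracking during scanning, while the region without it serves situations such as focus-map acquisition. The mode names below are purely illustrative, not terminology from the patent:

```python
def select_second_image(mode: str, first_region_image, second_region_image):
    """Select which part of the second image the focus calculation
    analyzes.

    mode: 'dynamic' -> first incident region (with optical path
          difference), e.g. for pre-focus while scanning;
          'static'  -> second incident region (no optical path
          difference), e.g. for building a focus map.
    """
    if mode == 'dynamic':
        return first_region_image
    if mode == 'static':
        return second_region_image
    raise ValueError(f"unknown mode: {mode}")
```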
JP2013122960A 2013-06-11 2013-06-11 Image acquisition device and focus method of image acquisition device Active JP6010506B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2013122960A JP6010506B2 (en) 2013-06-11 2013-06-11 Image acquisition device and focus method of image acquisition device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013122960A JP6010506B2 (en) 2013-06-11 2013-06-11 Image acquisition device and focus method of image acquisition device
PCT/JP2014/055988 WO2014174920A1 (en) 2013-04-26 2014-03-07 Image acquisition device and focusing method for image acquisition device

Publications (2)

Publication Number Publication Date
JP2014240888A true JP2014240888A (en) 2014-12-25
JP6010506B2 JP6010506B2 (en) 2016-10-19

Family

ID=52140176

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2013122960A Active JP6010506B2 (en) 2013-06-11 2013-06-11 Image acquisition device and focus method of image acquisition device

Country Status (1)

Country Link
JP (1) JP6010506B2 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005114293A1 (en) * 2004-05-24 2005-12-01 Hamamatsu Photonics K.K. Microscope
JP2008507719A (en) * 2004-07-23 2008-03-13 ジーイー・ヘルスケア・ナイアガラ・インク Confocal fluorescence microscopy and equipment
JP2012212155A (en) * 2004-07-23 2012-11-01 Ge Healthcare Niagara Inc Method and apparatus for fluorescent confocal microscopy
WO2012002893A1 (en) * 2010-06-30 2012-01-05 Ge Healthcare Bio-Sciences Corp A system for synchronization in a line scanning imaging microscope
JP2012073285A (en) * 2010-09-27 2012-04-12 Olympus Corp Imaging method and microscope device
JP2012108184A (en) * 2010-11-15 2012-06-07 Sony Corp Focal position information detector, microscope device and focal position information detection method
JP2012138068A (en) * 2010-12-08 2012-07-19 Canon Inc Image generation apparatus

Also Published As

Publication number Publication date
JP6010506B2 (en) 2016-10-19

Similar Documents

Publication Publication Date Title
TWI310457B (en) Method and apparatus for detection of wafer defects
US9134521B2 (en) Multidirectional selective plane illumination microscopy
EP2023611A2 (en) Optical inspection tool featuring multiple speed modes
US20110298914A1 (en) Microscope system
US7456377B2 (en) System and method for creating magnified images of a microscope slide
US20080266652A1 (en) Microscope with dual image sensors for rapid autofocusing
KR101975081B1 (en) Method and apparatus for high speed acquisition of moving images using pulsed illumination
JP5852527B2 (en) Three-dimensional shape measuring method and substrate inspection method
EP2813803B1 (en) Machine vision inspection system and method for performing high-speed focus height measurement operations
EP1210638A1 (en) Method/system measuring object features with 2d and 3d imaging coordinated
JP5934940B2 (en) Imaging apparatus, semiconductor integrated circuit, and imaging method
WO2006098443A1 (en) Microscopic image capturing device
JP4041854B2 (en) Imaging apparatus and photomask defect inspection apparatus
JP2012073285A (en) Imaging method and microscope device
JP2011081211A (en) Microscope system
JPH10290389A (en) Multi-focus image formation method and image formation device
EP1613062B1 (en) Electronic camera and automatic focusing method
JP2005275199A (en) Three-dimensional confocal microscopic system
DE102013226588A1 (en) Recording device
WO2000057231A1 (en) Scanning confocal microscope
US20100314533A1 (en) Scanning microscope and method of imaging a sample
KR20130111327A (en) Method for alignment of optical axis of laser and laser processing device using the same
JP2006145793A (en) Microscopic image pickup system
US20010024280A1 (en) Shape measuring apparatus
EP2698658A1 (en) Image pickup apparatus, semiconductor integrated circuit and image pickup method

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20160530

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20160913

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20160916

R150 Certificate of patent or registration of utility model

Ref document number: 6010506

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150