WO2021161432A1 - Microscope system, setting search method, and program - Google Patents


Info

Publication number
WO2021161432A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
spherical aberration
microscope system
setting
pixel
Prior art date
Application number
PCT/JP2020/005412
Other languages
French (fr)
Japanese (ja)
Inventor
Shingo Suzuki
Original Assignee
Olympus Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corporation
Priority to JP2021577774A priority Critical patent/JP7369801B2/en
Priority to PCT/JP2020/005412 priority patent/WO2021161432A1/en
Publication of WO2021161432A1 publication Critical patent/WO2021161432A1/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • the present disclosure relates to a microscope system, a setting search method for searching a setting for correcting spherical aberration, and a program.
  • the correction ring of the microscope system has been used as a means for correcting spherical aberration caused by the thickness of the cover glass.
  • a correction ring is also used for the purpose of correcting spherical aberration that changes according to the depth of an observation target surface. Such techniques are described, for example, in Patent Documents 1 to 3.
  • the amount of spherical aberration generated inside the sample depends on the refractive index distribution of the sample. Therefore, if the position of the correction ring in which the spherical aberration is corrected can be known, the refractive index of the sample can be calculated back.
  • Such a technique is described in, for example, Patent Document 2 and Patent Document 3.
  • Whether or not the spherical aberration is corrected can be determined based on an evaluation value that evaluates the contrast of an image (hereinafter referred to as a contrast value), as described in Patent Documents 1 to 3. This is because, in a state where the spherical aberration is corrected, an image with higher contrast is obtained than in a state where it is not.
  • In order to accurately determine whether or not spherical aberration is corrected, it is desirable to calculate the contrast value using an image with a high S/N ratio. This is because, in an image with a low S/N ratio, the influence of noise unrelated to the amount of spherical aberration becomes relatively large, making it difficult to accurately grasp the change in contrast due to spherical aberration.
  • An image having a high S / N ratio can be obtained, for example, by increasing the intensity of the illumination light and increasing the signal component of the image with respect to the noise component.
  • However, if the intensity of the illumination light becomes too strong, the proportion of saturated pixels contained in the image increases. As the proportion of saturated pixels increases, the contrast value may be calculated as lower than it actually is, and as a result, an inaccurate position may be recognized as the position of the correction ring at which spherical aberration is corrected. Further, even if the intensity of the illumination light is adjusted before the optimum position of the correction ring is determined, the brightness may increase further as the setting of the correction ring is changed during adjustment, resulting in an increase in the proportion of saturated pixels. That is, the same problem can occur even if the intensity of the illumination light is properly adjusted before the correction ring adjustment is started.
  • In the above, the correction ring has been described as an example, but the same technical problem may occur with any correction device that corrects spherical aberration, not only a correction ring.
  • one object of the present disclosure is to provide a technique for accurately specifying a setting in which spherical aberration is corrected even when saturated pixels occur.
  • The microscope system according to an embodiment has a spherical aberration correction device, a microscope device that acquires a plurality of image data by acquiring image data in each of a plurality of states in which the setting of the spherical aberration correction device differs, and an arithmetic device that specifies the setting of the spherical aberration correction device at which spherical aberration is corrected, based on a plurality of contrast values including the contrast values of the plurality of image data. The arithmetic device specifies, as a first position, the position of pixel data whose pixel value is saturated in the plurality of image data, and calculates the contrast value of each of the plurality of image data excluding the pixel data at the first position included in each of the plurality of image data.
  • The setting search method according to an embodiment is a setting search method for searching for a setting at which spherical aberration is corrected. The method acquires a plurality of image data by acquiring image data in each of a plurality of states in which the setting of the spherical aberration correction device differs, specifies as a first position the position of pixel data whose pixel value is saturated in the plurality of image data, calculates the contrast value of each of the plurality of image data excluding the pixel data at the first position, and specifies the setting of the spherical aberration correction device at which spherical aberration is corrected, based on a plurality of contrast values including the contrast value of each of the plurality of image data.
  • The program according to an embodiment causes a computer to specify, as a first position, the position of pixel data whose pixel value is saturated in a plurality of image data acquired with different settings of the spherical aberration correction device, to calculate the contrast value of each of the plurality of image data excluding the pixel data at the first position included in each of the plurality of image data, and to specify the setting of the spherical aberration correction device at which spherical aberration is corrected, based on a plurality of contrast values including the contrast value of each of the plurality of image data.
  • FIG. 1 is a diagram illustrating the configuration of the microscope system 1 according to the present embodiment.
  • FIG. 2 is a diagram illustrating the configuration of the arithmetic unit 20 shown in FIG.
  • FIG. 3 is a diagram illustrating the configuration of the microscope 100 shown in FIG.
  • The microscope system 1 shown in FIG. 1 includes a microscope 100, a microscope control device 10, an arithmetic unit 20, a display device 30, and a plurality of input devices (a keyboard 40, a correction ring operation device 50, and a focusing operation device 60) for inputting instructions to the arithmetic unit 20.
  • the microscope 100 and the microscope control device 10 will be collectively referred to as a microscope device.
  • The microscope control device 10 is a device that controls the operation of the microscope 100 according to instructions from the arithmetic unit 20, and generates control signals for controlling the various electric parts of the microscope 100. In addition, it generates image data based on the signal from the microscope 100.
  • The microscope control device 10 includes a light source control device 11 that controls the output of the light source, a zoom control device 12 that controls the zoom magnification, a focusing control device 13 that controls the position of the observation target surface in the optical axis direction (hereinafter simply referred to as the position of the observation target surface), and a correction ring control device 14 that controls the setting of the correction ring 111.
  • the correction ring 111 is an example of a spherical aberration correction device that corrects spherical aberration
  • the correction ring control device 14 is an example of a correction control device.
  • the setting of the correction ring 111 is, for example, the rotation angle of the correction ring 111 with respect to the reference position (hereinafter, simply referred to as the angle of the correction ring 111).
  • the arithmetic unit 20 is a computer that performs various arithmetic processes.
  • The arithmetic unit 20 includes a processor 21, a memory 22, an input I/F device 23, an output I/F device 24, and a portable recording medium drive device 25 into which a portable recording medium 26 is inserted, and these are connected to each other by a bus 27.
  • FIG. 2 is an example of the configuration of the arithmetic unit 20, and the arithmetic unit 20 is not limited to this configuration.
  • Processor 21 includes one or more processors.
  • the one or more processors may include, for example, a central processing unit (CPU: Central Processing Unit), a GPU (Graphics Processing Unit), a DSP (Digital Signal Processor), and the like. Further, ASIC (Application Specific Integrated Circuit), FPGA (Field-Programmable Gate Array) and the like may be included.
  • the processor 21 may, for example, execute a predetermined software program to perform arithmetic processing.
  • The memory 22 includes a non-transitory computer-readable medium that stores the software program executed by the processor 21.
  • the memory 22 may include, for example, one or more arbitrary semiconductor memories, and may further include one or more other storage devices.
  • the semiconductor memory includes, for example, a volatile memory such as a RAM (Random Access Memory), a non-volatile memory such as a ROM (Read Only Memory), a programmable ROM, and a flash memory.
  • The RAM may include, for example, a DRAM (Dynamic Random Access Memory), an SRAM (Static Random Access Memory), or the like.
  • Other storage devices may include, for example, a magnetic storage device such as a magnetic disk, or an optical storage device such as an optical disc.
  • the input I / F device 23 receives signals from the keyboard 40, the correction ring operation device 50, the focusing operation device 60, and the display device 30.
  • the input I / F device 23 also receives a signal from the microscope 100.
  • the output I / F device 24 outputs a signal to the display device 30 and the microscope control device 10.
  • the portable recording medium driving device 25 accommodates the portable recording medium 26.
  • the arithmetic unit 20 operates as various means by the processor 21 reading the program stored in the memory 22 or the portable recording medium 26 into the memory 22 and executing the program.
  • The arithmetic unit 20 operates as, for example, a means for calculating the contrast value of image data (contrast calculation means), a means for calculating the setting of the correction ring 111 at which spherical aberration is corrected (target value calculation means), a means for calculating the refractive index of the sample S (refractive index calculation means), and a means for controlling the display device 30 (display control means).
  • the display device 30 is, for example, a liquid crystal display device, an organic EL display device, a CRT display device, or the like.
  • the display device 30 may include a touch panel sensor, and in that case, also functions as an input device.
  • the correction ring operation device 50 is an input means for instructing the setting of the correction ring 111.
  • the correction ring control device 14 changes the setting of the correction ring 111 to the instructed setting.
  • the focusing operation device 60 is an input means for instructing a change in the position (that is, the observation depth) of the observation target surface.
  • the focusing control device 13 moves the focusing device 109 in the optical axis direction to change the position of the observation target surface.
  • the microscope 100 is, for example, a two-photon excitation microscope.
  • Sample S is, for example, a biological sample such as mouse brain, but is not limited to the biological sample.
  • the microscope 100 includes a laser 101, a scanning unit 102, a pupil projection optical system 103, a mirror 104, a dichroic mirror 105, and an objective lens 110 on the illumination optical path.
  • the laser 101 is, for example, an ultrashort pulse laser, which oscillates laser light in the near infrared region.
  • the output of the laser 101 is controlled by the light source control device 11. That is, the light source control device 11 is a laser control device that controls the output of the laser light irradiating the sample.
  • the scanning unit 102 is a scanning means for scanning the sample S two-dimensionally with a laser beam, and includes, for example, a galvano scanner and a resonant scanner.
  • the zoom magnification changes as the scanning range of the scanning unit 102 changes.
  • the scanning range of the scanning unit 102 is controlled by the zoom control device 12.
  • the pupil projection optical system 103 is an optical system that projects the image of the scanning unit 102 onto the pupil position of the objective lens 110.
  • the dichroic mirror 105 is an optical separation means for separating the excitation light (laser light) and the detection light (fluorescence) from the sample S, and separates the laser light and the fluorescence according to the wavelength.
  • the objective lens 110 is a dry or immersion objective lens provided with a correction ring 111, and is attached to the focusing device 109.
  • The focusing device 109 is a means for moving the objective lens 110 in the optical axis direction of the objective lens 110, and the movement of the focusing device 109 (that is, the movement of the objective lens 110) is controlled by the focusing control device 13.
  • the correction ring 111 is a correction device that corrects spherical aberration by moving a part of the lenses constituting the objective lens 110 in the optical axis direction according to the setting.
  • the setting of the correction ring 111 is changed by the correction ring control device 14 (correction device control device).
  • the setting of the correction ring 111 can also be changed manually by directly operating the correction ring 111.
  • the microscope 100 further includes a pupil projection optical system 106 and a photodetector 107 on the detection optical path (reflected optical path of the dichroic mirror 105).
  • the signal output from the photodetector 107 is output to the A / D converter 108.
  • the pupil projection optical system 106 is an optical system that projects the pupil image of the objective lens 110 onto the photodetector 107.
  • the photodetector 107 is, for example, a photomultiplier tube (PMT), which outputs an analog signal according to the amount of incident fluorescence.
  • the A / D converter 108 converts the analog signal from the photodetector 107 into a digital signal (luminance signal) and outputs it to the microscope control device 10.
  • the sensitivity of the photodetector 107 is controlled by a detection sensitivity control device (not shown).
  • The detection sensitivity control is, for example, any one of adjustment of the voltage applied to the photodetector 107, adjustment of the amplification factor of the analog signal output from the photodetector 107, and adjustment of the amplification factor at the digital signal stage, or a combination of them.
  • The microscope 100 uses the scanning unit 102 to scan the sample S with laser light in a direction orthogonal to the optical axis of the objective lens 110, and detects the fluorescence from each position of the sample S with the photodetector 107.
  • the microscope control device 10 generates image data based on the analog signal sampled by the A / D converter 108 in accordance with the scanning timing of the scanning unit 102, and outputs the image data to the arithmetic unit 20. That is, the image data is acquired by a microscope device including the microscope 100 and the microscope control device 10.
  • FIG. 4 is a flowchart of the correction ring setting process according to the first embodiment.
  • FIG. 5 is a flowchart of the saturated pixel position specifying process.
  • FIG. 6 is a flowchart of the contrast calculation process.
  • The correction ring setting process shown in FIG. 4 is an example of a setting search method for searching for a setting at which spherical aberration is corrected, and is started, for example, by the user operating the focusing operation device 60 to specify the position of the observation target surface.
  • the microscope system 1 accepts the designation of the range (hereinafter referred to as the region of interest) to be the target of the contrast evaluation in the sample S (step S1).
  • The region of interest is designated, for example, by the observer using an input device such as the keyboard 40 to specify a portion to be observed more closely (for example, a portion with a characteristic shape in the sample S) while viewing the live image of the sample S displayed on the display device 30.
  • step S1 may be omitted.
  • the microscope system 1 changes the setting of the correction ring 111 to the initial setting (step S2).
  • the arithmetic unit 20 controls the correction ring control device 14 to set the angle of the correction ring 111 to, for example, the angle of one end of the movable range.
  • Next, the microscope system 1 acquires image data (step S3) and determines whether image data has been acquired at all predetermined settings of the correction ring 111 (step S4).
  • If not (step S4 NO), the microscope system 1 changes the setting of the correction ring 111 (step S5), returns to step S3, and acquires image data again.
  • That is, the microscope system 1 repeats acquiring image data (step S3) and changing the setting of the correction ring 111 (step S5) until image data has been acquired at all predetermined settings of the correction ring 111 (step S4 YES).
  • During this loop, the arithmetic unit 20 controls the correction ring control device 14 to move the angle of the correction ring 111 in steps of a predetermined angle from the initial position at one end of the movable range toward the other end.
  • The microscope control device 10 acquires image data at each angle of the correction ring 111 and outputs it to the arithmetic unit 20, so that the arithmetic unit 20 acquires a plurality of image data for the same observation target surface.
  • the microscope device acquires a plurality of image data by acquiring image data in each of a plurality of states in which the setting of the correction ring 111 is different, and outputs the plurality of image data to the arithmetic unit 20.
  • the arithmetic unit 20 stores a plurality of image data acquired from the microscope device in the memory 22.
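The acquisition loop of steps S2 to S5 described above can be sketched as follows. Here `set_angle` and `acquire_image` are hypothetical callbacks standing in for the correction ring control device 14 and the microscope control device 10, which the patent does not specify as a software interface.

```python
def sweep_correction_ring(angles, set_angle, acquire_image):
    """Acquire one image at each predetermined correction-ring
    setting (steps S2 to S5) and return them in order."""
    images = []
    for angle in angles:
        set_angle(angle)                 # step S2 / step S5
        images.append(acquire_image())   # step S3
    return images

# Example with stand-in callbacks: a 0-180 degree sweep in 10-degree steps.
angles = list(range(0, 181, 10))
images = sweep_correction_ring(angles, set_angle=lambda a: None,
                               acquire_image=lambda: [[0] * 6])
print(len(images))  # 19
```

Each returned image corresponds to one correction-ring angle, so the list can be stored in the memory 22 and indexed by setting in the later steps.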
  • the microscope system 1 identifies the positions of saturated pixels included in the image data (step S6).
  • the arithmetic unit 20 starts, for example, the saturated pixel position specifying process shown in FIG. 5, and specifies the position of the pixel data in which the pixel values included in the plurality of image data are saturated as the first position.
  • The arithmetic unit 20 reads one image data from the plurality of image data stored in the memory 22 (step S11), and determines whether or not the read image data includes pixel data with a saturated pixel value (step S12). If pixel data with a saturated pixel value is included, the position of that pixel (hereinafter referred to as a saturated pixel) is stored as the first position (step S13).
  • the arithmetic unit 20 performs the processes of steps S11 to S13 for all of the plurality of image data stored in the memory 22 (step S14YES), and ends the saturated pixel position specifying process shown in FIG.
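A minimal sketch of the saturated pixel position specifying process (steps S11 to S14), assuming the image data are 8-bit NumPy arrays so that a pixel value of 255 indicates saturation:

```python
import numpy as np

def find_saturated_positions(images, sat_value=255):
    """Return a boolean mask of the 'first position': pixels whose
    value is saturated in at least one of the images (steps S11-S13)."""
    mask = np.zeros(images[0].shape, dtype=bool)
    for img in images:              # step S14 loops over all image data
        mask |= (img >= sat_value)
    return mask

imgs = [np.array([[10, 255, 30]], dtype=np.uint8),
        np.array([[255, 40, 50]], dtype=np.uint8)]
print(find_saturated_positions(imgs))  # [[ True  True False]]
```

The mask is the union over all images, which matches the later description that a pixel saturated in any one image is excluded from every image's contrast calculation.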
  • the microscope system 1 calculates the contrast value of each of the plurality of image data (step S7).
  • the arithmetic unit 20 starts, for example, the contrast calculation process shown in FIG. 6 and calculates a plurality of contrast values from the plurality of image data.
  • The arithmetic unit 20 reads the first position, which is the position of the saturated pixels, from the memory 22 (step S21). Further, the arithmetic unit 20 reads one image data from the plurality of image data stored in the memory 22 (step S22), and calculates the contrast value of the read image data using the image data and the first position (step S23). The details of the method of calculating the contrast value will be described later. The arithmetic unit 20 performs the processes of steps S22 and S23 on all of the plurality of image data stored in the memory 22 (step S24 YES), and ends the contrast calculation process shown in FIG. 6.
  • Next, the microscope system 1 specifies the setting at which spherical aberration is corrected as the target setting (step S8).
  • The arithmetic unit 20 specifies the setting of the correction ring 111 at which spherical aberration is corrected, that is, the angle of the correction ring 111, based on the plurality of contrast values calculated in step S7. More specifically, the arithmetic unit 20 may, for example, specify the maximum contrast value among the plurality of contrast values and specify as the target setting the angle of the correction ring 111 at which the image data with the maximum contrast value was acquired. Alternatively, the angle of the correction ring 111 that maximizes the contrast value may be estimated from the combination of the plurality of contrast values and the corresponding angles of the correction ring 111, and the estimated angle may be specified as the target setting.
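The two selection strategies of step S8 (take the angle with the maximum contrast directly, or estimate the peak from the contrast-versus-angle curve) might be sketched as below. The parabolic refinement is an illustrative assumption, since the patent does not specify how the maximizing angle is estimated.

```python
def best_correction_angle(angles, contrasts):
    """Step S8: pick the correction-ring angle with the largest contrast
    value, then refine it with a parabolic fit through the peak and its
    neighbours (assumes equally spaced angles)."""
    i = max(range(len(contrasts)), key=lambda k: contrasts[k])
    if 0 < i < len(contrasts) - 1:
        c0, c1, c2 = contrasts[i - 1], contrasts[i], contrasts[i + 1]
        denom = c0 - 2 * c1 + c2
        if denom != 0:
            step = angles[i] - angles[i - 1]
            # Vertex of the parabola through the three sample points.
            return angles[i] + 0.5 * step * (c0 - c2) / denom
    return angles[i]

print(best_correction_angle([0, 10, 20], [1.0, 3.0, 1.0]))  # 10.0
```

With a symmetric contrast curve the refined angle coincides with the sampled maximum; with an asymmetric curve it lands between samples, which is the point of the estimation variant.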
  • the microscope system 1 changes the setting of the correction ring 111 to the target setting (step S9), and ends the correction ring setting process.
  • the correction ring control device 14 changes the setting of the correction ring 111 to the setting specified in step S8 according to the instruction of the arithmetic unit 20. As a result, spherical aberration is corrected, so that the sample S can be observed with good image quality.
  • FIG. 7 is a diagram for explaining a conventional method of calculating a contrast value.
  • FIG. 8 is a diagram for explaining an example of the method of calculating the contrast value in the present embodiment.
  • the contrast calculation method performed by the microscope system 1 will be described with reference to FIGS. 7 and 8, focusing on points different from the conventional contrast calculation method.
  • As one of the evaluation formulas for evaluating the contrast of an image, an evaluation formula that evaluates the contrast based on the difference in pixel values between pixels is known. Conventionally, the contrast value has been calculated with the following formula, which integrates, over the entire image data, the square of the difference between the pixel values of two pixels shifted by n pixels in the x direction:

    F_Brenner = Σ_{y=1}^{H} Σ_{x=1}^{W−n} { f(x+n, y) − f(x, y) }²

    This formula is an evaluation formula proposed by J. F. Brenner et al. and is called the Brenner gradient.
  • FBrenner is a contrast value
  • x is a variable that specifies a column of pixels that make up an image
  • y is a variable that specifies a row of pixels that make up an image.
  • W is the number of pixels in the x direction (that is, the number of columns) of the pixels that make up the image
  • H is the number of pixels in the y direction (that is, the number of rows) of the pixels that make up the image.
  • f is a pixel value of pixel data corresponding to the specified pixel.
  • n is a shift amount, and is an integer (for example, 2 or the like) indicating an interval between pixels for which the difference between pixel values is calculated.
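Under the definitions above, the Brenner gradient can be computed, for example, as follows; this is a sketch assuming an 8-bit grayscale image stored as a NumPy array.

```python
import numpy as np

def brenner_gradient(img: np.ndarray, n: int = 2) -> float:
    """Brenner gradient: integrate the squared difference between the
    pixel values of two pixels shifted by n columns over the image."""
    img = img.astype(np.float64)
    diff = img[:, n:] - img[:, :-n]   # f(x+n, y) - f(x, y)
    return float(np.sum(diff ** 2))

# A flat image has zero contrast; a sharp edge yields a large value.
flat = np.full((1, 6), 100, dtype=np.uint8)
edge = np.array([[0, 0, 0, 255, 255, 255]], dtype=np.uint8)
print(brenner_gradient(flat))  # 0.0
print(brenner_gradient(edge))  # 130050.0
```

The cast to `float64` avoids the wrap-around that would occur when subtracting unsigned 8-bit values directly.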
  • An evaluation formula that evaluates the contrast of an image based on the difference in pixel values between pixels, such as the Brenner gradient, can capture the contrast changes that occur in an image in more detail than an evaluation formula that evaluates the contrast from the maximum and minimum pixel values contained in the image data. It is therefore suitable for evaluating the corrected state of spherical aberration.
  • However, when saturated pixels occur, the conventional contrast calculation method using the Brenner gradient may identify the wrong target setting. This is because, when the number of saturated pixels increases, the difference in pixel values between saturated pixels becomes 0, for example, and the difference in pixel values between pairs of pixels including a saturated pixel is estimated to be smaller than the actual value. As a result, the calculated contrast value falls below the actual contrast value.
  • In FIG. 7, the pixel data included in the image data acquired when the angle θ of the correction ring 111 is 0°, 10°, and 20° is schematically drawn.
  • an image composed of pixels arranged in 1 row and 6 columns is drawn for simplification of the description, but the number of rows and columns of the pixels constituting the image is not limited to this example.
  • Thus, when saturated pixels occur, the conventional calculation method may yield an incorrect target setting.
  • The present embodiment therefore proposes a technique that solves the above problem by focusing on the fact that, even if images are compared with some of the pixel data included in the image data excluded, the brightness relationship between the images is not significantly affected.
  • Specifically, the arithmetic unit 20 excludes the pixel data at the first position included in the image data read in step S22 from the calculation of the contrast value of the read image data. That is, the contrast value of the image data is calculated excluding the pixel data at the first position included in the image data. More specifically, as shown in FIG. 8, when calculating the contrast value of each of the plurality of image data, the position of pixel data whose pixel value is saturated in at least one of the plurality of image data is specified (the first position), and the pixel data corresponding to the first position is excluded from the calculation of the contrast value of every image data, regardless of whether the pixel value of that pixel data is itself saturated.
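A sketch of the excluded calculation: every difference term f(x+n, y) − f(x, y) in which either pixel lies at a first position is dropped from the integration, whether or not that pixel is saturated in the particular image being evaluated. The NumPy representation and function names are assumptions for illustration.

```python
import numpy as np

def brenner_gradient_excluding(img, exclude, n=2):
    """Brenner gradient that skips every difference term involving a
    pixel at an excluded (first) position."""
    img = img.astype(np.float64)
    diff = img[:, n:] - img[:, :-n]              # f(x+n, y) - f(x, y)
    valid = ~(exclude[:, n:] | exclude[:, :-n])  # neither pixel excluded
    return float(np.sum((diff ** 2)[valid]))

# 1-row, 6-column example as in FIG. 8, excluding the 3rd and 4th pixels.
img = np.array([[0, 0, 0, 255, 255, 255]], dtype=np.uint8)
mask = np.array([[False, False, True, True, False, False]])
print(brenner_gradient_excluding(img, np.zeros_like(mask)))  # 130050.0
print(brenner_gradient_excluding(img, mask))                 # 0.0
```

Because the same mask is applied to every image in the series, the excluded terms cancel out of the comparison and the ordering of contrast values across correction-ring settings is preserved.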
  • the contrast calculation method performed by the microscope system 1 is the same as the conventional contrast calculation method in that an evaluation formula for evaluating the contrast of an image based on the difference in pixel values between pixels is used.
  • FIG. 8 schematically depicts the pixel data included in the image data acquired when the angle θ of the correction ring 111 is 0°, 10°, and 20°.
  • In this example, the image data is 8-bit image data, and difference terms involving at least one of the third and fourth pixels from the left are excluded from the integration for calculating the contrast value.
  • According to the microscope system 1, spherical aberration can be appropriately corrected even when saturated pixels occur, so brightness adjustments made before starting the target-setting search shown in FIG. 4, such as adjusting the output of the light source and the sensitivity of the detector, become easy. That is, the user can adjust the brightness to obtain a good S/N ratio without being excessively concerned about the generation of saturated pixels.
  • the setting for correcting the spherical aberration can be accurately specified.
  • In the embodiment described above, the position of the saturated pixel is specified as the first position and the contrast value of each image data is calculated excluding the pixel data at the first position; however, other pixel data may also be excluded from the calculation.
  • For example, the arithmetic unit 20 may specify, as a second position, the position of pixel data whose pixel value is less than a threshold value in the plurality of image data acquired for the same observation target surface, and calculate each of the plurality of contrast values excluding the pixel data at both the first position and the second position included in each of the plurality of image data. This point is the same in the subsequent embodiments. It is desirable that this threshold value be determined based on, for example, the pixel value corresponding to the signal output from the photodetector in a state where no light is incident (hereinafter referred to as the background luminance).
  • Pixel data whose pixel value is less than the threshold value determined based on the background luminance corresponds to a part of the image where nothing is displayed. Therefore, by excluding such pixel data from the calculation of the contrast value, the contrast value can be calculated while suppressing the influence of background noise. Accordingly, by specifying both the first position and the second position and excluding the pixel data at those positions from the calculation of the contrast value, the setting at which spherical aberration is corrected can be specified even more accurately.
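A sketch combining the two exclusions: the first position (saturated in at least one image) and the second position (below a background-based threshold). The value of `background_threshold` here is an illustrative assumption; in practice it would be derived from the background luminance.

```python
import numpy as np

def exclusion_mask(images, sat_value=255, background_threshold=5):
    """Union of the first position (saturated in any image) and the
    second position (below the background threshold in any image)."""
    first = np.zeros(images[0].shape, dtype=bool)
    second = np.zeros(images[0].shape, dtype=bool)
    for img in images:
        first |= (img >= sat_value)
        second |= (img < background_threshold)
    return first | second

imgs = [np.array([[2, 100, 255]], dtype=np.uint8),
        np.array([[3, 120, 200]], dtype=np.uint8)]
print(exclusion_mask(imgs))  # [[ True False  True]]
```

The resulting mask can be passed to the masked Brenner calculation so that both saturated pixels and background-only pixels stay out of every image's contrast value.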
  • FIG. 10 is a flowchart of the correction ring setting process according to the present embodiment.
  • The microscope system according to the present embodiment (hereinafter simply referred to as the microscope system) differs from the microscope system 1 in that the correction ring setting process shown in FIG. 10 is performed instead of the correction ring setting process shown in FIG. 4. Other points are the same as those of the microscope system 1.
  • the correction ring setting process shown in FIG. 10 is different from the correction ring setting process shown in FIG. 4 in that the amount of light is automatically adjusted during the correction ring setting process.
  • The microscope system first acquires a plurality of image data by acquiring image data in each of a plurality of states in which the setting of the correction ring 111 differs (steps S31 to S36), and specifies the saturated pixel positions included in the plurality of image data as the first position.
  • the microscope system determines whether or not the ratio of saturated pixels is the threshold value TH1 or more (step S37).
  • the threshold value TH1 is, for example, 0.5%.
  • If it is determined in step S37 that the ratio of saturated pixels is equal to or greater than the threshold value TH1, the microscope system suppresses the amount of light emitted from the laser 101 (step S38) and resets the saturated pixel positions specified in step S36 (step S39). After that, the microscope system performs the processes from step S32 to step S37 again. This is because saturated pixels have occurred at or above the threshold value TH1, and the intensity of the illumination light is considered to be too strong.
  • In step S38, the arithmetic unit 20 controls the light source control device 11, and the light source control device 11, following the instruction from the arithmetic unit 20, controls the output of the laser 101 according to the ratio of pixel data whose pixel values are saturated (saturated pixels) among the plurality of image data. More specifically, the light source control device 11 reduces the output of the laser 101 when the ratio of saturated pixels is equal to or greater than the threshold value TH1. The amount by which the output of the laser 101 is reduced is not particularly limited; it may be, for example, a predetermined constant amount. Alternatively, an output of the laser 101 at which the ratio of saturated pixels would not exceed the threshold value TH1 may be estimated, and the output of the laser 101 may be reduced to the estimated output.
  • The reset process in step S39 is performed to prevent the saturated pixel positions specified before the light amount suppression from being confused with the saturated pixel positions under the current light amount setting.
  • When it is determined in step S37 that the ratio of saturated pixels is less than the threshold value TH1, the microscope system calculates the contrast value of each of the plurality of image data (step S40), specifies the setting at which the spherical aberration is corrected as the target setting (step S41), changes the setting of the correction ring 111 to the target setting (step S42), and ends the correction ring setting process.
  • The processing of steps S40 to S42 is the same as the processing of steps S7 to S9 of FIG. 4.
  • With the microscope system according to the present embodiment, the same effects as those obtained with the microscope system 1 can be obtained. In the microscope system 1, if the proportion of pixels whose brightness is saturated is too high, the accuracy of the adjustment may decrease because few pixels can be used for the contrast calculation. In the present embodiment, when the output of the laser 101 is too high, the amount of light is automatically adjusted to an appropriate level. Since this reduces the proportion of saturated pixels and increases the number of pixels that can be used for the contrast calculation, the adjustment accuracy can be improved further compared with the microscope system 1.
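The flow of steps S31 to S42 can be sketched as follows. The helper callables for image acquisition, laser output reduction, and contrast evaluation are hypothetical stand-ins (none of these names appear in the original text), and 8-bit image data is assumed.

```python
import numpy as np

TH1 = 0.005          # example saturation-ratio threshold (0.5%, per the text)
SATURATION = 255     # full-scale pixel value, assuming 8-bit image data

def search_target_setting(settings, acquire_image, reduce_laser_output,
                          contrast_value):
    """Sketch of steps S31-S42 of FIG. 10: retry with a lower light amount
    while the saturated-pixel ratio is at or above TH1, then pick the
    setting whose image has the highest contrast."""
    while True:
        # S31-S35: one image per correction ring setting.
        images = [acquire_image(s) for s in settings]
        stack = np.stack(images)
        saturated = stack == SATURATION               # S36: saturated positions
        if saturated.mean() < TH1:                    # S37: ratio check
            break
        reduce_laser_output()                         # S38: suppress light
        # S39: the saturated positions found at the old light level are
        # discarded simply by re-entering the loop and recomputing them.
    # First position: pixels saturated in any of the acquired images.
    first_position = saturated.any(axis=0)
    contrasts = [contrast_value(img, first_position) for img in images]  # S40
    return settings[int(np.argmax(contrasts))]        # S41: target setting
```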
  • FIG. 11 is a flowchart of a modified example of the correction ring setting process shown in FIG.
  • FIG. 10 shows an example in which the necessity of light amount adjustment is determined based on the ratio of saturated pixels after images have been acquired at all settings of the correction ring 111, but this determination may be made at an arbitrary timing. For example, as shown in FIG. 11, the necessity of adjusting the amount of light may be determined each time image data is acquired.
  • The correction ring setting process shown in FIG. 11 differs from the correction ring setting process shown in FIG. 10 in that, each time image data is acquired, the saturated pixel positions are specified and it is determined whether or not the ratio of saturated pixels is equal to or greater than the threshold value (steps S54 and S55). The processing content of each step is the same as that of the corresponding step of the correction ring setting process shown in FIG. 10, so a detailed description of each step is omitted.
  • By specifying the saturated pixel positions and determining whether or not the ratio of saturated pixels is equal to or greater than the threshold value each time image data is acquired, a state in which the amount of light needs to be adjusted can be detected early. Therefore, the number of image acquisitions performed with an inappropriate light amount setting can be suppressed, and the adverse effect of image acquisition on the sample can be suppressed. Further, since the light amount setting is optimized in a shorter time, the total processing time required for the setting process can be shortened.
  • In the above description, the output of the laser light source is controlled (suppressed) in order to reduce the proportion of saturated pixels, but the sensitivity of the photodetector may instead be controlled (decreased) by a detection sensitivity control device (not shown). That is, "photodetector sensitivity suppression" may be performed in place of, or in addition to, "light amount suppression": when the proportion of saturated pixels is equal to or greater than the threshold value, the microscope system may control both the output of the laser light source and the sensitivity of the photodetector, or only one of them.
  • FIG. 12 is a flowchart of the relationship calculation process according to the present embodiment.
  • FIG. 13 is a diagram for explaining an example of a method of calculating the relationship between the setting of the correction ring and the observation depth.
  • FIG. 14 is a flowchart of the Z-Series shooting process according to the present embodiment.
  • In the present embodiment, a process of calculating the relationship between the setting of the correction ring 111 and the observation depth in the sample S, and a process of performing Z-Series imaging using the calculated relationship, will be described. Z-Series imaging is an imaging method in which the acquisition of a two-dimensional image is repeated while the observation target surface is moved by a predetermined distance in the depth direction of the sample S (that is, the optical axis direction of the objective lens 110), and is used to obtain three-dimensional information about the sample. Z-Series imaging is also referred to as Z-stack imaging.
  • When the relationship calculation process shown in FIG. 12 is started according to an instruction from the user, the microscope system 1 first focuses on the interface between the sample S and the stage (step S71).
  • Specifically, the arithmetic unit 20 controls the focusing control device 13, and the focusing control device 13 focuses on the interface between the sample S and the stage. This step can be performed by any known method. The observation depth (the position of the observation target surface) at this time is set to D0 and is used as the reference observation depth.
  • Next, the microscope system 1 accepts the designation of the observation depth range (step S72). Specifically, the arithmetic unit 20 accepts the designation of the observation depth range when the user inputs, using an input device such as the keyboard 40, the depth range to be observed. It is sufficient that at least both ends of the observation depth range are specified; an arbitrary depth within the range may be specified in addition to both ends. That is, in step S72, at least two observation depths are specified.
  • Next, the microscope system 1 moves the observation target surface to the first observation depth (step S73). Specifically, the arithmetic unit 20 controls the focusing control device 13, and the focusing control device 13 moves the observation target surface to, for example, the observation depth D1, which is one end of the observation depth range specified in step S72.
  • Next, in steps S74 to S81, the microscope system 1 specifies the setting of the correction ring 111 at which the spherical aberration on the observation target surface at the observation depth D1 is corrected. These processes are the same as the processes of steps S1 to S8 of FIG. 4.
  • After that, the microscope system 1 determines whether or not the setting has been specified at all the observation depths specified in step S72 (step S82). When there is an observation depth for which the setting has not been specified (step S82 NO), the microscope system 1 moves the observation target surface to the next observation depth (step S83) and repeats the processes of steps S74 to S82 for the observation target surface after the movement. Specifically, the arithmetic unit 20 controls the focusing control device 13, and the focusing control device 13 moves the observation target surface to, for example, the observation depth D2, which is the other end of the observation depth range specified in step S72.
  • When the setting has been specified at all the observation depths (step S82 YES), the microscope system 1 calculates the relationship between the setting at which the spherical aberration is corrected and the observation depth (step S84).
  • Specifically, the arithmetic unit 20 calculates the relationship between the setting at which the spherical aberration is corrected and the observation depth, using the angle θ1, which is the target setting at the observation depth D1 specified in step S82, and the angle θ2, which is the target setting at the observation depth D2 specified in step S82. More specifically, as shown in FIG. 13, the relationship is calculated as a broken line obtained by linearly interpolating between the point P1, specified by the observation depth D1 and the angle θ1, and the point P2, specified by the observation depth D2 and the angle θ2. Here, an example of calculating the relationship by linear interpolation of two points is shown, but the relationship may also be calculated from information on three or more points; in that case, the relationship may be calculated using, for example, function approximation such as the least squares method.
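A minimal sketch of this calculation, assuming the depths and target angles are given as plain numbers: two points give the straight line through P1 and P2 as in FIG. 13, and three or more points fall back to a least-squares line.

```python
import numpy as np

def setting_for_depth(depth, known_depths, known_angles):
    """Target correction-ring angle at `depth` from (depth, angle) pairs
    such as (D1, θ1) and (D2, θ2). Two points give the straight line
    through P1 and P2 as in FIG. 13; three or more points fall back to a
    least-squares line."""
    if len(known_depths) == 2:
        # Linear interpolation (and extrapolation) through P1 and P2.
        (d1, d2), (a1, a2) = known_depths, known_angles
        return float(a1 + (a2 - a1) * (depth - d1) / (d2 - d1))
    # Least-squares fit of the angle as a linear function of depth.
    slope, intercept = np.polyfit(known_depths, known_angles, 1)
    return float(slope * depth + intercept)
```

For example, with (D1, θ1) = (10, 2.0) and (D2, θ2) = (20, 4.0), the target angle at depth 15 is 3.0.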
  • Finally, the microscope system 1 stores the calculated relationship (step S85) and ends the relationship calculation process shown in FIG. 12. Specifically, the arithmetic unit 20 stores the calculated relationship in the memory 22.
  • When the Z-Series imaging process shown in FIG. 14 is started, the microscope system 1 first accepts the designation of the observation depth range to be captured in the Z-Series imaging (step S91). Specifically, the user inputs the observation depth range using an input device such as the keyboard 40, and the arithmetic unit 20 accepts the input as the designation of the observation depth range.
  • The arithmetic unit 20 further determines a plurality of observation depths at which image data is to be acquired within the observation depth range. The plurality of observation depths may be distributed, for example, at predetermined intervals, and the predetermined interval may be specified by the user together with the observation depth range in step S91.
  • Next, the microscope system 1 moves the observation target surface to the first observation depth (step S92). Specifically, the arithmetic unit 20 controls the focusing control device 13, and the focusing control device 13 moves the observation target surface to, for example, one of the observation depths determined in step S91.
  • Next, the microscope system 1 changes the setting of the correction ring 111 (step S93). Specifically, the arithmetic unit 20 first reads from the memory 22 the relationship calculated in advance by the relationship calculation process shown in FIG. 12, and calculates the target setting (the angle of the correction ring 111) corresponding to the observation depth of the observation target surface. Then, the correction ring control device 14 changes the setting of the correction ring 111 to the calculated target setting according to the instruction from the arithmetic unit 20. As a result, the spherical aberration is corrected.
  • After that, the microscope system 1 acquires image data (step S94). As a result, an image in which the spherical aberration is corrected can be obtained. Further, the microscope system 1 determines whether or not image data has been acquired at all the observation depths determined in step S91 (step S95), and if not, moves the observation target surface to the next observation depth (step S96). When the acquisition of image data has been completed at all the observation depths (step S95 YES), the microscope system 1 ends the Z-Series imaging process shown in FIG. 14.
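Steps S92 to S96 can be sketched as a simple loop over the predetermined observation depths. The stage, correction ring, and camera interfaces below are hypothetical callables standing in for the focusing control device 13, the correction ring control device 14, and the image acquisition.

```python
def z_series(depths, setting_for_depth, move_to_depth, set_correction_ring,
             acquire_image):
    """Sketch of steps S92-S96 of FIG. 14: at each observation depth, look
    up the target setting from the precomputed relationship, apply it, and
    acquire an image with the spherical aberration corrected."""
    images = []
    for depth in depths:
        move_to_depth(depth)                           # S92 / S96
        set_correction_ring(setting_for_depth(depth))  # S93
        images.append(acquire_image())                 # S94
    return images                                      # S95: done at all depths
```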
  • In the microscope system according to the present embodiment, since the relationship between the observation depth and the target setting is calculated in advance, it is not necessary to acquire a plurality of image data in order to search for the target setting each time the observation depth is changed. Moreover, the relationship between the observation depth and the target setting is calculated using interpolation or function approximation from information at a relatively small number of observation depths. Therefore, the number of preliminary exposures for searching for the target setting can be reduced significantly, and damage to the sample S can be suppressed. Further, the Z-Series imaging process can be finished in a relatively short time while the spherical aberration is corrected satisfactorily.
  • In the third embodiment, an example was shown in which the relationship between the target setting and the observation depth is calculated in order to obtain a high-quality image in which the spherical aberration is corrected on the observation target surface at an arbitrary depth. However, the relationship between the target setting and the observation depth may also be calculated for purposes other than image data acquisition. The present embodiment differs from the third embodiment in that the relationship between the target setting and the observation depth is used to display the refractive index of the sample S.
  • FIG. 15 is a flowchart of the refractive index display process according to the present embodiment.
  • FIGS. 16 and 17 are diagrams for explaining an example of a method of displaying the refractive index.
  • Hereinafter, a process of displaying the refractive index using the relationship between the target setting and the observation depth will be described with reference to FIGS. 15 to 17.
  • When the refractive index display process shown in FIG. 15 is started, the microscope system 1 first displays the relationship calculated by the relationship calculation process shown in FIG. 12 (step S101). Specifically, the arithmetic unit 20 causes the display device 30 to display, for example, the graph shown in FIG. 16.
  • The graph shown in FIG. 16 is an example of the case where, in the relationship calculation process shown in FIG. 12, a plurality of image data are acquired at each of the observation depths D1, D2, and D3 and the target setting at each depth is calculated.
  • Next, the microscope system 1 accepts the designation of the observation depth at which the refractive index should be displayed (step S102). Specifically, on the graph displayed on the display device 30, the user selects, using the cursor C or the like, the observation depth corresponding to the portion of the sample S whose refractive index is desired, and the arithmetic unit 20 detects the specified observation depth.
  • After that, the microscope system 1 calculates the refractive index of the sample S at the specified observation depth based on the information displayed on the display device 30, that is, the settings of the correction ring 111 for correcting the spherical aberration specified for the respective observation depths. Specifically, the microscope system 1 first calculates the rate of change of the setting in the depth direction (step S103), and then calculates the refractive index of the sample S at the observation depth specified in step S102 (step S104).
  • In step S104, the arithmetic unit 20 calculates the rate of change in the amount of spherical aberration from the rate of change of the setting calculated in step S103, and further calculates the refractive index of the sample S at the specified observation depth based on the rate of change in the amount of spherical aberration and the refractive index of the medium (e.g., air or immersion liquid).
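The text does not give the actual conversion formulas, so the following only illustrates the data flow of steps S103 and S104: the slope of the target setting with respect to depth (step S103) is converted to a spherical aberration change rate and then to a refractive index offset from the medium via two hypothetical calibration coefficients `k` and `c`, which would in practice depend on the objective lens.

```python
def estimate_refractive_index(d1, d2, angle1, angle2, n_medium, k, c):
    """Illustrative only: `k` and `c` are hypothetical calibration
    coefficients of the objective; the actual conversion formulas are
    not given in the text."""
    # S103: rate of change of the correction-ring setting in the depth direction.
    setting_rate = (angle2 - angle1) / (d2 - d1)
    # S104: rate of change of the spherical aberration amount ...
    aberration_rate = k * setting_rate
    # ... combined with the refractive index of the medium to give the
    # refractive index of the sample at the specified depth.
    return n_medium + c * aberration_rate
```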
  • Finally, the microscope system 1 displays the calculated refractive index (step S105). Specifically, the display device 30 displays the refractive index calculated by the arithmetic unit 20, for example, as shown in FIG. 17. Note that FIG. 17 shows how the refractive index of the sample S at the observation depth specified by the user is superimposed on the graph shown in FIG. 16.
  • According to the present embodiment, the user can easily know the refractive index of the sample S at an arbitrary plane, regardless of the structure of the sample S.
  • In the above embodiments, the formula (1) is used as the evaluation formula in the calculation of the contrast value, but any evaluation formula that calculates the contrast value using differences between the pixel values of pixels may be used. For example, the following equation may be used. By using this equation, the contrast can be evaluated stably over a wide spatial frequency range. Therefore, the contrast of an image can be evaluated stably regardless of the frequency components contained in the image, that is, regardless of the magnification of the sample or the optical system.
  • Further, instead of being excluded, the pixel data of saturated pixels may be processed and then used in the calculation of the contrast value. That is, the presence or absence of saturated pixels is determined for each line, and if saturated pixels are present, an approximate curve of the pixel values is calculated from the pixel data of the pixels around the saturated pixels in that line. Then, the pixel value that the saturated pixel data would have if there were no gradation limitation due to the bit depth of the image is estimated based on the calculated approximate curve. In this way, by processing the pixel values included in the image data before using them in the contrast calculation rather than using them as they are, a decrease in the accuracy of the contrast evaluation due to saturation can be suppressed.
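A sketch of this per-line estimation, assuming 8-bit data and a quadratic approximate curve fitted to the unsaturated pixels around each saturated pixel (the window size and polynomial degree are illustrative choices not specified in the text):

```python
import numpy as np

def estimate_saturated_line(line, saturation=255, window=3, degree=2):
    """For one image line, replace saturated pixel values with values
    extrapolated from a polynomial fitted to the unsaturated pixels
    around each saturated pixel (the 'approximate curve' of the text)."""
    line = line.astype(np.float64)       # work on a float copy
    saturated = line >= saturation
    if not saturated.any():
        return line
    x = np.arange(line.size)
    for idx in np.flatnonzero(saturated):
        # Unsaturated neighbours within `window` pixels of the saturated pixel.
        near = (np.abs(x - idx) <= window) & ~saturated
        if near.sum() <= degree:
            continue  # not enough support for a fit; leave the value as-is
        coeffs = np.polyfit(x[near], line[near], degree)
        line[idx] = np.polyval(coeffs, idx)
    return line
```

Applied to a clipped quadratic intensity peak, the fit recovers the value the pixel would have had without the bit-depth limitation.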
  • 1 Microscope system
  • 10 Microscope control device
  • 11 Light source control device
  • 12 Zoom control device
  • 13 Focusing control device
  • 14 Correction ring control device
  • 20 Arithmetic device
  • 21 Processor
  • 22 Memory
  • 26 Portable recording medium
  • 30 Display device
  • 40 Keyboard
  • 50 Correction ring operation device
  • 60 Focusing operation device
  • 100 Microscope
  • 101 Laser
  • 102 Scanning unit
  • Light detector
  • 108 A/D converter
  • 109 Focusing device
  • 110 Objective lens
  • 111 Correction ring

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Microscopes, Condensers (AREA)

Abstract

A microscope system 1 is provided with a microscope device and a computing device 20. The microscope device has a correction collar 111 and acquires a plurality of pieces of image data by acquiring image data in each of a plurality of states having different settings of the correction collar 111. The computing device 20 specifies the setting of the correction collar 111 at which spherical aberration is corrected on the basis of a plurality of contrast values including the respective contrast values of the plurality of pieces of image data. The computing device 20 specifies, as a first position, the position of pixel data having saturated pixel values included in the plurality of pieces of image data, and computes each of the plurality of contrast values excluding the pixel data at the first position included in each of the plurality of pieces of image data.

Description

Microscope system, setting search method, and program
The present disclosure relates to a microscope system, a setting search method for searching for a setting at which spherical aberration is corrected, and a program.
Conventionally, the correction ring of a microscope system has been used as a means for correcting spherical aberration caused by the thickness of a cover glass. In recent years, with the development of methods for observing deep parts of samples (for example, biological samples), the correction ring has also come to be used to correct spherical aberration that changes according to the depth of the observation target surface. Such techniques are described, for example, in Patent Documents 1 to 3.
The amount of spherical aberration generated inside a sample depends on the refractive index distribution of the sample. Therefore, if the position of the correction ring at which the spherical aberration is corrected is known, the refractive index of the sample can be calculated back from it. Such techniques are described, for example, in Patent Documents 2 and 3.
Whether or not spherical aberration is corrected can be determined based on an evaluation value obtained by evaluating the contrast of an image (hereinafter referred to as a contrast value), as described in Patent Documents 1 to 3. This is because, in a state where the spherical aberration is corrected, an image with higher contrast is obtained than in a state where it is not corrected.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2014-160213
Patent Document 2: Japanese Unexamined Patent Application Publication No. 2017-026664
Patent Document 3: Japanese Unexamined Patent Application Publication No. 2017-026665
Incidentally, in order to accurately determine whether or not spherical aberration is corrected, it is desirable to calculate the contrast value using an image with a high S/N ratio. In an image with a low S/N ratio, the influence of noise unrelated to the amount of spherical aberration becomes relatively large, making it difficult to accurately grasp the change in contrast caused by spherical aberration. An image with a high S/N ratio can be obtained, for example, by increasing the intensity of the illumination light so that the signal component of the image becomes large relative to the noise component.
However, if the intensity of the illumination light becomes too strong, the proportion of saturated pixels contained in the image increases. As the proportion of saturated pixels increases, the contrast value may be calculated lower than it actually is, and as a result, an inaccurate position may be recognized as the position of the correction ring at which spherical aberration is corrected. Furthermore, even if the intensity of the illumination light is adjusted before the optimum position of the correction ring has been determined, changing the setting of the correction ring in the course of the adjustment changes the brightness further, and as a result the proportion of saturated pixels may increase. In other words, the same problem can occur even if the intensity of the illumination light is properly adjusted before the correction ring adjustment is started.
The correction ring has been described above as an example, but the same technical problem can arise with any correction device that corrects spherical aberration, not only the correction ring.
In view of the above circumstances, one object of the present disclosure is to provide a technique for accurately specifying a setting at which spherical aberration is corrected even when saturated pixels occur.
A microscope system according to one aspect of the present invention includes: a microscope device that has a spherical aberration correction device and acquires a plurality of image data by acquiring image data in each of a plurality of states in which the settings of the spherical aberration correction device differ; and an arithmetic device that specifies a setting of the spherical aberration correction device at which spherical aberration is corrected based on a plurality of contrast values including the contrast value of each of the plurality of image data. The arithmetic device specifies, as a first position, the position of pixel data whose pixel value is saturated among the plurality of image data, and calculates each of the plurality of contrast values excluding the pixel data at the first position included in each of the plurality of image data.
A setting search method according to one embodiment of the present invention is a setting search method for searching for a setting at which spherical aberration is corrected, the method including: acquiring a plurality of image data by acquiring image data in each of a plurality of states in which the settings of a spherical aberration correction device differ; specifying, as a first position, the position of pixel data whose pixel value is saturated among the plurality of image data; calculating the contrast value of each of the plurality of image data excluding the pixel data at the first position included in each of the plurality of image data; and specifying a setting of the spherical aberration correction device at which spherical aberration is corrected based on a plurality of contrast values including the contrast value of each of the plurality of image data.
A program according to one embodiment of the present invention causes a computer to execute processing of: specifying, as a first position, the position of pixel data whose pixel value is saturated among a plurality of image data acquired with different settings of a spherical aberration correction device; calculating the contrast value of each of the plurality of image data excluding the pixel data at the first position included in each of the plurality of image data; and specifying a setting of the spherical aberration correction device at which spherical aberration is corrected based on a plurality of contrast values including the contrast value of each of the plurality of image data.
According to the present disclosure, it is possible to accurately specify a setting at which spherical aberration is corrected even when saturated pixels occur.
FIG. 1 is a diagram illustrating the configuration of the microscope system according to the first embodiment.
FIG. 2 is a diagram illustrating the configuration of the arithmetic unit shown in FIG. 1.
FIG. 3 is a diagram illustrating the configuration of the microscope apparatus shown in FIG. 1.
FIG. 4 is a flowchart of the correction ring setting process according to the first embodiment.
FIG. 5 is a flowchart of the saturated pixel position specifying process.
FIG. 6 is a flowchart of the contrast calculation process.
FIG. 7 is a diagram for explaining a conventional method of calculating a contrast value.
FIG. 8 is a diagram for explaining an example of the method of calculating a contrast value in the first embodiment.
FIG. 9 is a diagram for explaining another example of the method of calculating a contrast value in the first embodiment.
FIG. 10 is a flowchart of the correction ring setting process according to the second embodiment.
FIG. 11 is a flowchart of a modified example of the correction ring setting process shown in FIG. 10.
FIG. 12 is a flowchart of the relationship calculation process according to the third embodiment.
FIG. 13 is a diagram for explaining an example of a method of calculating the relationship between the setting of the correction ring and the observation depth.
FIG. 14 is a flowchart of the Z-Series imaging process according to the third embodiment.
FIG. 15 is a flowchart of the refractive index display process according to the fourth embodiment.
FIG. 16 is a diagram for explaining an example of a method of displaying the refractive index.
FIG. 17 is another diagram for explaining an example of a method of displaying the refractive index.
FIG. 18 is a diagram for explaining still another example of the method of calculating a contrast value.
[First Embodiment]
FIG. 1 is a diagram illustrating the configuration of the microscope system 1 according to the present embodiment. FIG. 2 is a diagram illustrating the configuration of the arithmetic unit 20 shown in FIG. 1. FIG. 3 is a diagram illustrating the configuration of the microscope 100 shown in FIG. 1.
The microscope system 1 shown in FIG. 1 includes a microscope 100, a microscope control device 10, an arithmetic unit 20, a display device 30, and a plurality of input devices (a keyboard 40, a correction ring operation device 50, and a focusing operation device 60) for inputting instructions to the arithmetic unit 20. Hereinafter, the microscope 100 and the microscope control device 10 are collectively referred to as the microscope device.
The microscope control device 10 is a device that controls the operation of the microscope 100 according to instructions from the arithmetic unit 20, and generates control signals for controlling the operation of various motorized parts of the microscope 100. It also generates image data based on signals from the microscope 100. The microscope control device 10 includes a light source control device 11 that controls the output of the light source, a zoom control device 12 that controls the zoom magnification, a focusing control device 13 that controls the position of the observation target surface in the optical axis direction (hereinafter simply referred to as the position of the observation target surface), and a correction ring control device 14 that controls the setting of the correction ring 111. The correction ring 111 is an example of a spherical aberration correction device that corrects spherical aberration, and the correction ring control device 14 is an example of a correction control device. The setting of the correction ring 111 is, for example, the rotation angle of the correction ring 111 with respect to a reference position (hereinafter simply referred to as the angle of the correction ring 111).
 The arithmetic unit 20 is a computer that performs various arithmetic processes. As shown in FIG. 2, for example, it includes a processor 21, a memory 22, an input I/F device 23, an output I/F device 24, and a portable recording medium drive device 25 into which a portable recording medium 26 is inserted, all interconnected by a bus 27. Note that FIG. 2 shows one example of the configuration of the arithmetic unit 20, and the arithmetic unit 20 is not limited to this configuration.
 The processor 21 includes one or more processors, which may include, for example, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), and the like, and may also include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and the like. The processor 21 may perform arithmetic processing by executing, for example, a predetermined software program.
 The memory 22 includes a non-transitory computer-readable medium storing the software program executed by the processor 21. The memory 22 may include, for example, one or more semiconductor memories of any kind, and may further include one or more other storage devices. The semiconductor memory includes, for example, volatile memory such as RAM (Random Access Memory) and non-volatile memory such as ROM (Read Only Memory), programmable ROM, and flash memory. The RAM may include, for example, DRAM (Dynamic Random Access Memory), SRAM (Static Random Access Memory), and the like. The other storage devices may include, for example, a magnetic storage device including a magnetic disk, and an optical storage device including an optical disc.
 The input I/F device 23 receives signals from the keyboard 40, the correction ring operation device 50, the focusing operation device 60, and the display device 30. The input I/F device 23 also receives signals from the microscope 100. The output I/F device 24 outputs signals to the display device 30 and the microscope control device 10. The portable recording medium drive device 25 accommodates the portable recording medium 26.
 The arithmetic unit 20 operates as various means when the processor 21 reads a program stored in the memory 22 or on the portable recording medium 26 into the memory 22 and executes it. For example, the arithmetic unit 20 operates as means for calculating a contrast value of image data (contrast calculation means), means for calculating the setting of the correction ring 111 at which spherical aberration is corrected (target value calculation means), means for calculating the refractive index of the sample S (refractive index calculation means), and means for controlling the display device 30 (display control means).
 The display device 30 is, for example, a liquid crystal display device, an organic EL display device, or a CRT display device. The display device 30 may include a touch panel sensor, in which case it also functions as an input device.
 The correction ring operation device 50 is an input means for specifying the setting of the correction ring 111. When the user specifies a setting of the correction ring 111 with the correction ring operation device 50, the correction ring control device 14 changes the setting of the correction ring 111 to the specified setting.
 The focusing operation device 60 is an input means for instructing a change in the position of the observation target surface (that is, the observation depth). When the user instructs a change in the position of the observation target surface using the focusing operation device 60, the focusing control device 13 moves the focusing device 109 in the optical axis direction to change the position of the observation target surface.
 The microscope 100 is, for example, a two-photon excitation microscope. The sample S is, for example, a biological sample such as a mouse brain, but is not limited to biological samples. As shown in FIG. 3, the microscope 100 includes, on the illumination optical path, a laser 101, a scanning unit 102, a pupil projection optical system 103, a mirror 104, a dichroic mirror 105, and an objective lens 110.
 The laser 101 is, for example, an ultrashort-pulse laser that oscillates laser light in the near-infrared region. The output of the laser 101 is controlled by the light source control device 11. That is, the light source control device 11 is a laser control device that controls the output of the laser light with which the sample is irradiated.
 The scanning unit 102 is a scanning means for two-dimensionally scanning the sample S with the laser light, and includes, for example, a galvanometer scanner and a resonant scanner. The zoom magnification changes as the scanning range of the scanning unit 102 changes. The scanning range of the scanning unit 102 is controlled by the zoom control device 12.
 The pupil projection optical system 103 is an optical system that projects an image of the scanning unit 102 onto the pupil position of the objective lens 110. The dichroic mirror 105 is a light separation means that separates the excitation light (laser light) from the detection light (fluorescence) from the sample S according to wavelength.
 The objective lens 110 is a dry or immersion objective lens provided with the correction ring 111, and is attached to the focusing device 109. The focusing device 109 is a means for moving the objective lens 110 in the direction of its optical axis, and the movement of the focusing device 109 (that is, the movement of the objective lens 110) is controlled by the focusing control device 13.
 The correction ring 111 is a correction device that corrects spherical aberration by moving some of the lenses constituting the objective lens 110 in the optical axis direction according to its setting. The setting of the correction ring 111 is changed by the correction ring control device 14 (correction device control device). The setting of the correction ring 111 can also be changed manually by operating the correction ring 111 directly.
 The microscope 100 further includes a pupil projection optical system 106 and a photodetector 107 on the detection optical path (the reflection optical path of the dichroic mirror 105). The signal output from the photodetector 107 is output to an A/D converter 108.
 The pupil projection optical system 106 is an optical system that projects an image of the pupil of the objective lens 110 onto the photodetector 107. The photodetector 107 is, for example, a photomultiplier tube (PMT), and outputs an analog signal corresponding to the amount of incident fluorescence. The A/D converter 108 converts the analog signal from the photodetector 107 into a digital signal (luminance signal) and outputs it to the microscope control device 10. The sensitivity of the photodetector 107 is controlled by a detection sensitivity control device (not shown). The detection sensitivity is controlled, for example, by adjusting the voltage applied to the photodetector 107, adjusting the amplification factor of the analog signal output from the photodetector 107, adjusting the amplification factor at the digital signal stage, or a combination of these.
 In the microscope system 1 configured as described above, the microscope 100 uses the scanning unit 102 to scan the sample S with the laser light in a direction orthogonal to the optical axis of the objective lens 110, and detects the fluorescence from each position of the sample S with the photodetector 107. The microscope control device 10 generates image data based on the analog signal sampled by the A/D converter 108 in synchronization with the scanning timing of the scanning unit 102, and outputs the image data to the arithmetic unit 20. That is, the image data is acquired by the microscope apparatus, which includes the microscope 100 and the microscope control device 10.
 FIG. 4 is a flowchart of the correction ring setting process according to the first embodiment. FIG. 5 is a flowchart of the saturated pixel position identification process. FIG. 6 is a flowchart of the contrast calculation process. The correction ring setting process performed by the microscope system 1 to correct the spherical aberration occurring at the observation target surface will now be described with reference to FIGS. 4 to 6. The correction ring setting process shown in FIG. 4 is an example of a setting search method for searching for a setting at which spherical aberration is corrected, and is started, for example, when the user operates the focusing operation device 60 to specify the position of the observation target surface.
 First, the microscope system 1 accepts the designation of a range within the sample S to be the target of contrast evaluation (hereinafter referred to as the region of interest) (step S1). The observer designates the region of interest, for example, by using an input device such as the keyboard 40 while viewing a live image of the sample S displayed on the display device 30, so that the region includes a portion to be observed more clearly (for example, a portion of the sample S having a characteristic shape). When the contrast is evaluated over the entire image, step S1 may be omitted.
 Next, the microscope system 1 changes the setting of the correction ring 111 to an initial setting (step S2). Here, the arithmetic unit 20 controls the correction ring control device 14 to set the angle of the correction ring 111 to, for example, the angle at one end of its movable range.
 Thereafter, the microscope system 1 acquires image data (step S3) and determines whether image data has been acquired at all of the predetermined settings of the correction ring 111 (step S4). When it determines that image data has not yet been acquired at all settings of the correction ring 111 (NO in step S4), the microscope system 1 changes the setting of the correction ring 111 (step S5) and returns to step S3 to acquire image data again. In this way, the microscope system 1 repeats the acquisition of image data (step S3) and the change of the setting of the correction ring 111 (step S5) until image data has been acquired at all of the predetermined settings of the correction ring 111 (YES in step S4). Here, the arithmetic unit 20 controls the correction ring control device 14 to move the angle of the correction ring 111 from the initial position at one end of the movable range to the other end in steps of a predetermined angle. During this series of control operations of the correction ring control device 14, the microscope control device 10 acquires image data at each angle of the correction ring 111 and outputs it to the arithmetic unit 20, so that the arithmetic unit 20 acquires a plurality of image data for the same observation target surface. That is, in steps S3 to S5, the microscope apparatus acquires a plurality of image data by capturing image data in each of a plurality of states with different settings of the correction ring 111, and outputs them to the arithmetic unit 20. The arithmetic unit 20 stores the plurality of image data acquired from the microscope apparatus in the memory 22.
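The acquisition loop of steps S2 to S5 can be sketched as follows. This is only a sketch: set_correction_ring_angle and acquire_image are placeholders for the correction ring control device and microscope control device interfaces, which the text does not specify.

```python
def sweep_correction_ring(angles, set_correction_ring_angle, acquire_image):
    """Steps S2-S5: set each predetermined correction ring angle in turn,
    acquire an image at that angle, and keep all images for later scoring."""
    images = []
    for angle in angles:                 # step S2 (first angle) / step S5 (rest)
        set_correction_ring_angle(angle)
        images.append(acquire_image())   # step S3
    return images                        # step S4: all settings covered
```

The returned list plays the role of the plurality of image data stored in the memory 22.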
 When the plurality of image data have been acquired, the microscope system 1 identifies the positions of the saturated pixels contained in the image data (step S6). Here, the arithmetic unit 20 starts, for example, the saturated pixel position identification process shown in FIG. 5, and identifies, as first positions, the positions of pixel data whose pixel value is saturated in the plurality of image data.
 Specifically, the arithmetic unit 20 reads one image data from the plurality of image data stored in the memory 22 (step S11) and determines whether the read image data contains pixel data with a saturated pixel value (step S12). If pixel data with a saturated pixel value is contained, the position of the pixel of that pixel data (hereinafter referred to as a saturated pixel) is stored as a first position (step S13). The arithmetic unit 20 performs the processing of steps S11 to S13 on all of the plurality of image data stored in the memory 22 (YES in step S14), and then ends the saturated pixel position identification process shown in FIG. 5.
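A minimal sketch of steps S11 to S14, assuming the image data are held as 8-bit NumPy arrays so that a pixel counts as saturated when its value reaches 255:

```python
import numpy as np

def find_saturated_positions(images, sat_value=255):
    """Return the set of (row, col) first positions: pixels whose value is
    saturated in at least one of the acquired images (steps S11-S14)."""
    first_positions = set()
    for img in images:                             # step S11: read one image
        rows, cols = np.nonzero(img >= sat_value)  # step S12: saturated pixels?
        first_positions.update(zip(rows.tolist(), cols.tolist()))  # step S13
    return first_positions                         # step S14: all images done
```

The union over all images is what makes a first position apply to every image in the series, not only to the image in which the saturation occurred.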
 When the saturated pixel position identification process ends, the microscope system 1 calculates the contrast value of each of the plurality of image data (step S7). Here, the arithmetic unit 20 starts, for example, the contrast calculation process shown in FIG. 6 and calculates a plurality of contrast values from the plurality of image data.
 Specifically, the arithmetic unit 20 reads the first positions, that is, the positions of the saturated pixels, from the memory 22 (step S21). The arithmetic unit 20 then reads one image data from the plurality of image data stored in the memory 22 (step S22) and calculates the contrast value of the read image data using the image data and the first positions (step S23). The method of calculating the contrast value is described in detail later. The arithmetic unit 20 performs the processing of steps S22 and S23 on all of the plurality of image data stored in the memory 22 (YES in step S24), and then ends the contrast calculation process shown in FIG. 6.
 When the contrast calculation process ends and a plurality of contrast values, including the contrast value of each of the plurality of image data, have been calculated, the microscope system 1 identifies the setting at which spherical aberration is corrected as the target setting (step S8). Here, the arithmetic unit 20 identifies the setting of the correction ring 111 at which spherical aberration is corrected, that is, the angle of the correction ring 111, based on the plurality of contrast values calculated in step S7. More specifically, the arithmetic unit 20 may, for example, identify the maximum of the plurality of contrast values and take the angle of the correction ring 111 at which the image data having that maximum contrast value was acquired as the target setting. Alternatively, it may estimate the angle of the correction ring 111 at which the contrast value is maximized from the combinations of the plurality of contrast values and the corresponding angles of the correction ring 111, and take the estimated angle as the target setting.
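The two alternatives in step S8 can be sketched as follows. The function names are illustrative, and the parabolic fit is just one way of estimating the contrast-maximizing angle, which the text leaves open:

```python
import numpy as np

def target_angle_argmax(angles, contrasts):
    """Pick the angle whose acquired image had the largest contrast value."""
    return angles[int(np.argmax(contrasts))]

def target_angle_fit(angles, contrasts):
    """Estimate the contrast-maximizing angle by fitting a parabola to the
    (angle, contrast) pairs and taking the abscissa of its vertex."""
    a, b, _c = np.polyfit(angles, contrasts, 2)
    return -b / (2.0 * a)   # vertex of a*x^2 + b*x + c (a < 0 near a peak)
```

The fitted estimate can land between sampled angles, which is useful when the correction ring is stepped coarsely.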
 Finally, the microscope system 1 changes the setting of the correction ring 111 to the target setting (step S9) and ends the correction ring setting process. Here, the correction ring control device 14 changes the setting of the correction ring 111 to the setting identified in step S8, in accordance with the instruction from the arithmetic unit 20. The spherical aberration is thereby corrected, so the sample S can be observed with good image quality.
 FIG. 7 is a diagram for explaining a conventional method of calculating a contrast value. FIG. 8 is a diagram for explaining an example of the method of calculating a contrast value in the present embodiment. The contrast calculation method performed by the microscope system 1 will now be described with reference to FIGS. 7 and 8, focusing on its differences from the conventional contrast calculation method.
 One known type of evaluation formula evaluates the contrast of an image based on differences in pixel value between pixels. A specific example is the following formula, which calculates the contrast value by summing, over the entire image data, the squared difference between the pixel values of two pixels offset from each other by n pixels in the x direction. This formula was proposed by J. F. Brenner et al. and is called the Brenner gradient:

F_{Brenner} = \sum_{y=1}^{H} \sum_{x=1}^{W-n} \left\{ f(x+n, y) - f(x, y) \right\}^{2}
 Here, F_{Brenner} is the contrast value, x is a variable specifying a column of pixels in the image, and y is a variable specifying a row of pixels in the image. W is the number of pixels in the x direction (that is, the number of columns), and H is the number of pixels in the y direction (that is, the number of rows). f is the pixel value of the pixel data corresponding to the specified pixel. n is the shift amount, an integer (for example, 2) indicating the spacing between the pixels whose pixel value difference is calculated.
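As a minimal sketch, the Brenner gradient above can be transcribed directly into NumPy (0-based array indexing replaces the 1-based indexing of the formula; the function name is illustrative):

```python
import numpy as np

def brenner_gradient(img: np.ndarray, n: int = 2) -> float:
    """Sum of squared differences between pixel values spaced n columns
    apart, accumulated over every row of the image (Brenner gradient)."""
    img = img.astype(np.float64)     # avoid overflow for 8-bit input
    diff = img[:, n:] - img[:, :-n]  # f(x+n, y) - f(x, y) for all valid x, y
    return float(np.sum(diff ** 2))
```

A flat image scores 0, and sharper images of the same scene generally score higher, which is what makes the value usable for ranking correction ring settings.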
 Evaluation formulas that evaluate image contrast based on pixel-to-pixel differences in pixel value, such as the Brenner gradient, can capture contrast changes in the image in more detail than evaluation formulas that evaluate contrast from the maximum and minimum pixel values in the image data. They are therefore well suited to evaluating the correction state of spherical aberration.
 On the other hand, when images are acquired with a strong illumination intensity in order to raise the S/N ratio, the number of saturated pixels may increase as the spherical aberration is corrected and the image becomes brighter, as shown in FIG. 7, and the conventional contrast calculation method using the Brenner gradient may then identify the wrong target setting. This is because, as the number of saturated pixels increases, the pixel value differences between pixel pairs that include a saturated pixel are estimated as smaller than they actually are (for example, the difference between two saturated pixels becomes 0), and as a result the calculated contrast value falls below the actual contrast value.
 FIG. 7 schematically depicts the pixel data contained in image data acquired with the angle θ of the correction ring 111 at 0°, 10°, and 20°. In FIG. 7, for simplicity of explanation, each image consists of pixels arranged in one row of six columns, but the numbers of rows and columns of the pixels constituting an image are not limited to this example. In this example, comparing the pixel data across the image data confirms that the pixel values tend to increase as the angle θ increases, so the spherical aberration is considered to be best corrected when the angle θ is 20°. However, the contrast value calculated at the angle θ = 10° is larger than that at θ = 20°, so the arithmetic unit 20 would mistakenly identify θ = 10° as the target setting.
 Thus, when the brightness is adjusted to raise the S/N ratio so that the contrast changes accompanying variations in spherical aberration can be detected accurately, the conventional calculation method can, paradoxically, end up identifying the wrong target setting.
 The present application therefore proposes a technique that solves the above problem by exploiting the fact that excluding some of the pixel data contained in the image data does not significantly affect the brightness relationship between the images being compared.
 Specifically, in step S23 of FIG. 6, the arithmetic unit 20 excludes the pixel data at the first positions contained in the image data read in step S22 from the calculation of the contrast value of that image data. That is, the contrast value of the image data is calculated with the pixel data at the first positions excluded. More specifically, as shown in FIG. 8, when calculating the contrast value of each of the plurality of image data, the positions of pixel data whose pixel value is saturated in at least one of the plurality of image data are identified (the first positions), and the pixel data corresponding to a first position is excluded from the contrast value calculation of every image data, regardless of whether the pixel value of that pixel data is itself saturated. In that it uses an evaluation formula that evaluates the contrast of an image based on pixel-to-pixel differences in pixel value, the contrast calculation method performed by the microscope system 1 is the same as the conventional contrast calculation method.
 This is explained in more detail with reference to FIG. 8. As in FIG. 7, FIG. 8 schematically depicts the pixel data contained in image data acquired with the angle θ of the correction ring 111 at 0°, 10°, and 20°. In this example, the image data is 8-bit image data, and the pixel values of the third and fourth pixels from the left are saturated in the image acquired at the angle θ = 20°. Therefore, in step S23, the arithmetic unit 20 calculates the contrast value of every image data with the third and fourth pixel data from the left excluded. More specifically, for all of the images (those acquired at the angles θ = 0°, 10°, and 20°), the pixel value differences between pixel pairs that include at least one of the third and fourth pixels from the left (specifically, the second and third, the third and fourth, and the fourth and fifth pixels from the left) are excluded from the summation used to calculate the contrast value.
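A sketch of this modified calculation, with hypothetical pixel values standing in for those of FIG. 8: n = 1 adjacent-pixel differences are used to match the second-and-third, third-and-fourth, fourth-and-fifth pairs of the example, and the excluded columns are given 0-based.

```python
import numpy as np

def masked_brenner(img, excluded_cols, n=1):
    """Brenner-type contrast value in which every pixel-pair difference
    involving an excluded (first-position) column is dropped from the
    summation, whether or not the pixel is saturated in this image."""
    img = np.asarray(img, dtype=np.float64)
    total = 0.0
    h, w = img.shape
    for y in range(h):
        for x in range(w - n):
            if x in excluded_cols or x + n in excluded_cols:
                continue                        # pair touches a first position
            total += (img[y, x + n] - img[y, x]) ** 2
    return total
```

Applying the same excluded_cols to all three images (θ = 0°, 10°, 20°) is what preserves their relative contrast ranking and makes the comparison valid.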
 Excluding these pixel pairs eliminates the inaccuracy in the pixel-to-pixel differences that arises from including saturated pixels. Furthermore, even when the contrast value is calculated with some pixels ignored, the relative contrast relationship among the plurality of images is generally preserved as long as the same pixels are ignored equally in each of them, as the relationship among the contrast values in FIG. 8 shows. Therefore, even when saturated pixels are present, the contrast of the plurality of images can be compared and evaluated correctly. According to the microscope system 1, the setting at which spherical aberration is corrected can thus be identified accurately even when saturated pixels occur.
 Moreover, because the microscope system 1 can correct spherical aberration appropriately even when saturated pixels occur, brightness adjustments performed before starting the target-setting search process shown in FIG. 4, such as adjusting the output of the light source and the sensitivity of the detector, become easier. That is, the user can adjust the brightness to obtain a good S/N ratio without worrying excessively about the occurrence of saturated pixels. In addition, even when changing the setting of the correction ring produces new saturated pixels in the course of adjusting the spherical aberration, the setting at which the spherical aberration is corrected can still be identified accurately.
 The above describes an example in which the positions of saturated pixels are identified as the first positions and the contrast value of each image data is calculated with the pixel data at the first positions excluded, but the arithmetic unit 20 may exclude further pixel data from the calculation. Specifically, the arithmetic unit 20 may identify, as second positions, the positions of pixel data whose pixel value is below a threshold in the plurality of image data acquired for the same observation target surface, and calculate each of the plurality of contrast values with the pixel data at both the first positions and the second positions contained in each of the plurality of image data excluded. The same applies to the subsequent embodiments. This threshold is desirably determined with reference to, for example, the pixel value corresponding to the signal output by the photodetector when no light is incident (hereinafter referred to as the background luminance).
 Pixel data having a pixel value below a threshold determined with reference to the background luminance is assumed to correspond to a portion of the image in which nothing appears. Therefore, by excluding such pixel data from the contrast calculation, the contrast value can be calculated while suppressing the influence of background noise. Accordingly, identifying both the first positions and the second positions and excluding the pixel data at those positions from the contrast calculation makes it possible to identify the setting at which the spherical aberration is corrected with even higher accuracy. Described in detail with reference to FIG. 9, at the correction ring position θ = 0°, the luminance value of the sixth pixel from the left (the rightmost pixel) is below the background luminance (here, a background luminance value of 10). Therefore, the difference between the fifth and sixth pixel values from the left is excluded from the summation for the contrast calculation in the images acquired at all correction ring positions (θ = 0°, θ = 10°, θ = 20°).
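The exclusion described above could be sketched as follows. This is an illustrative sketch, not the claimed implementation; the adjacent-difference evaluation, the saturation level of 255, and the background threshold of 10 are assumptions for the example.

```python
def contrast_excluding(images, sat_value=255, bg_threshold=10):
    """Compute a contrast value for each image (a list of scan lines),
    excluding any adjacent-pixel difference that involves a pixel that is
    saturated (a first position) or below the background threshold
    (a second position) in ANY image of the same observation surface."""
    excluded = set()
    for img in images:
        for y, line in enumerate(img):
            for x, v in enumerate(line):
                if v >= sat_value or v < bg_threshold:
                    excluded.add((x, y))
    contrasts = []
    for img in images:
        total = 0
        for y, line in enumerate(img):
            for x in range(len(line) - 1):
                if (x, y) in excluded or (x + 1, y) in excluded:
                    continue  # skip differences touching excluded pixels
                total += (line[x + 1] - line[x]) ** 2
        contrasts.append(total)
    return contrasts
```

Because the excluded positions are collected over all images of the same observation target surface, every correction ring setting is evaluated on the same set of pixel pairs, as in the FIG. 9 example.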
[Second Embodiment]
 FIG. 10 is a flowchart of the correction ring setting process according to the present embodiment. The microscope system according to the present embodiment (hereinafter simply referred to as the microscope system) differs from the microscope system 1 in that the correction ring setting process shown in FIG. 10 is performed instead of that shown in FIG. 4. The other points are the same as in the microscope system 1. The correction ring setting process shown in FIG. 10 differs from that shown in FIG. 4 in that the amount of light is adjusted automatically during the correction ring setting process.
 Specifically, in steps S31 to S36, the microscope system first acquires a plurality of image data by acquiring image data in each of a plurality of states in which the setting of the correction ring 111 differs, and identifies the saturated pixel positions included in the plurality of image data as first positions. These processes are the same as steps S1 to S6 in FIG. 4.
 Next, the microscope system determines whether the proportion of saturated pixels is equal to or greater than a threshold TH1 (step S37). Here, the arithmetic unit 20 first reads the first positions stored in the memory 22. The arithmetic unit 20 then calculates the proportion of saturated pixels, that is, the ratio of the number of first positions to the number of pixels in the image (= number of first positions / number of pixels in the image). The arithmetic unit 20 then makes the above determination by comparing the calculated proportion with the threshold TH1. The threshold TH1 is, for example, 0.5%.
 When the determination in step S37 finds that the proportion of saturated pixels is equal to or greater than the threshold TH1, the microscope system reduces the amount of light emitted from the laser 101 (step S38) and resets the saturated pixel positions identified in step S36 (step S39). The microscope system then performs the processes of steps S32 to S37 again. This is because saturated pixels occurring at or above the threshold TH1 indicate that the illumination light is too strong. In step S38, the arithmetic unit 20 controls the light source control device 11, and the light source control device 11, following the instruction from the arithmetic unit 20, controls the output of the laser 101 according to the proportion of pixel data with saturated pixel values in the plurality of image data (the proportion of saturated pixels). More specifically, the light source control device 11 lowers the output of the laser 101 when the proportion of saturated pixels is equal to or greater than the threshold TH1. The amount by which the output of the laser 101 is lowered is not particularly limited; it is, for example, a predetermined fixed amount. Alternatively, an output of the laser 101 at which the proportion of saturated pixels would not exceed the threshold TH1 may be estimated, and the output of the laser 101 may be lowered to the estimated output. The reset process in step S39 is performed to prevent the saturated pixel positions identified before the light reduction from being confused with the saturated pixel positions under the current light setting.
 When the determination in step S37 finds that the proportion of saturated pixels is less than the threshold TH1, the microscope system calculates the contrast value of each of the plurality of image data (step S40) and identifies the setting at which the spherical aberration is corrected as the target setting (step S41). Finally, the microscope system changes the setting of the correction ring 111 to the target setting (step S42) and ends the correction ring setting process. The processes of steps S40 to S42 are the same as steps S7 to S9 in FIG. 4.
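The control flow of FIG. 10 (steps S31 to S42) could be sketched as follows. The function names passed in (`acquire_images`, `reduce_laser_output`, `pick_target_setting`) are placeholders for the hardware operations, not names from the disclosure, and TH1 = 0.5% and the saturation level of 255 are the example values above.

```python
def correction_ring_setting_process(acquire_images, reduce_laser_output,
                                    pick_target_setting, th1=0.005,
                                    sat_value=255):
    """Sketch of the FIG. 10 loop: re-acquire with reduced laser output
    until the saturated-pixel proportion drops below TH1, then pick the
    target setting from the contrast values."""
    while True:
        images = acquire_images()  # one image per correction ring setting
        saturated = set()
        n_pixels = 0
        for img in images:
            for y, line in enumerate(img):
                for x, v in enumerate(line):
                    if v >= sat_value:
                        saturated.add((x, y))
            n_pixels = sum(len(line) for line in img)  # pixels per image
        if n_pixels and len(saturated) / n_pixels >= th1:
            reduce_laser_output()   # step S38
            saturated.clear()       # step S39: reset saturated positions
            continue                # repeat steps S32 to S37
        return pick_target_setting(images, saturated)  # steps S40 to S41
```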
 The microscope system according to the present embodiment can also obtain the same effects as the microscope system 1. Note that, in the microscope system 1, if the proportion of pixels with saturated luminance is too high, few pixels are available for the contrast calculation, so the adjustment accuracy may decrease. In the present embodiment, when the output of the laser 101 is too high, the amount of light is automatically adjusted to an appropriate level. This method reduces the proportion of pixels with saturated luminance and increases the number of pixels available for the contrast calculation, so the adjustment accuracy can be improved even further than in the microscope system 1.
 FIG. 11 is a flowchart of a modified example of the correction ring setting process shown in FIG. 10. FIG. 10 shows an example in which the necessity of light adjustment is determined based on the proportion of saturated pixels after images have been acquired at all settings of the correction ring 111; however, the necessity of light adjustment may be determined at any timing. For example, as shown in FIG. 11, it may be determined each time image data is acquired.
 The correction ring setting process shown in FIG. 11 differs from that shown in FIG. 10 in that, each time image data is acquired, the saturated pixel positions are identified and it is determined whether the proportion of saturated pixels is equal to or greater than the threshold (steps S54 and S55).
 The processing content of each step is the same as that of the corresponding step of the correction ring setting process shown in FIG. 10, so a detailed description of each step is omitted.
 According to the correction ring setting process of the modified example, identifying the saturated pixel positions and determining whether the proportion of saturated pixels is equal to or greater than the threshold each time image data is acquired makes it possible to detect early that the amount of light needs to be adjusted. This suppresses the number of image acquisitions performed with an inappropriate light setting, and thus suppresses the adverse effect on the sample caused by image acquisition. In addition, since the light setting is optimized in a shorter time, the total processing time required for the setting process can be shortened.
 In the microscope systems of the present embodiment and its modified example, the output of the laser light source is controlled (reduced) in order to lower the proportion of saturated pixels. Instead of or in addition to this, the sensitivity of the photodetector may be controlled (lowered) by a detection sensitivity control device (not shown). In this case, in step S38 of the flowchart of FIG. 10 and step S56 of the flowchart of FIG. 11, "photodetector sensitivity reduction" may be performed instead of or in addition to "light reduction". That is, in the microscope system, when the proportion of saturated pixels exceeds the threshold, both the output of the laser light source and the sensitivity of the photodetector may be controlled, or only one of them may be controlled.
[Third Embodiment]
 The first and second embodiments show examples in which, when the user observes an observation target surface designated by operating the focusing operation device 60, a plurality of image data are acquired to identify the target setting on that observation target surface. That is, each time the observation target surface is changed, a plurality of image data are acquired to identify the target setting on that surface. In contrast, in the present embodiment, the relationship between the position of the observation target surface (observation depth) and the target setting is identified in advance, and thereafter, when the user observes a designated observation target surface, the target setting for that surface is identified from the relationship identified in advance. In other words, the target setting for an arbitrary observation target surface is identified without acquiring a plurality of image data each time the observation target surface is changed.
 FIG. 12 is a flowchart of the relationship calculation process according to the present embodiment. FIG. 13 is a diagram for explaining an example of a method of calculating the relationship between the setting of the correction ring and the observation depth. FIG. 14 is a flowchart of the Z-Series imaging process according to the present embodiment. Hereinafter, with reference to FIGS. 12 to 14, the process of calculating the relationship between the setting of the correction ring 111 and the observation depth for the sample S, and the process of performing Z-Series imaging using the calculated relationship, will be described.
 Z-Series imaging is an imaging method in which the acquisition of two-dimensional images is repeated while moving the observation target surface by a predetermined distance at a time in the depth direction of the sample S (that is, along the optical axis of the objective lens 110), and is used to obtain three-dimensional information about the sample. Z-Series imaging is also called Z-stack imaging.
 When the relationship calculation process shown in FIG. 12 is started by the user's instruction, the microscope system 1 first focuses on the interface between the sample S and the stage (step S71). Here, the arithmetic unit 20 controls the focusing control device 13, and the focusing control device 13 focuses on the interface between the sample S and the stage. This step can be performed by any known method. The observation depth (position of the observation target surface) at this time is denoted D0 and serves as the reference for the observation depth.
 The microscope system 1 then accepts the designation of an observation depth range (step S72). Here, the user inputs the depth range that may be observed as the observation depth range using an input device such as the keyboard 40, and the arithmetic unit 20 accepts the designation. At least both ends of the observation depth range need to be designated; in addition to both ends, arbitrary depths within the range may also be designated. That is, at least two observation depths are designated in step S72.
 Next, the microscope system 1 moves the observation target surface to the first observation depth (step S73). Here, the arithmetic unit 20 controls the focusing control device 13, and the focusing control device 13 moves the observation target surface to, for example, the observation depth D1, which is one end of the observation depth range designated in step S72.
 Further, in steps S74 to S81, the microscope system 1 identifies the setting of the correction ring 111 at which the spherical aberration on the observation target surface at the observation depth D1 is corrected. These processes are the same as steps S1 to S8 in FIG. 4.
 The microscope system 1 then determines whether the settings have been identified for all the observation depths designated in step S72 (step S82). If there is an observation depth for which the setting has not been identified (NO in step S82), the microscope system 1 moves the observation target surface to the next observation depth (step S83) and repeats the processes of steps S74 to S82 for the moved observation target surface. Here, the arithmetic unit 20 controls the focusing control device 13, and the focusing control device 13 moves the observation target surface to, for example, the observation depth D2, which is the other end of the observation depth range designated in step S72.
 When the settings have been identified for all the observation depths designated in step S72 (YES in step S82), the microscope system 1 calculates the relationship between the setting at which the spherical aberration is corrected and the observation depth (step S84). Here, the arithmetic unit 20 calculates this relationship using the angle θ1, which is the target setting identified for the observation depth D1, and the angle θ2, which is the target setting identified for the observation depth D2. Specifically, as shown in FIG. 13, the relationship indicated by the broken line is calculated by linearly interpolating between the point P1 specified by the observation depth D1 and the angle θ1 and the point P2 specified by the observation depth D2 and the angle θ2. Although an example of calculating the relationship by linear interpolation between two points is shown here, the relationship may be calculated from information at three or more points; in that case, a function approximation such as the least squares method may be used.
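The calculation of step S84 could be sketched as follows; the function name and the depth/angle units are illustrative assumptions. For two points the least-squares line coincides with the linear interpolation of FIG. 13, so one routine covers both cases.

```python
def fit_depth_to_angle(points):
    """Fit angle = a * depth + b from (depth, angle) pairs.
    Two points give exact linear interpolation; three or more points
    give an ordinary least-squares line."""
    n = len(points)
    sx = sum(d for d, _ in points)
    sy = sum(t for _, t in points)
    sxx = sum(d * d for d, _ in points)
    sxy = sum(d * t for d, t in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return lambda depth: a * depth + b
```

The returned function plays the role of the broken line in FIG. 13: it yields a target setting for any observation depth inside (or, by extrapolation, outside) the calibrated range.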
 Finally, the microscope system 1 stores the calculated relationship (step S85) and ends the relationship calculation process shown in FIG. 12. Here, the arithmetic unit 20 stores the calculated relationship in the memory 22.
 When the Z-Series imaging process shown in FIG. 14 is started for the sample S after the relationship calculation process is completed, the microscope system 1 first accepts the designation of the observation depth range to be captured by Z-Series imaging (step S91). Here, the user inputs, for example, the observation depth range using an input device such as the keyboard 40, and the arithmetic unit 20 accepts the designation. The arithmetic unit 20 further determines a plurality of observation depths at which image data are to be acquired within the observation depth range. The plurality of observation depths may, for example, be distributed at a predetermined interval, and the predetermined interval may be designated by the user together with the observation depth range in step S91.
 When the observation depth range has been designated and the plurality of observation depths targeted by the Z-Series imaging have been determined, the microscope system 1 moves the observation target surface to the first observation depth (step S92). Here, the arithmetic unit 20 controls the focusing control device 13, and the focusing control device 13 moves the observation target surface to, for example, one of the observation depths determined in step S91.
 Further, the microscope system 1 changes the setting of the correction ring 111 (step S93). Here, the arithmetic unit 20 first reads the relationship calculated in advance by the relationship calculation process shown in FIG. 12 from the memory 22 and calculates the target setting (the angle of the correction ring 111) corresponding to the observation depth of the observation target surface. The correction ring control device 14 then changes the setting of the correction ring 111 to the calculated target setting in accordance with the instruction from the arithmetic unit 20. As a result, the spherical aberration is corrected.
 The microscope system 1 then acquires image data (step S94). As a result, an image in which the spherical aberration is corrected can be obtained. Further, the microscope system 1 determines whether image data have been acquired at all the observation depths determined in step S91 (step S95); if not, it moves the observation target surface to the next observation depth (step S96). When the acquisition of image data has been completed at all the observation depths (YES in step S95), the microscope system 1 ends the Z-Series imaging process shown in FIG. 14.
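The Z-Series loop of steps S92 to S96 could be sketched as follows; the callables standing in for the focusing control device, the correction ring control device, and the detector are placeholders, not names from the disclosure.

```python
def z_series(depths, depth_to_angle, move_to_depth, set_correction_ring,
             acquire_image):
    """Sketch of the FIG. 14 loop: for each observation depth, set the
    correction ring from the pre-computed depth-to-angle relationship,
    then acquire the (aberration-corrected) image."""
    stack = []
    for depth in depths:
        move_to_depth(depth)                        # steps S92 / S96
        set_correction_ring(depth_to_angle(depth))  # step S93
        stack.append(acquire_image())               # step S94
    return stack
```

Note that no preliminary search images are acquired inside the loop: the only per-depth overhead is the lookup `depth_to_angle(depth)`, which is the point of calculating the relationship in advance.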
 The microscope system according to the present embodiment can also obtain the same effects as the microscope system 1. Further, in the present embodiment, since the relationship between the observation depth and the target setting is calculated in advance, it is not necessary to acquire a plurality of image data to search for the target setting each time the observation depth is changed. The relationship between the observation depth and the target setting is itself calculated by interpolation or function approximation from information at a relatively small number of observation depths. Accordingly, the number of preliminary acquisitions performed to search for the target setting can be reduced significantly. This suppresses damage to the sample S and makes it possible to finish the Z-Series imaging process in a relatively short time while satisfactorily correcting the spherical aberration.
[Fourth Embodiment]
 The third embodiment shows an example of calculating the relationship between the target setting and the observation depth in order to obtain a high-quality image in which the spherical aberration is corrected on an observation target surface at an arbitrary depth; however, the relationship between the target setting and the observation depth may be calculated for purposes other than image data acquisition. The present embodiment differs from the third embodiment in that the relationship between the target setting and the observation depth is used to display the refractive index of the sample S.
 FIG. 15 is a flowchart of the refractive index display process according to the present embodiment. FIGS. 16 and 17 are diagrams for explaining an example of a method of displaying the refractive index. Hereinafter, the process of displaying the refractive index using the relationship between the target setting and the observation depth will be described with reference to FIGS. 15 to 17.
 When the refractive index display process shown in FIG. 15 is started by the user's instruction after the relationship calculation process shown in FIG. 12 has been performed, the microscope system 1 first displays the relationship calculated by the relationship calculation process shown in FIG. 12 (step S101). Here, the arithmetic unit 20 causes the display device 30 to display, for example, the graph shown in FIG. 16. The graph shown in FIG. 16 is an example of the case where, in the relationship calculation process shown in FIG. 12, a plurality of image data were acquired at each of the observation depths D1, D2, and D3 and the target setting at each depth was calculated.
 The microscope system 1 then accepts the designation of the observation depth at which the refractive index should be displayed (step S102). Here, for example, as shown in FIG. 17, the user selects, using the cursor C or the like on the graph displayed on the display device 30, the observation depth corresponding to the portion of the sample S whose refractive index is to be known, whereby the arithmetic unit 20 detects the designated observation depth.
 When the observation depth is designated, the microscope system 1 calculates the refractive index of the sample S at the designated observation depth based on the information displayed on the display device 30, that is, the settings of the correction ring 111, identified for each observation depth, at which the spherical aberration is corrected. Specifically, the microscope system 1 first calculates the rate of change of the setting in the depth direction (step S103), and then calculates the refractive index of the sample S at the observation depth designated in step S102 (step S104).
 As described in Patent Documents 2 and 3, the rate of change of the spherical aberration amount on the observation target surface (= the amount of change of the spherical aberration amount per amount of change of the depth of the observation target surface) depends on the refractive index of the medium between the objective lens 110 and the sample S (for example, air or immersion liquid) and on the refractive index of the sample S at that observation target surface. Moreover, once the objective lens is determined, the relationship between the setting of the correction ring and the spherical aberration amount is known. In step S104, the arithmetic unit 20 calculates the rate of change of the spherical aberration amount from the rate of change of the setting calculated in step S103, and further calculates the refractive index of the sample S at the observation depth based on the rate of change of the spherical aberration amount and the refractive index of the medium.
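Step S103 could be sketched as a finite-difference slope on the piecewise-linear (depth, angle) relation. The conversion of that slope to a refractive index (step S104) depends on the objective-specific mapping described in Patent Documents 2 and 3, which is not reproduced in this text, so it is represented here only by a caller-supplied placeholder `slope_to_index`; all names and values are illustrative assumptions.

```python
def setting_change_rate(points, depth):
    """Step S103 sketch: estimate d(angle)/d(depth) at the designated
    depth from the piecewise-linear (depth, angle) relation."""
    pts = sorted(points)
    for (d0, t0), (d1, t1) in zip(pts, pts[1:]):
        if d0 <= depth <= d1:
            return (t1 - t0) / (d1 - d0)
    raise ValueError("depth outside the calibrated range")

def sample_refractive_index(points, depth, n_medium, slope_to_index):
    """Step S104 sketch. `slope_to_index` stands in for the known,
    objective-specific mapping from the spherical-aberration change
    rate (represented here by the setting change rate) and the medium
    index to the sample index."""
    return slope_to_index(setting_change_rate(points, depth), n_medium)
```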
 Having calculated the refractive index, the microscope system 1 displays it (step S105). Here, the display device 30 displays the refractive index calculated by the arithmetic unit 20, for example, as shown in FIG. 17. FIG. 17 shows the refractive index of the sample S at the observation depth designated by the user superimposed on the graph shown in FIG. 16.
 According to the microscope system of the present embodiment, the user can easily know the refractive index at an arbitrary surface of the sample S, regardless of the structure of the sample S.
 The embodiments described above are specific examples presented to facilitate understanding of the invention, and the embodiments of the present invention are not limited to them. The microscope system, the setting search method, and the program can be modified and changed in various ways without departing from the scope of the claims.
 For example, in the embodiments described above, expression (1) is used as the evaluation expression in the calculation of the contrast value; however, any evaluation expression that calculates the contrast value using differences between pixel values may be used, for example, the following expression.
Figure JPOXMLDOC01-appb-M000002
 By using expression (2), which uses a plurality of different shift amounts (for example, n = 1, 2, 3, 5, 10) instead of expression (1), the contrast can be evaluated stably over a wide spatial frequency range. Therefore, the contrast of the image can be evaluated stably regardless of the frequency components contained in the image, that is, regardless of the sample or the magnification of the optical system.
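The exact form of expression (2) is given only as an image placeholder above; one plausible form consistent with the description — squared pixel differences accumulated over several shift amounts n — could be sketched as follows. The shift set (1, 2, 3, 5, 10) is the example from the text; the summation detail is an assumption.

```python
def multi_shift_contrast(image, shifts=(1, 2, 3, 5, 10)):
    """Plausible multi-shift contrast: sum of squared horizontal pixel
    differences accumulated over several shift amounts n, so that both
    fine and coarse structures contribute to the evaluation."""
    total = 0
    for line in image:
        for n in shifts:
            for x in range(len(line) - n):
                total += (line[x + n] - line[x]) ** 2
    return total
```

Larger shifts respond to low spatial frequencies and small shifts to high ones, which is why combining them stabilizes the evaluation across samples and magnifications.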
 Further, in the embodiments described above, an example was shown in which the pixel data at the first positions are excluded from the calculation of the contrast value; however, the pixel data of saturated pixels may instead be processed and used in the calculation of the contrast value, for example, as shown in FIG. 18. That is, the presence or absence of saturated pixels is determined line by line, and when saturated pixels exist, an approximation curve of the pixel values is calculated from the pixel data of the pixels surrounding the saturated pixels in that line. The pixel values that the pixel data of the saturated pixels would have had without the gradation limit imposed by the bit depth of the image are then estimated based on the calculated approximation curve. By processing the pixel values contained in the image data in this way rather than using them as they are in the contrast calculation, it is possible to suppress the decrease in the accuracy of the contrast evaluation caused by saturation.
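The text does not specify the form of the approximation curve; one simple choice is a parabola through the nearest unsaturated neighbors of the saturated run, sketched below. The neighbor count and the assumption that two unsaturated pixels exist to the left and one to the right of each run are illustrative simplifications.

```python
def estimate_saturated(line, sat_value=255):
    """Replace each saturated run in a scan line with values from a
    parabola fitted through the two unsaturated neighbors on its left
    and the one on its right (Lagrange interpolation)."""
    out = list(line)
    i = 0
    while i < len(out):
        if out[i] < sat_value:
            i += 1
            continue
        j = i
        while j < len(out) and out[j] >= sat_value:
            j += 1
        # sample points around the run (assumed to exist in-bounds)
        xs = [i - 2, i - 1, j]
        ys = [line[i - 2], line[i - 1], line[j]]
        for x in range(i, j):
            v = 0.0
            for k in range(3):  # Lagrange basis polynomials
                term = ys[k]
                for m in range(3):
                    if m != k:
                        term *= (x - xs[m]) / (xs[k] - xs[m])
                v += term
            out[x] = v
        i = j
    return out
```

The estimated values can exceed the saturation level, which is the intended effect: they approximate the intensities that would have been recorded without the gradation limit.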
1   Microscope system
10  Microscope control device
11  Light source control device
12  Zoom control device
13  Focusing control device
14  Correction ring control device
20  Arithmetic device
21  Processor
22  Memory
26  Portable recording medium
30  Display device
40  Keyboard
50  Correction ring operation device
60  Focusing operation device
100 Microscope
101 Laser
102 Scanning unit
107 Photodetector
108 A/D converter
109 Focusing device
110 Objective lens
111 Correction ring

Claims (11)

  1.  A microscope system comprising:
     a microscope device that has a spherical aberration correction device and acquires a plurality of image data by acquiring image data in each of a plurality of states in which the setting of the spherical aberration correction device differs; and
     an arithmetic device that specifies a setting of the spherical aberration correction device at which spherical aberration is corrected, based on a plurality of contrast values including a contrast value of each of the plurality of image data,
     wherein the arithmetic device
      specifies, as a first position, a position of pixel data whose pixel value is saturated in the plurality of image data, and
      calculates each of the plurality of contrast values while excluding the pixel data at the first position included in each of the plurality of image data.
  2.  The microscope system according to claim 1, wherein
     the arithmetic device
      specifies, as a second position, a position of pixel data whose pixel value is below a threshold in the plurality of image data, and
      calculates each of the plurality of contrast values while excluding the pixel data at the first position and the pixel data at the second position included in each of the plurality of image data.
  3.  The microscope system according to claim 1 or claim 2, wherein
     the microscope device further comprises a correction control device that controls the setting of the spherical aberration correction device, and
     the correction control device changes the setting of the spherical aberration correction device to the setting specified by the arithmetic device.
  4.  The microscope system according to claim 1 or claim 2, wherein
     the arithmetic device calculates a refractive index of a sample based on the settings of the spherical aberration correction device, specified for each observation depth, at which the spherical aberration is corrected.
  5.  The microscope system according to claim 4, further comprising
     a display device,
     wherein the display device displays the refractive index of the sample calculated by the arithmetic device.
  6.  The microscope system according to claim 1 or claim 2, wherein
     the microscope device further comprises at least one of a first combination of a light source and a light source control device and a second combination of a photodetector and a detection sensitivity control device,
     when the first combination is provided, the light source control device controls the output of the light source according to the proportion of pixel data whose pixel value is saturated in the plurality of image data, and
     when the second combination is provided, the detection sensitivity control device controls the sensitivity of the photodetector according to the proportion of pixel data whose pixel value is saturated in the plurality of image data.
  7.  The microscope system according to claim 6, wherein
     when the first combination is provided and the proportion is equal to or greater than a threshold, the light source control device lowers the output of the light source, and
     when the second combination is provided and the proportion is equal to or greater than a threshold, the detection sensitivity control device lowers the sensitivity of the photodetector.
  8.  The microscope system according to claim 1 or claim 2, wherein
     the arithmetic device calculates the plurality of contrast values using an evaluation formula that evaluates the contrast of an image based on differences between pixel values of pixels.
  9.  The microscope system according to claim 1 or claim 2, wherein
     the microscope device further includes an objective lens, and
     the spherical aberration correction device is a correction ring that moves some of the lenses constituting the objective lens in the optical-axis direction.
  10.  A setting search method for searching for a setting at which spherical aberration is corrected, the method comprising:
      acquiring a plurality of image data by acquiring image data in each of a plurality of states in which the setting of a spherical aberration correction device differs;
      specifying, as a first position, a position of pixel data whose pixel value is saturated in the plurality of image data;
      calculating a contrast value of each of the plurality of image data while excluding the pixel data at the first position included in each of the plurality of image data; and
      specifying a setting of the spherical aberration correction device at which spherical aberration is corrected, based on a plurality of contrast values including the contrast value of each of the plurality of image data.
  11.  A program causing a computer to execute processing comprising:
      specifying, as a first position, a position of pixel data whose pixel value is saturated in a plurality of image data acquired in states in which the setting of a spherical aberration correction device differs;
      calculating a contrast value of each of the plurality of image data while excluding the pixel data at the first position included in each of the plurality of image data; and
      specifying a setting of the spherical aberration correction device at which spherical aberration is corrected, based on a plurality of contrast values including the contrast value of each of the plurality of image data.
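The setting search method of claims 10 and 11 can be sketched as follows. The acquisition callback, the use of a simple shift-1 difference as the evaluation formula, and the rule that the first position is any pixel saturated in at least one of the acquired images are assumptions made for this sketch, not details fixed by the claims.

```python
import numpy as np

def search_correction_setting(acquire, settings, sat_level=255.0):
    """Return the spherical-aberration-corrector setting with the highest
    contrast, excluding saturated pixel positions (the "first position").

    `acquire(setting)` is assumed to apply the setting and return a 2-D image.
    """
    images = [np.asarray(acquire(s), dtype=np.float64) for s in settings]
    # first position: any pixel saturated in at least one acquired image
    saturated = np.zeros(images[0].shape, dtype=bool)
    for img in images:
        saturated |= img >= sat_level
    valid = ~saturated

    def contrast(img):
        diff = img[:, 1:] - img[:, :-1]          # shift-1 pixel-value differences
        ok = valid[:, 1:] & valid[:, :-1]        # skip pairs touching the first position
        return float(np.sum(diff[ok] ** 2))

    scores = [contrast(img) for img in images]
    return settings[int(np.argmax(scores))]
```

Excluding the same saturated positions from every image keeps the contrast values comparable across settings, so the maximum of the score curve indicates the setting at which spherical aberration is best corrected.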
PCT/JP2020/005412 2020-02-12 2020-02-12 Microsope system, setting search method, and program WO2021161432A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021577774A JP7369801B2 (en) 2020-02-12 2020-02-12 Microscope system, setting search method, and program
PCT/JP2020/005412 WO2021161432A1 (en) 2020-02-12 2020-02-12 Microsope system, setting search method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/005412 WO2021161432A1 (en) 2020-02-12 2020-02-12 Microsope system, setting search method, and program

Publications (1)

Publication Number Publication Date
WO2021161432A1 true WO2021161432A1 (en) 2021-08-19

Family

ID=77291450

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/005412 WO2021161432A1 (en) 2020-02-12 2020-02-12 Microsope system, setting search method, and program

Country Status (2)

Country Link
JP (1) JP7369801B2 (en)
WO (1) WO2021161432A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008112059A (en) * 2006-10-31 2008-05-15 Olympus Corp Scanning type laser microscope, its adjustment method and program
JP2015105964A * 2013-11-28 2015-06-08 Olympus Corp Microscope system
JP2016114796A * 2014-12-15 2016-06-23 Olympus Corp Microscope system, function calculation method and program
US20170003489A1 * 2015-07-03 2017-01-05 Carl Zeiss Microscopy Gmbh Optics device with an optics module that comprises at least one optical element
JP2017026664A * 2015-07-16 2017-02-02 Olympus Corp Microscope system, calculation method and program


Also Published As

Publication number Publication date
JPWO2021161432A1 (en) 2021-08-19
JP7369801B2 (en) 2023-10-26

Similar Documents

Publication Publication Date Title
EP3035104B1 (en) Microscope system and setting value calculation method
JP6146265B2 (en) Microscope system and autofocus method
JP6555811B2 (en) Microscope system, identification method, and program
US8363099B2 (en) Microscope system and method of operation thereof
JP6137847B2 (en) IMAGING DEVICE AND IMAGING DEVICE CONTROL METHOD
JP5814709B2 (en) Time-lapse observation method and time-lapse observation apparatus used therefor
JP5911296B2 (en) Image processing apparatus, imaging apparatus, microscope system, image processing method, and image processing program
JP5374119B2 (en) Distance information acquisition device, imaging device, and program
JP6552041B2 (en) Microscope system, refractive index calculation method, and program
US7869706B2 (en) Shooting apparatus for a microscope
US20180338079A1 (en) Microscope system, control method, and computer readable medium
JP6434296B2 (en) Microscope system, set value calculation method, and program
JP6270388B2 (en) Imaging apparatus, microscope system, imaging method, and imaging program
US11215808B2 (en) Microscope parameter setting method and observation method recognizing the shape of a cell
WO2021161432A1 (en) Microsope system, setting search method, and program
JP6422761B2 (en) Microscope system and method for calculating relationship between Z position and set value of correction device
JP6423261B2 (en) Microscope system, function calculation method, and program
JP2019070737A (en) Observation device, focusing control method, and program
WO2020241868A1 (en) Adjustment method for optical device, adjustment assistance method, optical system, and optical device
US20120218460A1 (en) Automatic focusing method for an optical instrument for magnified viewing of an object
JP6312410B2 (en) Alignment apparatus, microscope system, alignment method, and alignment program
JP2010121955A (en) Height information acquisition device, height information acquisition method, and program
JP2015210396A (en) Aligment device, microscope system, alignment method and alignment program
JP6534294B2 (en) Imaging apparatus and method, and imaging control program
JP2020202748A (en) Photographing processing apparatus, control method of photographing processing apparatus, and photographing processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20919307

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021577774

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20919307

Country of ref document: EP

Kind code of ref document: A1