WO2016157458A1 - Measurement apparatus, measurement system, signal string processing method, and program - Google Patents

Measurement apparatus, measurement system, signal string processing method, and program

Info

Publication number
WO2016157458A1
Authority
WO
WIPO (PCT)
Prior art keywords
fluorescence
light
signal
signal sequence
pixel
Prior art date
Application number
PCT/JP2015/060280
Other languages
French (fr)
Japanese (ja)
Inventor
浜島 宗樹
Original Assignee
Nikon Corporation (株式会社ニコン)
Application filed by Nikon Corporation
Priority to PCT/JP2015/060280
Publication of WO2016157458A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/62 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N21/63 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light, optically excited
    • G01N21/64 Fluorescence; Phosphorescence

Definitions

  • the present invention relates to a measuring apparatus, a measuring system, a signal sequence processing method, and a program.
  • Patent Document 1 discloses a microscope apparatus including a dichroic mirror and an in-focus signal generator for providing an autofocus function (hereinafter referred to as “AF function”).
  • In such an apparatus, two fluorescences having different wavelengths have different amounts of chromatic aberration, so that focusing is required every time an image is acquired.
  • the time required for focusing using the AF function is increased, which greatly affects the throughput of image acquisition in the imaging apparatus.
  • According to one aspect, a measurement apparatus is provided that includes: a light source device for irradiating an irradiated object with first excitation light and second excitation light; a microlens array including a plurality of two-dimensionally arranged microlenses; a sensor in which light receiving elements are two-dimensionally arranged and which receives, through the microlens array, the first fluorescence emitted when the irradiated object is irradiated with the first excitation light and the second fluorescence emitted when the irradiated object is irradiated with the second excitation light; and a control unit that generates a first result from the signal sequence of first pixel signals corresponding to the focal point of the first fluorescence and generates a second result from the signal sequence of second pixel signals corresponding to the focal point of the second fluorescence.
  • According to another aspect, a measurement system is provided that includes the measurement apparatus described above, a reaction device that causes the irradiated object supported by a support member to react with a sample, and a transport device that transports the support member to the measurement apparatus.
  • According to another aspect, a signal sequence processing method is provided that includes disposing, on the stage, the support member on which a plurality of irradiated objects are arranged, and irradiating the irradiated object with the first excitation light and the second excitation light from the light source device.
  • According to another aspect, a program is provided for causing an information processing apparatus including at least a calculation unit and a storage unit to process a signal sequence of light received through a microlens array, wherein the signal sequence of the light is a set of pixel signals at a plurality of different focal positions, including pixel signals acquired during the time period in which the first fluorescence is expected to be emitted and pixel signals acquired during the time period in which the second fluorescence is expected to be emitted.
  • A diagram explaining the extraction process of pixel signals in the case where the microlens array is positioned at the imaging plane (Z0) of the objective lens.
  • A diagram explaining the extraction process of pixel signals in the case where the image plane of the objective lens is at a position (Zh1) shifted from the microlens array.
  • A diagram explaining the amount of chromatic aberration of each fluorescence. A diagram showing an example of the components of the control device. An example of the flowchart for measuring the irradiated object with the measurement apparatus according to the first embodiment. An example of the detailed contents of the flowchart of FIG. 10. Diagrams explaining the procedure for determining the reference focal position. An example of acquiring the signal sequence of pixel signals corresponding to the focal point of the first fluorescence. An example of acquiring the signal sequence of pixel signals corresponding to the focal point of the second fluorescence.
  • Other examples of the flowchart for measuring the irradiated object with the measurement apparatus.
  • A diagram explaining the procedure for determining the reference focal position. An example of acquiring a signal sequence of pixel signals corresponding to the focal point of the first fluorescence and a signal sequence of pixel signals corresponding to the focal point of the second fluorescence. Another example of the configuration of the measurement apparatus according to the second embodiment.
  • An example of the flowchart for measuring the irradiated object with the measurement apparatus according to the third embodiment.
  • A diagram explaining the procedure for determining the reference focal position.
  • A diagram explaining the relationship between the inclination of the irradiated object and the focal position.
  • Another example of three points for determining the reference focal position. A diagram showing the configuration of the measurement system according to the fifth embodiment.
  • an XYZ orthogonal coordinate system is set, and the positional relationship of each member will be described with reference to this XYZ orthogonal coordinate system.
  • the predetermined direction in the horizontal plane is the X-axis direction
  • the direction orthogonal to the X-axis direction in the horizontal plane is the Y-axis direction
  • the direction orthogonal to each of the X-axis direction and the Y-axis direction is the Z-axis direction.
  • the rotation (inclination) directions around the X axis, Y axis, and Z axis are the θX, θY, and θZ directions, respectively.
  • FIG. 1 is a schematic configuration diagram illustrating an example of a measurement apparatus.
  • the measuring device is an optical microscope.
  • the measuring device may be another optical device or an imaging device.
  • the measurement device 10 includes a measurement device main body 20 that observes an object to be irradiated (measurement target) 1, a control device 30 that controls the operation of the measurement device main body 20, and a display device 40 that is connected to the control device 30.
  • the control device 30 includes a computer system.
  • the computer system includes at least a processor, a memory, and a storage device.
  • the memory is a volatile memory
  • the storage device is a nonvolatile storage such as a hard disk.
  • a computer system may include input devices such as a keyboard and a pointing device (eg, a mouse).
  • the display device 40 includes a flat panel display such as a liquid crystal display.
  • the measuring device main body 20 includes a light source device 2, an optical system 3, a stage 4, and a sensor 5.
  • the measuring apparatus main body 20 includes a body (not shown). Each of the light source device 2, the optical system 3, the stage 4, and the sensor 5 is supported by the body.
  • the light source device 2 can emit light having a plurality of different wavelengths.
  • the light source device 2 can emit excitation light to generate fluorescence from the irradiated object 1.
  • the light source device 2 can emit a plurality of excitation light beams having different wavelengths for generating fluorescence from the irradiated object 1 and reference light for obtaining reflected light from the irradiated object 1.
  • The light source device 2 can emit light of wavelength λ1 as the first excitation light, light of wavelength λ2 as the second excitation light, and light of wavelength λR as the reference light.
  • The light source device 2 can selectively switch, based on a signal from the control device 30, among emitting the light of wavelength λ1 (first excitation light), the light of wavelength λ2 (second excitation light), and the light of wavelength λR (reference light).
  • The light source device 2 is capable of emitting all of the light of wavelength λ1 as the first excitation light, the light of wavelength λ2 as the second excitation light, and the light of wavelength λR as the reference light.
  • The wavelength λ1, the wavelength λ2, and the wavelength λR are different from one another.
  • the light source device 2 may emit third excitation light corresponding to the third fluorescence. Furthermore, the light source device 2 may emit the fourth excitation light corresponding to the fourth fluorescence.
  • a support member 60 in which a plurality of irradiated objects 1 are arranged is arranged on the stage 4.
  • the support member 60 is a plate.
  • the support member 60 is a plate-like member.
  • the irradiated object 1 is a biochip.
  • the biochip is sometimes called a microarray, a microarray chip, a biomolecule array, a biosensor, or the like.
  • FIG. 2 is an example of the support member 60 and the irradiated object 1.
  • the support member 60 is a glass plate. A plurality of biochips are fixed in a matrix on the glass plate. As an example, the biochip is bonded to the glass plate with an adhesive.
  • the biochip has a plurality of spots.
  • the biochip has a plurality of spots arranged in a matrix.
  • A biomolecule (probe) is fixed to each spot of the biochip.
  • different biomolecules are fixed to a plurality of spots on the biochip.
  • Each spot is set with an address so that the spot can be identified.
  • the address information is stored in the storage device of the control device 30, for example.
  • the irradiated object 1 emits fluorescence when irradiated with excitation light.
  • the irradiated object 1 generates fluorescence by irradiating the fluorescent dye bonded to the biomolecule of the irradiated object 1 with excitation light.
  • the irradiated object 1 generates the first fluorescence (wavelength: ⁇ 1 ′) when irradiated with the first excitation light (wavelength: ⁇ 1).
  • the irradiated object 1 generates second fluorescence (wavelength: ⁇ 2 ′) when irradiated with second excitation light (wavelength: ⁇ 2).
  • the wavelength ⁇ R of the reference light is not a wavelength for generating fluorescence from the irradiated object 1.
  • the wavelength ⁇ R of the reference light is not a wavelength that excites the fluorescent dye that is bound to the biomolecule of the irradiated object 1.
  • The wavelength λR of the reference light is different from both the wavelength λ1′ of the first fluorescence and the wavelength λ2′ of the second fluorescence.
  • The wavelength λR of the reference light is preferably between the wavelength λ1′ of the first fluorescence and the wavelength λ2′ of the second fluorescence. It is preferable that the wavelength λR of the reference light is close to both the first fluorescence and the second fluorescence but does not overlap them, so as not to affect the first fluorescence and the second fluorescence.
  • Stage 4 supports support member 60.
  • the stage 4 is movable while supporting the support member 60.
  • the stage 4 is movable in each of the X-axis direction, the Y-axis direction, and the Z-axis direction while supporting the support member 60.
  • the support member 60 is supported by the stage 4 so that the surface of the irradiated object 1 (the surface on which the biomolecule is fixed) faces the first objective lens 16 of the optical system 3.
  • the stage 4 and the control device 30 are connected by a control line 51.
  • the control device 30 can move the stage 4 that supports the support member 60 in each of the X-axis direction, the Y-axis direction, and the Z-axis direction.
  • After the measurement, the measurement apparatus 10 replaces the support member 60 with the next support member 60 and performs measurement sequentially.
  • the stage 4 may be configured by a stage that can rotate in the ⁇ X, ⁇ Y, and ⁇ Z directions while supporting the support member 60.
  • the optical system 3 includes an irradiation optical system 6, an imaging optical system 7, and a microlens array (MLA) 8.
  • the irradiation optical system 6 includes components for irradiating the irradiated body 1 with light emitted from the light source device 2.
  • the irradiation optical system 6 includes a first lens 11, a brightness stop (AS) 12, a field stop (FS) 13, a second lens 14, a filter block 15, and a first objective lens 16.
  • the light emitted from the light source device 2 passes through the first lens 11, the brightness stop 12, the field stop 13, and the second lens 14 and enters the filter block 15.
  • FIG. 3 is a schematic diagram showing an example of a filter block.
  • The filter block 15 includes a first filter (first wavelength selection unit) 17 on which the light from the light source device 2 is incident, a dichroic mirror 18 on which the light having passed through the first filter 17 is incident, and a second filter (second wavelength selection unit) 19 on which the light from the dichroic mirror 18 is incident.
  • the filter block 15 is a fluorescent filter block in which an excitation filter, a dichroic mirror, and an absorption filter are integrally formed.
  • the fluorescent filter block may be called a fluorescent cube, a fluorescent mirror unit, or a fluorescent filter set.
  • the irradiation optical system 6 may include a second filter block (not shown) different from the filter block 15.
  • the second filter block is a fluorescent filter block in which an excitation filter, a dichroic mirror, and an absorption filter are integrally formed.
  • The second filter block can be used when the light source device 2 emits the third excitation light (light in a third wavelength band) and the fourth excitation light (light in a fourth wavelength band).
  • The dichroic mirror of the second filter block has spectral characteristics similar to those of the filter block 15 at least for the reference light λR.
  • The dichroic mirror of the second filter block has a predetermined transmittance (e.g.
  • The filter block 15 and the second filter block are switched by a switching unit such as a turret. Using this switching unit, one of the filter block 15 and the second filter block can be disposed at the position where the light emitted from the light source device 2 enters (on the optical path between the light source device 2 and the first objective lens 16).
  • the first filter 17 is a wavelength selection optical element.
  • The first filter 17 is an optical element that cuts part of the wavelength region of the light from the light source device 2 and extracts the first excitation light, the second excitation light, and the reference light.
  • The first filter 17 has optical characteristics that transmit the wavelength band including the wavelength λ1, the wavelength band including the wavelength λ2, and the wavelength band including the wavelength λR at predetermined transmittances (for example, 100% and 75%).
  • Light in a predetermined wavelength region (first excitation light, second excitation light, and reference light) that has passed through the first filter 17 is incident on a dichroic mirror 18 that is an optical element.
  • the dichroic mirror 18 is a separation optical element that separates excitation light and fluorescence.
  • the dichroic mirror 18 is a mirror that reflects the excitation light selected by the first filter 17 and transmits the fluorescence emitted from the irradiated object 1.
  • the dichroic mirror 18 is disposed, for example, inclined by 45 degrees with respect to the optical axis.
  • the dichroic mirror 18 reflects excitation light, transmits fluorescence, partially reflects reference light, and partially transmits reflected light.
  • the dichroic mirror 18 reflects the first excitation light and the second excitation light, transmits the first fluorescence and the second fluorescence, partially reflects the reference light, and partially transmits the reflected light.
  • The dichroic mirror 18 reflects light in a wavelength band including the wavelength λ1 of the first excitation light and light in a wavelength band including the wavelength λ2 of the second excitation light, and transmits light in a wavelength band including the wavelength λ1′ of the first fluorescence and light in a wavelength band including the wavelength λ2′ of the second fluorescence.
  • the first excitation light, the second excitation light, and the reference light are reflected by the dichroic mirror 18 and guided to the first objective lens 16.
  • the first objective lens 16 is an infinite objective lens and can face the surface of the irradiated object 1 supported by the stage 4. In the present embodiment, the first objective lens 16 is disposed on the + Z side (upward) of the irradiated object 1. The first excitation light, the second excitation light, and the reference light are guided to the irradiation object 1 through the first objective lens 16. The irradiated object 1 is illuminated by the first excitation light, the second excitation light, and the reference light.
  • the second filter 19 is a wavelength selection optical element.
  • the second filter 19 selectively transmits the first fluorescence, the second fluorescence, and the reflected light.
  • the second filter 19 is an absorption filter.
  • the absorption filter may be called an emission filter or a barrier filter.
  • the imaging optical system 7 forms an image of the light beam (first fluorescence, second fluorescence, and reflected light) from the irradiated object 1 in the vicinity of its focal plane (imaging plane).
  • the imaging optical system 7 includes a second objective lens 21.
  • the microlens array 8 and the sensor 5 are arranged in that order near the focal plane of the second objective lens 21.
  • the microlens array 8 includes a plurality of microlenses arranged two-dimensionally. As an example, the microlens array 8 is disposed on the imaging plane of the second objective lens 21. Note that the microlens array 8 may be disposed on the pupil plane of the second objective lens 21. The number of lenses in the vertical direction and the number of lenses in the horizontal direction (arrangement density) of the microlens array 8 are appropriately set according to the resolution required for the image acquired by the measurement apparatus 10.
  • the sensor 5 is a light receiving unit in which light receiving elements (photodiodes) are arranged two-dimensionally.
  • the sensor 5 is an image sensor.
  • Examples of the image sensor include a charge coupled device (CCD) sensor and a complementary metal oxide semiconductor (CMOS) sensor.
  • the sensor 5 may be called a photo sensor array, an image sensor, or an area sensor.
  • the sensor 5 receives light from the irradiated object 1 via the microlens array 8.
  • the sensor 5 outputs a signal corresponding to the received light amount to the control device 30.
  • FIG. 4 is a schematic diagram showing the arrangement of the microlens array 8 and the sensor 5 in which the light receiving elements are two-dimensionally arranged.
  • the sensor 5 includes a pixel array that receives light that has passed through each microlens 81 (that is, an arrangement pattern corresponding to the microlens 81).
  • a signal obtained from the light receiving element of the sensor 5 that receives light that has passed through the microlens 81 is referred to as a “pixel signal”.
  • the control device 30 performs predetermined signal processing on the set of pixel signals acquired by the sensor 5. The contents of the signal processing will be described later.
  • the control device 30 displays the signal processing result using the display device 40.
  • the display device 40 can display the image information of the irradiated object 1 acquired by the sensor 5.
  • a set of pixel signals at a plurality of different focal positions can be obtained without performing focusing by the AF function for each fluorescence.
  • the control device 30 can perform predetermined signal processing on a set of pixel signals acquired by the sensor 5 to generate a plurality of images having different focal positions.
  • FIG. 5 is an example in which a plurality of images having different focal positions are acquired using the microlens array 8 and the sensor 5. The vertical axis in FIG.
  • the control device 30 can extract a signal sequence of pixel signals having a common (that is, the same) focal length, and generate an image at the focal length.
  • FIG. 5 shows a plurality of images generated from pixel signals at the respective focal lengths Z 1 to Z 6 .
  • FIGS. 6 and 7 are diagrams for explaining pixel signal extraction processing in a configuration using the microlens array 8 and the sensor 5 in which the light receiving elements are two-dimensionally arranged.
  • In FIGS. 6 and 7, only the light rays incident on five pixels (a, b, c, d, e) arranged in a straight line on the sensor 5 (the principal rays passing through the center of the corresponding microlens) are shown. Each element in each drawing is given a suffix (1, 2, 3, ...) indicating its coordinate in a plane perpendicular to the optical axis.
  • The image signal L(4) of the coordinate X4 is given by L(4) = a4 + b4 + c4 + d4 + e4.
  • Although the above description is for the X-axis direction, pixel signals may be extracted in the same manner in the Y-axis direction in addition to the X-axis direction.
  • When the image plane of the objective lens is shifted from the microlens array, the image signal of the coordinate X′4 in the center area can be obtained from the following equation (2).
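  • As an illustration of the extraction described above, the following minimal sketch (not part of the patent text) sums the pixel signals behind each microlens to synthesize the image signal when the microlens array sits at the imaging plane, and shifts the selection across neighbouring microlenses to approximate the image signal for a shifted image plane. The array layout, the `shift` parameter, and the function name are assumptions made for illustration.

```python
import numpy as np

def refocus_1d(subpixels: np.ndarray, shift: int = 0) -> np.ndarray:
    """Synthesize a 1-D image row from light-field data.

    subpixels[m, k] is the signal of the k-th pixel (k = 0..4, i.e. a..e)
    behind microlens m.  With shift == 0 the five pixels under each microlens
    are summed, corresponding to L(4) = a4 + b4 + c4 + d4 + e4 above.
    A non-zero shift picks pixel k from microlens m + (k - 2) * shift, which
    approximates the image signal for a shifted image plane.
    """
    n_lenses, n_sub = subpixels.shape
    out = np.zeros(n_lenses)
    for m in range(n_lenses):
        for k in range(n_sub):
            src = m + (k - n_sub // 2) * shift  # neighbouring microlens index
            if 0 <= src < n_lenses:
                out[m] += subpixels[src, k]
    return out

# Example: 8 microlenses with 5 sub-pixels (a..e) each.
lf = np.random.default_rng(0).random((8, 5))
print(refocus_1d(lf, shift=0))  # image at the nominal imaging plane
print(refocus_1d(lf, shift=1))  # image for a shifted image plane
```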
  • FIG. 8 is a diagram for explaining the amount of chromatic aberration of each fluorescence.
  • the wavelength of the first fluorescence (first color light) is ⁇ 1 ′
  • the wavelength of the second fluorescence (second color light) is ⁇ 2 ′.
  • the amount of chromatic aberration of the first fluorescence is different from the amount of chromatic aberration of the second fluorescence.
  • Let Z0 be the in-focus position of the image of the reflected light of wavelength λR. In this case, the focal point of the first fluorescence is shifted from Z0 by ΔZ1.
  • the focal point of the second fluorescence is shifted from Z 0 by ⁇ Z 2 .
  • the following processing can be executed after acquiring a set of pixel signals at a plurality of different focal positions using the microlens array 8 and the sensor 5.
  • the control device 30 extracts the pixel signal from the light receiving element on which the light at the focal position Z 0 + ⁇ Z 1 is incident, from a set of pixel signals at a plurality of different focal positions.
  • the signal sequence of the pixel signals extracted here substantially corresponds to a signal sequence that represents a fluorescence image at the focal point of the first fluorescence.
  • the control device 30 extracts a pixel signal from the light receiving element on which the light at the focal position Z 0 + ⁇ Z 2 is incident, from a set of pixel signals at a plurality of different focal positions.
  • the signal sequence of the pixel signals extracted here substantially corresponds to a signal sequence that represents a fluorescence image at the focal point of the second fluorescence.
  • The advantage of obtaining the focal point of the first fluorescence and the focal point of the second fluorescence with the imaging position (in-focus position) Z0 of the reflected light as a reference is explained below.
  • the fluorescent image may be dark or low in contrast depending on the target specimen. In such a situation, it may be difficult to obtain the signal sequence of the first fluorescent focused pixel signal and the signal sequence of the second fluorescent focused pixel signal. Therefore, it is preferable to obtain Z 0 from an image of reflected light from which sufficient contrast is obtained, and use Z 0 as a reference.
  • FIG. 9 is a diagram for explaining the components of the control device 30 that realizes the above-described processing.
  • the control device 30 includes a signal sequence extraction unit 31, a signal sequence processing unit 32, a stage control unit 33, and a chromatic aberration information storage unit 34.
  • the processing of the signal sequence extraction unit 31, the signal sequence processing unit 32, and the stage control unit 33 can be realized by a program code of software that realizes these functions.
  • the processor of the control device 30 executes processing described below in accordance with an instruction of a predetermined program stored in the memory. Note that some processes of the signal sequence extraction unit 31, the signal sequence processing unit 32, and the stage control unit 33 may be realized by hardware using electronic components such as an integrated circuit.
  • the chromatic aberration information storage unit 34 may be realized by a storage device of the control device 30.
  • the signal string extraction unit 31 receives pixel signals from each element of the sensor 5 that receives light that has passed through the microlens array 8.
  • The signal sequence extraction unit 31 determines the reference focal position Z0 of the reflected light from a set of pixel signals of the reflected light at a plurality of different focal points.
  • Z0 is the in-focus position of the image of the reflected light.
  • The reference focal position Z0 here is, for example, the focal position at which the contrast is maximized, and may not exactly coincide with the in-focus position.
  • The reference focal position obtained here is regarded as the in-focus position. If it does not exactly coincide with the in-focus position, as an example, the position closest to the maximum of the contrast is taken as Z0.
  • The focal position at which the contrast is maximized is almost the in-focus position.
  • the signal sequence extraction unit 31 extracts a signal sequence of pixel signals corresponding to the focal point of the first fluorescence from a set of first pixel signals at a plurality of different focal points.
  • the first set of pixel signals is a set of pixel signals acquired during a time period in which the first fluorescence will be emitted.
  • In the following, the expressions "set of first pixel signals" and "first set of pixel signals" mean the same thing.
  • the signal sequence extraction unit 31 extracts the signal sequence of the pixel signal corresponding to the focal point of the first fluorescence using the chromatic aberration amount ⁇ Z 1 of the first fluorescence from the reference focal position Z 0 .
  • the signal sequence of the first fluorescence pixel signal extracted here substantially corresponds to a signal sequence representing a fluorescence image at the focal point of the first fluorescence.
  • the signal sequence extraction unit 31 extracts a signal sequence of pixel signals corresponding to the focal point of the second fluorescence from a set of second pixel signals at a plurality of different focal points.
  • the second set of pixel signals is a set of pixel signals acquired during a time period in which the second fluorescence will be emitted.
  • In the following, the expressions "set of second pixel signals" and "second set of pixel signals" mean the same thing.
  • The signal sequence extraction unit 31 extracts the signal sequence of the pixel signals corresponding to the focal point of the second fluorescence using the chromatic aberration amount ΔZ2 of the second fluorescence from the reference focal position Z0.
  • the signal sequence of the second fluorescence pixel signal extracted here substantially corresponds to a signal sequence representing a fluorescence image at the focal point of the second fluorescence.
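  • A minimal sketch of this slice extraction, assuming the acquired pixel signals have already been rearranged into a focal stack indexed by focal position (the stack layout, the variable `z_positions`, and the function name are assumptions, not the patent's implementation):

```python
import numpy as np

def extract_focal_slice(stack: np.ndarray, z_positions: np.ndarray,
                        z0: float, delta_z: float) -> np.ndarray:
    """Return the 2-D slice of `stack` whose focal position is closest
    to the target z0 + delta_z.

    stack       : array of shape (n_z, height, width), one image per focal position
    z_positions : array of shape (n_z,), e.g. the positions Z-3 .. Z3
    z0          : reference focal position obtained from the reflected light
    delta_z     : chromatic aberration amount of the fluorescence (dZ1 or dZ2)
    """
    target = z0 + delta_z
    idx = int(np.argmin(np.abs(z_positions - target)))  # nearest acquired focal plane
    return stack[idx]

# Example with a dummy 7-plane stack (Z-3 .. Z3) of 4x4 images.
z = np.arange(-3.0, 4.0)
stack = np.random.default_rng(1).random((7, 4, 4))
first_fluo_image = extract_focal_slice(stack, z, z0=0.0, delta_z=2.0)    # near Z2
second_fluo_image = extract_focal_slice(stack, z, z0=0.0, delta_z=-2.0)  # near Z-2
```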
  • The signal sequence processing unit 32 performs the first signal processing on the signal sequence of the pixel signals corresponding to the focal point of the first fluorescence, and performs the second signal processing on the signal sequence of the pixel signals corresponding to the focal point of the second fluorescence.
  • the first signal processing is processing for generating a first image from a signal sequence of pixel signals corresponding to the focal point of the first fluorescence.
  • the second signal processing is processing for generating a second image from a signal sequence of pixel signals corresponding to the focal point of the second fluorescence.
  • the control device 30 may output the first image and the second image to the display device 40.
  • the first signal processing is a process of outputting information representing the signal value of the first fluorescence pixel signal and position information of the irradiated object 1 corresponding to the first fluorescence pixel signal.
  • the information indicating the signal value of the pixel signal is a luminance value.
  • the position information of the irradiated object 1 is a spot address.
  • the signal sequence processing unit 32 extracts a pixel signal having a luminance value exceeding a predetermined threshold value. When the luminance value exceeds a predetermined threshold value, it can be considered that the first fluorescence is generated.
  • the signal sequence processing unit 32 associates the luminance value with the address of the spot corresponding to the luminance value in the irradiated object 1.
  • the signal sequence processing unit 32 may output the associated result to the display device 40. Thereby, it can be determined in which position the irradiated body 1 emits fluorescence.
  • Similarly, the second signal processing may be processing for outputting information representing the signal value of the second fluorescence pixel signal and position information of the irradiated object 1 corresponding to the second fluorescence pixel signal.
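  • The first (and second) signal processing described above could be sketched as follows: threshold the luminance at the fluorescence focal point and associate it with the spot address. The spot-address table, the threshold value, and the function name below are placeholders, not values from the patent.

```python
import numpy as np

def associate_bright_spots(image: np.ndarray, spot_addresses: dict,
                           threshold: float) -> dict:
    """Return {spot_address: luminance} for every spot whose pixel luminance
    exceeds the threshold (i.e. fluorescence is considered to have been
    generated at that spot).

    image          : 2-D luminance values at the fluorescence focal point
    spot_addresses : {(row, col): "address"} mapping pixel positions to spots
    threshold      : predetermined luminance threshold
    """
    result = {}
    for (row, col), address in spot_addresses.items():
        luminance = float(image[row, col])
        if luminance > threshold:
            result[address] = luminance
    return result

# Example: two spots, one above the threshold.
img = np.array([[10.0, 200.0], [35.0, 5.0]])
spots = {(0, 1): "A-01", (1, 0): "A-02"}
print(associate_bright_spots(img, spots, threshold=50.0))  # {'A-01': 200.0}
```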
  • the stage control unit 33 determines whether the signal value of any pixel signal in the set of reflected pixel signals is greater than a predetermined threshold value.
  • the signal value used for determination is a luminance value.
  • the threshold value may be preset based on the depth of focus or the noise level.
  • the stage control unit 33 may calculate contrast from a set of pixel signals of reflected light and determine whether the contrast value is greater than a predetermined threshold value. Note that the above determination may be performed using the pixel signals of the first pixel signal set and the second pixel signal set.
  • One of the advantages of the measurement apparatus 10 described here is that, when performing measurement on the same irradiated object 1, it is not necessary to drive the optical system of the measurement apparatus 10 (for example, a lens for focusing (focusing lens)) for focusing between the time when the pixel signals are acquired by irradiating the first excitation light and the time when the pixel signals are acquired by irradiating the second excitation light.
  • Another advantage of the measurement apparatus 10 is that, when performing measurement on the same irradiated object 1, it is not necessary to drive the stage 4 for focusing between the time when the pixel signals are acquired by irradiating the first excitation light and the time when the pixel signals are acquired by irradiating the second excitation light.
  • That is, once the height of the stage 4 has been adjusted (in this case, it is not always necessary to focus), there is no need to drive the lens of the optical system or the stage 4 for focus adjustment between the time when the pixel signals are acquired by irradiating the first excitation light and the time when the pixel signals are acquired by irradiating the second excitation light.
  • As an example, the stage control unit 33 may extract arbitrary pixel signals from the sets of pixel signals of the reflected light, the first fluorescence, and the second fluorescence, and determine whether these pixel signals include a signal having a luminance value larger than a predetermined threshold value.
  • the stage control unit 33 may change or adjust the height direction (Z direction) of the stage 4 when there is no signal having a luminance value larger than a predetermined threshold value.
  • As another example, the stage control unit 33 may extract a predetermined number of pixel signals from the set of reflected-light pixel signals and determine whether a luminance value larger than a predetermined threshold exists among them. By preferentially handling the set of reflected-light pixel signals, which is highly likely to provide sufficient contrast, it is possible to determine whether the focal position is greatly deviated.
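  • The focus check performed by the stage control unit 33 might look like the sketch below, which samples reflected-light pixel signals and reports whether the stage height needs to be changed; the sampling strategy and threshold are assumptions.

```python
import numpy as np

def needs_stage_adjustment(reflected_stack: np.ndarray,
                           luminance_threshold: float,
                           n_samples: int = 100) -> bool:
    """Return True when none of the sampled reflected-light pixel signals
    exceeds the threshold, i.e. the focal position is considered to be
    significantly shifted and the stage height should be changed."""
    flat = reflected_stack.ravel()
    rng = np.random.default_rng(0)
    sample = rng.choice(flat, size=min(n_samples, flat.size), replace=False)
    return not np.any(sample > luminance_threshold)

# Example: a dim stack triggers an adjustment, a bright one does not.
dim = np.full((7, 4, 4), 1.0)
bright = np.full((7, 4, 4), 100.0)
print(needs_stage_adjustment(dim, luminance_threshold=50.0))     # True
print(needs_stage_adjustment(bright, luminance_threshold=50.0))  # False
```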
  • The chromatic aberration information storage unit 34 stores information on the amount of chromatic aberration of each fluorescence (hereinafter referred to as "chromatic aberration information"). This makes it possible to acquire a signal sequence at the in-focus position of each fluorescence in accordance with the amount of chromatic aberration of each fluorescence.
  • the chromatic aberration information includes at least a chromatic aberration amount ⁇ Z 1 of the first fluorescence from the focal position of the reflected light and a chromatic aberration amount ⁇ Z 2 of the second fluorescence from the focal position of the reflected light.
  • the chromatic aberration information is information in which information on the wavelength of the excitation light is associated with information on the amount of chromatic aberration of fluorescence generated from the excitation light.
  • the control device 30 controls the light source device 2 to emit arbitrary excitation light, the control device 30 can acquire information on the amount of chromatic aberration of fluorescence corresponding to the excitation light.
  • The chromatic aberration information may include information indicating the relationship of the chromatic aberration amount between the first fluorescence and the second fluorescence. Using this relationship, the control device 30 may obtain the relationship between the in-focus position of the first fluorescence and the in-focus position of the second fluorescence. Thus, without obtaining the reference focal position of the reflected light, if one of the in-focus position of the first fluorescence and the in-focus position of the second fluorescence is known, the other in-focus position can be obtained.
  • the chromatic aberration information in the chromatic aberration information storage unit 34 can be set in advance based on the design of the optical system (lens or the like) 3.
  • the chromatic aberration amount of each fluorescence may be measured at the time of the first measurement of the measuring apparatus 10 and the measurement result may be stored in the chromatic aberration information storage unit 34 as chromatic aberration information.
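  • The chromatic aberration information could be held, for example, as a simple mapping from excitation wavelength to the corresponding fluorescence aberration amount ΔZ; the wavelengths and values below are placeholders, not values taken from the patent.

```python
# Hypothetical chromatic aberration table: excitation wavelength (nm) -> delta Z (um)
CHROMATIC_ABERRATION_INFO = {
    488.0: +0.8,   # first excitation light  -> dZ1 of the first fluorescence
    561.0: -0.6,   # second excitation light -> dZ2 of the second fluorescence
}

def aberration_for(excitation_wavelength_nm: float) -> float:
    """Look up the chromatic aberration amount for a given excitation light."""
    return CHROMATIC_ABERRATION_INFO[excitation_wavelength_nm]

print(aberration_for(488.0))  # 0.8
```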
  • FIG. 10 is an example of a flowchart when measuring the irradiated object 1 on the support member 60 by the measuring apparatus 10.
  • the support member 60 is prepared (1001).
  • The support member 60 is, for example, a plate.
  • a set of pixel signals at multiple focal points is acquired (1002).
  • signal processing is performed on the set of pixel signals at multiple focal points (1003).
  • FIG. 11 is an example of the detailed contents of the flowchart of FIG. 10. In the following, processing in which the control device 30 or a component of the control device 30 is the subject may be described with the processor as the subject.
  • The irradiated object 1 (for example, one of the plurality of irradiated objects 1 supported by the support member 60) is irradiated with three lights: the first excitation light, the second excitation light, and the reference light.
  • The light source device 2 emits the first excitation light, the reference light, and the second excitation light in this order.
  • the period in which the light source device 2 emits the first excitation light, the period in which the light source device 2 emits the reference light, and the period in which the light source device 2 emits the second excitation light do not overlap. .
  • the irradiated object 1 is irradiated in the order of the first excitation light, the reference light, and the second excitation light.
  • the period in which the light source device 2 emits the reference light is between the period in which the light source device 2 emits the first excitation light and the period in which the light source device 2 emits the second excitation light.
  • the reason why the reference light is irradiated in this manner between the first excitation light and the second excitation light is as follows. It is conceivable that the fluorescence information acquisition time (the charge accumulation time of the sensor 5 with respect to the fluorescence emitted from the irradiated object 1) is longer than the information acquisition time of the reflected light (the charge accumulation time of the sensor 5 with respect to the reflected light).
  • The second fluorescence information is obtained after the reflected light information (a set of pixel signals) is obtained, and the time required to acquire the fluorescence information is longer.
  • A positional deviation in the Z direction during this time causes a deviation from the reference focal position Z0 used as the reference.
  • For this reason, the reference light is preferably irradiated at a timing close to each of the first excitation light and the second excitation light.
  • the emission order of the first excitation light, the second excitation light, and the reference light is not limited to this, and may be any order.
  • Step 1002 in FIG. 10 includes steps 1101 to 1108.
  • Step 1003 in FIG. 10 includes steps 1109 to 1112.
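  • Read as code, steps 1101 to 1112 form the following high-level loop; every class and function here is a hypothetical stand-in for the hardware and processing operations described in the text, and the numeric values are placeholders.

```python
import numpy as np

LUMINANCE_THRESHOLD = 50.0   # placeholder threshold for the focus check (1107)
STAGE_STEP = 1.0             # placeholder Z step for the stage adjustment (1108)

class DummyHardware:
    """Stand-in for the light source device 2, the sensor 5, and the stage 4."""
    def emit(self, which):
        self.current = which                        # 1101 / 1103 / 1105
    def acquire_focal_stack(self):
        # 1102 / 1104 / 1106: pixel signals at 7 focal positions (Z-3..Z3), 4x4 each
        return np.random.default_rng(0).random((7, 4, 4)) * 100.0
    def move_z(self, dz):
        pass                                        # 1108: change the stage height

def find_reference_focus(reflected_stack):
    # 1109: focal index with the highest contrast, mapped to a Z value in -3..3
    return int(np.argmax(reflected_stack.std(axis=(1, 2)))) - 3

def extract_slice(stack, z):
    # 1110 / 1111: nearest acquired focal plane for the target Z (clipped to the stack)
    return stack[int(np.clip(round(z) + 3, 0, stack.shape[0] - 1))]

def measure_one_object(hw, dz1, dz2):
    """Sketch of steps 1101-1112 for one irradiated object."""
    while True:
        hw.emit("first_excitation")
        first_set = hw.acquire_focal_stack()        # 1101, 1102
        hw.emit("reference")
        reflected_set = hw.acquire_focal_stack()    # 1103, 1104
        hw.emit("second_excitation")
        second_set = hw.acquire_focal_stack()       # 1105, 1106
        if reflected_set.max() > LUMINANCE_THRESHOLD:   # 1107: focus check
            break
        hw.move_z(STAGE_STEP)                       # 1108: adjust the stage height
    z0 = find_reference_focus(reflected_set)        # 1109
    first_seq = extract_slice(first_set, z0 + dz1)      # 1110
    second_seq = extract_slice(second_set, z0 + dz2)    # 1111
    return first_seq.mean(), second_seq.mean()      # 1112: placeholder signal processing

print(measure_one_object(DummyHardware(), dz1=2, dz2=-2))
```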
  • the support member 60 is prepared (1001). As an example, the support member 60 is disposed on the stage 4.
  • the light source device 2 irradiates the irradiated body 1 with the first excitation light based on the signal from the control device 30 (1101).
  • the signal sequence extraction unit 31 acquires a set of first pixel signals at a plurality of different focal points from the sensor 5 (1102).
  • a set of pixel signals during a time period in which the first fluorescence is emitted from the irradiated body 1 by irradiating the irradiated body 1 with the first excitation light is acquired.
  • the set of first pixel signals acquired here may or may not include the first fluorescence information.
  • As an example, the first set of pixel signals is a set of pixel signals obtained by one imaging operation in which each light receiving element of the sensor 5 accumulates charge from when a charge accumulation start instruction is given until a charge accumulation end instruction is given (that is, each light receiving element of the sensor 5 starts charge accumulation in response to the charge accumulation start instruction and finishes charge accumulation in response to the charge accumulation end instruction), and the charges are then read (output) once from each light receiving element after the accumulation operation.
  • the light source device 2 irradiates the irradiated object 1 with reference light based on a signal from the control device 30 (1103).
  • the irradiated object 1 irradiated with the reference light is the same as the irradiated object 1 irradiated with the first excitation light in Step 1101.
  • the signal string extraction unit 31 of the control device 30 acquires a set of pixel signals of reflected light at a plurality of different focal points from the sensor 5 (1104).
  • As an example, the set of reflected-light pixel signals is a set of pixel signals obtained by one imaging operation in which each light receiving element of the sensor 5 accumulates charge from when a charge accumulation start instruction is given until a charge accumulation end instruction is given (that is, each light receiving element of the sensor 5 starts charge accumulation in response to the charge accumulation start instruction and finishes charge accumulation in response to the charge accumulation end instruction), and the charges are then read (output) once from each light receiving element after the accumulation operation.
  • The imaging operation for obtaining the first set of pixel signals and the imaging operation for obtaining the set of reflected-light pixel signals are performed individually. The period during which the imaging operation for obtaining the set of reflected-light pixel signals is performed does not overlap with the period during which the imaging operation for obtaining the first set of pixel signals is performed.
  • the light source device 2 irradiates the irradiated body 1 with the second excitation light based on the signal from the control device 30 (1105).
  • the signal string extraction unit 31 acquires a set of second pixel signals at a plurality of different focal points from the sensor 5 (1106).
  • a set of pixel signals during a time period in which the second fluorescence is emitted after irradiation of the second excitation light to the irradiated object 1 is acquired.
  • the set of second pixel signals acquired here may or may not include the second fluorescence information.
  • As an example, the second set of pixel signals is a set of pixel signals obtained by one imaging operation in which each light receiving element of the sensor 5 accumulates charge from when a charge accumulation start instruction is given until a charge accumulation end instruction is given (that is, each light receiving element of the sensor 5 starts charge accumulation in response to the charge accumulation start instruction and finishes charge accumulation in response to the charge accumulation end instruction), and the charges are then read (output) once from each light receiving element after the accumulation operation.
  • The imaging operation for obtaining the first set of pixel signals, the imaging operation for obtaining the set of reflected-light pixel signals, and the imaging operation for obtaining the second set of pixel signals are performed separately.
  • The period during which the imaging operation for obtaining the second set of pixel signals is performed does not overlap with either the period during which the imaging operation for obtaining the first set of pixel signals is performed or the period during which the imaging operation for obtaining the set of reflected-light pixel signals is performed.
  • the stage control unit 33 extracts an arbitrary pixel signal from the set of reflected pixel signals, and determines whether any of the pixel signals has a signal value larger than a predetermined threshold value ( 1107).
  • the signal value is a luminance value.
  • the threshold value may be preset based on the depth of focus or the noise level.
  • the stage control unit 33 may calculate contrast from a set of pixel signals of reflected light and determine whether the contrast value is greater than a predetermined threshold value.
  • As another example, the stage control unit 33 may extract an arbitrary pixel signal from the set of pixel signals of the first fluorescence or the second fluorescence instead of the reflected light, and determine whether these pixel signals include a signal having a luminance value larger than a predetermined threshold value.
  • That is, the luminance value of the first fluorescence or the second fluorescence can be compared with a predetermined threshold value.
  • If the condition of step 1107 is satisfied, the process proceeds to step 1109. If the condition is not satisfied, it is determined that the focal position is significantly shifted, and the process proceeds to step 1108.
  • the stage control unit 33 drives the stage 4 (1108).
  • the stage control unit 33 changes or adjusts the height (Z direction) of the stage 4.
  • the stage control unit 33 changes or adjusts the height of the stage 4 by a predetermined height.
  • As an example, the stage control unit 33 changes or adjusts the height of the stage 4 by an amount corresponding to the chromatic aberration amount ΔZ1 of the first fluorescence or the chromatic aberration amount ΔZ2 of the second fluorescence. After changing or adjusting the height of the stage 4, the process returns to step 1101.
  • The signal sequence extraction unit 31 determines the reference focal position Z0 from the set of reflected-light pixel signals at a plurality of different focal points (1109).
  • FIG. 12 is an example of a method for determining the reference focal position Z0.
  • the signal sequence extraction unit 31 generates an image from a signal sequence of pixel signals at a common focal position, and holds a plurality of images at a plurality of focal positions.
  • the signal sequence extraction unit 31 performs the following processing using information of a plurality of images. Note that it is not always necessary to treat the image as an image, and the processing described below may be performed as a set of pixel signals.
  • The signal sequence extraction unit 31 divides the image at each focal position Z−3 to Z3 into a plurality of regions. As an example, the signal sequence extraction unit 31 calculates the contrast of a region corresponding to the same position in the images at the respective focal positions Z−3 to Z3. As an example, the signal sequence extraction unit calculates the contrast of the central region of the images at the focal positions Z−3 to Z3 (shaded portion in FIG. 12). The region for calculating the contrast may be another region or all regions of the image.
  • FIG. 13 is an example of the result of calculating the contrast from the set of reflected-light pixel signals, and shows the contrast of the central region of the image at each focal position Z−3 to Z3.
  • The signal sequence extraction unit 31 determines the position where the contrast is maximized among the focal positions Z−3 to Z3 as the reference focal position Z0.
  • If the maximum value of the approximate curve in FIG. 13 is located between two Z positions, the closer one may be determined as the reference focal position Z0.
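  • A minimal sketch of the Z0 determination in step 1109, using the standard deviation of a central region as a stand-in contrast metric (the metric, the region size, and the array layout are assumptions):

```python
import numpy as np

def reference_focus_index(reflected_stack: np.ndarray, region: int = 2) -> int:
    """Return the index (into the stack) of the focal position whose central
    region has the highest contrast; this index corresponds to Z0.

    reflected_stack : (n_z, height, width) reflected-light images at Z-3..Z3
    region          : half-size of the central region used for the contrast
    """
    n_z, h, w = reflected_stack.shape
    cy, cx = h // 2, w // 2
    contrasts = [
        float(np.std(reflected_stack[k, cy - region:cy + region, cx - region:cx + region]))
        for k in range(n_z)
    ]
    return int(np.argmax(contrasts))  # if the true maximum falls between two Z
                                      # positions, the closer one is chosen

# Example: the plane with the largest spread of values in its center wins.
stack = np.ones((7, 8, 8))
stack[4, 3:5, 3:5] = 50.0   # give plane index 4 (i.e. Z1) the highest contrast
print(reference_focus_index(stack))   # 4
```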
  • the signal string extraction unit 31 acquires information on the first fluorescence chromatic aberration amount ⁇ Z 1 from the chromatic aberration information storage unit 34.
  • the signal sequence extraction unit 31 extracts a signal sequence of a pixel signal corresponding to the focal point of the first fluorescence using the chromatic aberration amount ⁇ Z 1 of the first fluorescence from the reference focal position Z 0 (1110).
  • FIG. 14 is an example of acquiring a signal sequence of pixel signals corresponding to the focal point of the first fluorescence from a set of first pixel signals at a plurality of different focal points Z−3 to Z3.
  • the signal sequence extraction unit 31 generates an image from a signal sequence of pixel signals at a common focal position, and holds a plurality of images at a plurality of focal positions.
  • The signal sequence extraction unit 31 extracts the image at the in-focus position of the first fluorescence from the images at the plurality of focal positions Z−3 to Z3.
  • The signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ1 of the first fluorescence from the reference focal position Z0 to extract the image at the focal position Z2 corresponding to the focal point of the first fluorescence. The data need not necessarily be treated as an image; the signal sequence extraction unit 31 may extract the signal sequence corresponding to the focal position Z2 directly from the set of pixel signals.
  • the signal sequence extraction unit 31 acquires information on the second fluorescence chromatic aberration amount ⁇ Z 2 from the chromatic aberration information storage unit 34.
  • The signal sequence extraction unit 31 extracts the signal sequence of the pixel signals corresponding to the focal point of the second fluorescence using the chromatic aberration amount ΔZ2 of the second fluorescence from the reference focal position Z0 (1111).
  • FIG. 15 is an example of acquiring a signal sequence of pixel signals corresponding to the focal point of the second fluorescence from a set of second pixel signals at a plurality of different focal points Z−3 to Z3.
  • the signal sequence extraction unit 31 generates an image from a signal sequence of pixel signals at a common focal position, and holds a plurality of images at a plurality of focal positions.
  • The signal sequence extraction unit 31 extracts the image at the in-focus position of the second fluorescence from the images at the plurality of focal positions Z−3 to Z3.
  • The signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ2 of the second fluorescence from the reference focal position Z0 to extract the image at the focal position Z−2 corresponding to the focal point of the second fluorescence.
  • The signal sequence extraction unit 31 may extract the signal sequence corresponding to the focal position Z−2 directly from the set of pixel signals.
  • Note that the reference focal position Z0 plus the chromatic aberration amount ΔZ1 or ΔZ2 may not coincide with any of the plurality of different focal points Z−3 to Z3.
  • Suppose, for example, that Z0 + ΔZ1 is Z2.4.
  • Since Z2.4 is closer to Z2 than to Z3, the signal sequence corresponding to the closer Z2 may be extracted.
  • Alternatively, the signal sequence extraction unit 31 may generate a signal value corresponding to Z2.4 by multiplying each of the signal value of the Z2 pixel signal and the signal value of the Z3 pixel signal by a weighting factor.
  • That is, a weighted average may be used. The weighting factors applied to the signal value of the Z2 pixel signal and the signal value of the Z3 pixel signal may be set to the same value, or may be set differently according to the distance from the reference focal position Z0, as in the sketch below.
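  • The nearest-plane selection and the weighted average described above could be written as follows; the inverse-distance weighting is one possible choice and is not prescribed by the text.

```python
import numpy as np

def slice_at(stack: np.ndarray, z_positions: np.ndarray, target: float,
             interpolate: bool = True) -> np.ndarray:
    """Return the image at `target` (= Z0 + delta Z) from a focal stack.

    If `target` does not coincide with one of the acquired focal positions,
    either pick the nearest plane or blend the two neighbouring planes with
    weights proportional to the inverse of their distance to `target`.
    """
    if not interpolate:
        return stack[int(np.argmin(np.abs(z_positions - target)))]
    hi = int(np.searchsorted(z_positions, target))
    hi = min(max(hi, 1), len(z_positions) - 1)
    lo = hi - 1
    span = z_positions[hi] - z_positions[lo]
    w_hi = (target - z_positions[lo]) / span   # e.g. 0.4 for Z2.4 between Z2 and Z3
    w_lo = 1.0 - w_hi                          # e.g. 0.6
    return w_lo * stack[lo] + w_hi * stack[hi]

# Example: target Z2.4 blends the Z2 and Z3 images with weights 0.6 and 0.4.
z = np.arange(-3.0, 4.0)
stack = np.stack([np.full((2, 2), v) for v in z])
print(slice_at(stack, z, 2.4))         # blended values of 2.4
print(slice_at(stack, z, 2.4, False))  # nearest plane -> values of 2.0
```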
  • In the example described above, the in-focus position of the reflected light is between the in-focus position of the first fluorescence and the in-focus position of the second fluorescence.
  • However, the relationship among the reflected light, the first fluorescence, and the second fluorescence is not limited to this example.
  • The in-focus position of the reflected light may be on the positive direction side (+Z side) of that of the first fluorescence.
  • The in-focus position of the reflected light may be on the negative direction side (−Z side) of that of the second fluorescence.
  • the chromatic aberration information in the chromatic aberration information storage unit 34 only needs to include information for specifying the relationship between the reflected light, the first fluorescence, and the second fluorescence.
  • The signal sequence processing unit 32 performs the first signal processing on the extracted signal sequence of the pixel signals corresponding to the focal point of the first fluorescence, and performs the second signal processing on the extracted signal sequence of the pixel signals corresponding to the focal point of the second fluorescence (1112).
  • The signal sequence processing unit 32 generates a first result from the extracted signal sequence of the pixel signals corresponding to the focal point of the first fluorescence, and generates a second result from the extracted signal sequence of the pixel signals corresponding to the focal point of the second fluorescence.
  • the first result is an image (see FIG. 14) corresponding to the focal point of the first fluorescence.
  • the second result is an image (see FIG. 15) corresponding to the focal point of the second fluorescence.
  • As an example, the signal sequence processing unit 32 may output the extracted image corresponding to the focal point of the first fluorescence (see FIG. 14) to the display device 40 as it is, and may output the extracted image corresponding to the focal point of the second fluorescence (see FIG. 15) to the display device 40 as it is.
  • Alternatively, the signal sequence processing unit 32 may generate a first image from the extracted signal sequence of the pixel signals corresponding to the focal point of the first fluorescence, and may generate a second image from the extracted signal sequence of the pixel signals corresponding to the focal point of the second fluorescence.
  • The first result includes, for the extracted signal sequence of the pixel signals corresponding to the focal point of the first fluorescence, information indicating the signal value of the first fluorescence pixel signal and position information of the irradiated object 1 corresponding to the first fluorescence pixel signal.
  • Similarly, the second result includes, for the extracted signal sequence of the pixel signals corresponding to the focal point of the second fluorescence, information indicating the signal value of the second fluorescence pixel signal and position information of the irradiated object 1 corresponding to the second fluorescence pixel signal.
  • the information representing the signal value is a luminance value.
  • The position information of the irradiated object 1 is a spot address. It can be said that the first result and the second result show the result of causing the irradiated object 1 to react with the sample, and that the assayed result includes the first result and the second result.
  • the first result and the second result are stored in the storage device.
  • the storage device may be a storage device inside the measurement device 10 (for example, inside the control device 30) or a storage device (external device, external device) outside the measurement device 10. Examples of the storage device outside the measuring apparatus 10 include a server, a printer, and a portable information terminal.
  • the first result and the second result generated by the signal sequence processing unit 32 are output to the external device by the control device 30.
  • Communication between the measuring apparatus 10 and the external device may be wired communication or wireless communication.
  • the first result and the second result output by the control device 30 are stored in the external device.
  • the external device is a printer
  • the first result and the second result may be printed on paper (for example, automatically) and output.
  • the external device is a portable information terminal
  • the first result and the second result may be displayed on the display unit of the portable information terminal (for example, automatically).
  • the reference light irradiation period may be set before and after the first excitation light irradiation period, and the reference light irradiation period may be set before and after the second excitation light irradiation period.
  • steps 1103 and 1104 may be added before step 1101, and steps 1103 and 1104 may be added after step 1106.
  • In this case, a set of reflected-light signals at multiple focal points (first set) is acquired before the first excitation light irradiation, a set of reflected-light signals at multiple focal points (second set) is acquired after the first excitation light irradiation and before the second excitation light irradiation, and a set of reflected-light pixel signals at the plurality of focal positions (third set) is acquired after the second excitation light irradiation.
  • The signal sequence extraction unit 31 may compare the reference focal position Z0A obtained from the first set, the reference focal position Z0B obtained from the second set, and the reference focal position Z0C obtained from the third set. As an example, the signal sequence extraction unit 31 may determine whether the reference focal positions Z0A to Z0C are all the same value. When the reference focal positions Z0A to Z0C differ, the signal sequence extraction unit 31 may output which of the reference focal positions Z0A to Z0C is different, so that the user can grasp at which point the position shift in the Z direction occurred.
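  • The consistency check on the reference focal positions Z0A, Z0B, and Z0C might be expressed as in the sketch below; the tolerance and the comparison against Z0A as the baseline are assumptions.

```python
def check_reference_drift(z0a: float, z0b: float, z0c: float,
                          tolerance: float = 0.0) -> list:
    """Return messages for reference focal positions that differ from Z0A by
    more than the tolerance, so the user can see at which point a Z-direction
    position shift occurred."""
    messages = []
    for name, value in (("Z0B", z0b), ("Z0C", z0c)):
        if abs(value - z0a) > tolerance:
            messages.append(f"{name} differs from Z0A ({value} vs {z0a})")
    return messages

print(check_reference_drift(1.0, 1.0, 2.0))  # flags Z0C
```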
  • ΔZ 1 ≠ ΔZ 2 .
  • FIG. 16 and FIG. 17 are used to explain another flowchart for measuring the irradiated object 1.
  • Step 1002 in FIGS. 16 and 17 is composed of steps 1101 to 1108 in FIG. 11.
  • Step 1003 in FIGS. 16 and 17 is composed of steps 1109 to 1112 in FIG. 11.
  • in the flow of FIG. 16, after a support member 60 (for example, a plate) is placed on the stage (1001), a set of signals (first set) of the irradiated object 1 to be measured is acquired (1002).
  • then, movement to the next irradiated object 1 (1004) and acquisition of a set of signals (second set) of the next irradiated object 1 (1002) are performed.
  • the operation of moving the measurement object to the next irradiated object 1 is performed by driving the stage 4.
  • in FIG. 11, the movement to the next irradiated object 1 is performed after the signal processing (1003).
  • in the flow of FIG. 16, in contrast, the signal processing (1003) is executed in parallel with the movement to the next irradiated object 1 (1004) and with the acquisition (1002) of the set of signals (second set) of the next irradiated object 1 (see the sketch after this item). Thereby, the throughput of image acquisition is improved, and more information on the irradiated object 1 can be acquired in a short time.
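  • A minimal Python sketch of this pipelining, assuming a thread pool and placeholder acquisition/processing functions (the function names and data layout are illustrative assumptions, not part of the disclosure): the set of signals for the next irradiated object is acquired while the previous set is still being processed.

      from concurrent.futures import ThreadPoolExecutor

      def acquire_signal_set(index):
          # Placeholder for steps 1002/1004: move the stage to irradiated object
          # `index` and read a set of pixel signals from the sensor.
          return {"object": index, "pixels": [0.0] * 8}

      def process_signal_set(signal_set):
          # Placeholder for step 1003: extract the focal-point signal sequences
          # and generate the first and second results.
          return {"object": signal_set["object"], "result": sum(signal_set["pixels"])}

      def measure_all(num_objects):
          results = []
          with ThreadPoolExecutor(max_workers=1) as pool:
              pending = None
              for i in range(num_objects):
                  current = acquire_signal_set(i)           # acquisition (1002) of the next object
                  if pending is not None:
                      results.append(pending.result())      # finish processing the previous set
                  pending = pool.submit(process_signal_set, current)  # processing (1003) in parallel
              if pending is not None:
                  results.append(pending.result())
          return results

      print(measure_all(3))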
  • in the flow of FIG. 17, after a support member 60 (for example, a plate) is placed on the stage (1001), a set of signals of a certain irradiated object 1 is acquired (1002), and the measurement then moves to the next irradiated object 1 (1004).
  • the operation of moving the measurement object to the next irradiated object 1 is performed by driving the stage 4.
  • the combination of step 1002 and step 1004 is repeated up to the last irradiated object 1 on the support member 60.
  • signal processing is then performed collectively on the sets of signals of all the irradiated objects 1 (1003). In this flow, signal processing (1003) is not performed every time a set of signals is acquired (1002); instead, the information of all the irradiated objects 1 is acquired in advance.
  • this flow is advantageous when step 1003 is performed by another control device.
  • Information on all the irradiated objects 1 acquired in advance is transferred to another control device via a recording medium or a network.
  • Another control device may collectively perform signal processing (1003) on a set of signals of all the irradiated objects 1.
  • according to the present embodiment, without performing focusing using the AF function, a signal sequence corresponding to the focal point of the first fluorescence corresponding to the first excitation light and a signal sequence corresponding to the focal point of the second fluorescence corresponding to the second excitation light can be obtained.
  • conventionally, since two fluorescences having different wavelengths have different amounts of chromatic aberration, a focusing process using the AF function has been required every time an image is acquired.
  • in the conventional approach, an operation such as driving the lens of the optical system or changing the height of the stage is required each time an image of each fluorescence is acquired with the AF function. Therefore, the time required for driving the lens and changing the height of the stage has had a great influence on the throughput of image acquisition.
  • the AF function may cause an error.
  • in the present embodiment, focusing with the AF function is not required between the time when the pixel signals are acquired by irradiating the first excitation light and the time when the pixel signals are acquired by irradiating the second excitation light.
  • accordingly, between these two acquisitions, processing such as driving the focusing lens of the optical system or changing the height of the stage is unnecessary. In the example of FIG. 11, no operation such as driving the lens (focusing lens) of the optical system or changing the height of the stage is necessary during steps 1101 to 1106.
  • since focusing by the AF function is unnecessary during steps 1101 to 1106, a signal sequence corresponding to the focal point of the first fluorescence and a signal sequence corresponding to the focal point of the second fluorescence can be extracted from the acquired set of pixel signals at a plurality of focal positions. Therefore, the throughput of image acquisition is greatly improved, and more images of the irradiated object 1 can be acquired in a short time.
  • FIG. 18 is an example of a flowchart for measuring the irradiated object 1 by the measuring apparatus 10.
  • the light source device 2 emits reference light, first excitation light, and second excitation light simultaneously.
  • the support member 60 is placed on the stage 4 (1801).
  • the light source device 2 emits the reference light, the first excitation light, and the second excitation light together based on the signal from the control device 30 (1802).
  • the reference light irradiation period, the first excitation light irradiation period, and the second excitation light irradiation period may overlap.
  • the reference light irradiation period, the first excitation light irradiation period, and the second excitation light irradiation period may at least partially overlap each other.
  • irradiation with reference light, irradiation with first excitation light, and irradiation with second excitation light are started simultaneously.
  • the signal string extraction unit 31 of the control device 30 acquires a set of pixel signals at a plurality of different focal points from the sensor 5 (1803).
  • the set of pixel signals at a plurality of different focal points here includes a set of reflected light pixel signals, a set of first pixel signals, and a set of second pixel signals.
  • in this case, only one set of pixel signals at a plurality of different focal points is acquired. That is, by performing one imaging operation on the same irradiated object 1, a set of reflected-light pixel signals, a set of first pixel signals, and a set of second pixel signals are obtained.
  • when the reference light, the first excitation light, and the second excitation light are irradiated at different times, three sets of pixel signals at a plurality of different focal points are acquired.
  • in that case, three imaging operations are performed on the same irradiated object 1 in order to obtain the set of reflected-light pixel signals, the set of first pixel signals, and the set of second pixel signals. That is, for the same irradiated object 1, one imaging operation is performed to obtain the set of reflected-light pixel signals, one imaging operation is performed to obtain the set of first pixel signals, and one imaging operation is performed to obtain the set of second pixel signals.
  • as an example, the stage control unit 33 extracts arbitrary pixel signals from the set of pixel signals at a plurality of different focal points and determines whether any of the pixel signals has a signal value larger than a predetermined threshold value (1804). As an example, the signal value is a luminance value. If the condition is satisfied, the process proceeds to step 1806. If the condition is not satisfied, the process proceeds to step 1805; in this case, the stage control unit 33 drives the stage 4. As an example, the stage control unit 33 changes or adjusts the height (Z direction) of the stage 4 (1805). The content of step 1805 is the same as that of step 1108 in FIG. 11.
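  • As an illustration of the threshold check and stage adjustment loop of steps 1803 to 1805, the following hedged Python sketch (function names, the toy sensor model, and all numeric values are assumptions introduced here) keeps stepping the stage in Z until some pixel signal exceeds the threshold:

      def find_valid_z_offset(acquire_stack, threshold, z_step, max_tries=10):
          """Step the stage in Z until some pixel signal exceeds `threshold`.

          acquire_stack(z_offset) is assumed to drive the stage to the given Z
          offset and return a list of pixel-signal (luminance) values taken at
          several focal positions.  Returns (z_offset, stack) on success.
          """
          z_offset = 0.0
          for _ in range(max_tries):
              stack = acquire_stack(z_offset)                 # step 1803
              if any(value > threshold for value in stack):   # step 1804
                  return z_offset, stack
              z_offset += z_step                              # step 1805: adjust stage height
          raise RuntimeError("no pixel signal exceeded the threshold")

      # Toy stand-in for the sensor/stage: signals only appear near z_offset = 0.2.
      fake = lambda z: [5.0, 80.0, 5.0] if abs(z - 0.2) < 0.05 else [5.0, 6.0, 5.0]
      print(find_valid_z_offset(fake, threshold=50.0, z_step=0.1))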
  • the signal sequence extraction unit 31 determines the reference focal position Z 0 from a set of pixel signals at a plurality of different focal points (1806). As an example, the signal sequence extraction unit 31 generates an image from a signal sequence of pixel signals at a common focal position, and holds a plurality of images at a plurality of focal positions. The signal sequence extraction unit 31 performs the following processing using information of a plurality of images. Note that it is not always necessary to treat the image as an image, and the processing described below may be performed as a set of pixel signals.
  • the signal sequence extraction unit 31 divides the image at each focal position into a plurality of regions. As an example, the signal sequence extraction unit 31 calculates the contrast of the region corresponding to the same position in the image at each focal position. As an example, the signal sequence extraction unit 31 calculates the contrast of the central region of the image at each focal position.
  • the area for calculating the contrast may be another area or all areas on the image.
  • FIG. 19 shows an example of a method for determining the reference focal position Z 0 .
  • FIG. 19 shows the relationship between the focal position and contrast. If the reflected light, the first fluorescence, and the second fluorescence are not substantially affected by light of other wavelengths at the respective in-focus positions, a contrast graph is obtained as shown by the approximate curve in FIG. Has three peaks.
  • the signal sequence extraction unit 31 extracts three focal positions corresponding to three peaks from a graph representing the relationship between the focal position and the contrast.
  • a known method can be used for peak extraction.
  • the peak may be extracted using information of the first derivative or second derivative of the approximate curve representing the relationship between the focal position and the contrast.
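  • A possible way to realize the contrast calculation and peak extraction described above is sketched below in Python (illustration only; the use of the standard deviation as the contrast measure, the simple local-maximum test standing in for the first/second-derivative analysis, and the toy contrast curve are assumptions introduced here):

      import numpy as np

      def region_contrast(image, region=None):
          """Contrast of one region of an image, here taken as the standard
          deviation of the pixel values (one common contrast measure)."""
          patch = image if region is None else image[region]
          return float(np.std(patch))

      def find_contrast_peaks(focal_positions, contrasts):
          """Return focal positions that are local maxima of the contrast curve.

          With reflected light, first fluorescence, and second fluorescence all
          present, three peaks are expected (cf. FIG. 19)."""
          c = np.asarray(contrasts, dtype=float)
          peaks = []
          for i in range(1, len(c) - 1):
              if c[i] > c[i - 1] and c[i] > c[i + 1]:   # sign change of the first derivative
                  peaks.append(focal_positions[i])
          return peaks

      # Toy contrast curve with three humps (reflected light in the middle).
      z = np.linspace(-5, 5, 11)
      contrast = 1.0 / (1 + (z + 2) ** 2) + 1.2 / (1 + z ** 2) + 0.9 / (1 + (z - 2) ** 2)
      print(find_contrast_peaks(z, contrast))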
  • the chromatic aberration information stored in the chromatic aberration information storage unit 34 includes at least the chromatic aberration amount ΔZ 1 of the first fluorescence relative to the focal position of the reflected light and the chromatic aberration amount ΔZ 2 of the second fluorescence relative to the focal position of the reflected light.
  • by referring to the chromatic aberration information, the signal sequence extraction unit 31 can grasp, as an example, that (i) the in-focus position of the first fluorescence is on the positive side (+Z side) with respect to the in-focus position of the reflected light and (ii) the in-focus position of the second fluorescence is on the negative side (−Z side) with respect to the in-focus position of the reflected light.
  • the signal string extraction unit 31 determines that the central peak among the three peaks in FIG. 19 is the focused position of the reflected light (that is, the reference focal position Z 0 ) by referring to the chromatic aberration information.
  • the signal string extraction unit 31 acquires information on the first fluorescence chromatic aberration amount ⁇ Z 1 from the chromatic aberration information storage unit 34.
  • the signal sequence extraction unit 31 extracts the signal sequence of the pixel signal corresponding to the focal point of the first fluorescence by using the chromatic aberration amount ⁇ Z 1 of the first fluorescence from the reference focal position Z 0 (1807).
  • the signal sequence extraction unit 31 acquires information on the second fluorescence chromatic aberration amount ⁇ Z 2 from the chromatic aberration information storage unit 34.
  • the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ 2 of the second fluorescence from the reference focal position Z 0 to extract the signal sequence of the pixel signals corresponding to the focal point of the second fluorescence (1808).
  • the reference focal position Z 0 + chromatic aberration amount ⁇ Z 1 or ⁇ Z 2 may not coincide with a plurality of different focal points Z ⁇ 5 to Z 5 .
  • as an example, assume that Z 0 + ΔZ 1 is Z 4.4 .
  • since Z 4.4 is closer to Z 4 than to Z 5 , the signal sequence extraction unit 31 may extract the signal sequence corresponding to the closer Z 4 .
  • alternatively, the signal sequence extraction unit 31 may generate a signal value corresponding to Z 4.4 by multiplying each of the signal value of the pixel signal at Z 4 and the signal value of the pixel signal at Z 5 by a weighting factor.
  • a weighted average may be used.
  • the weighting factors applied to the signal value of the pixel signal at Z 4 and to the signal value of the pixel signal at Z 5 may be set to the same value, or may be set to different values according to the distance from the reference focal position Z 0 (see the sketch after this item).
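  • A minimal Python sketch of this nearest-plane selection and distance-weighted blending (illustrative only; the array shapes, the function name, and the concrete weighting scheme are assumptions introduced here, with Z 4.4 used as in the example above):

      import numpy as np

      def plane_at(stack, focal_positions, target_z):
          """Pick or synthesize the pixel-signal plane at `target_z`.

          stack: array of shape (num_focal_positions, num_pixels) holding the
          signal sequence acquired at each focal position.
          If target_z (e.g. Z0 + deltaZ1 = Z4.4) falls between two acquired focal
          positions, the two neighbouring planes are blended with weights that
          depend on their distance to target_z (a weighted average); rounding to
          the nearest plane instead corresponds to "extract the closer Z4".
          """
          zs = np.asarray(focal_positions, dtype=float)
          idx = np.searchsorted(zs, target_z)
          if idx == 0 or target_z >= zs[-1]:
              return stack[min(idx, len(zs) - 1)]
          z_lo, z_hi = zs[idx - 1], zs[idx]
          w_hi = (target_z - z_lo) / (z_hi - z_lo)   # distance-based weighting factors
          w_lo = 1.0 - w_hi
          return w_lo * stack[idx - 1] + w_hi * stack[idx]

      # Toy example: 11 focal positions Z-5 .. Z5, 4 pixels per plane.
      focal_positions = np.arange(-5, 6)
      stack = np.tile(focal_positions[:, None].astype(float), (1, 4))  # plane value == its Z index
      print(plane_at(stack, focal_positions, 4.4))   # -> values close to 4.4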
  • the signal sequence processing unit 32 performs the first signal processing on the extracted signal sequence of the first fluorescence pixel signals, and performs the second signal processing on the extracted signal sequence of the second fluorescence pixel signals (1809). Note that the content of step 1809 is the same as that of step 1112 in FIG. 11.
  • according to the present embodiment, even when the light source device 2 emits the reference light, the first excitation light, and the second excitation light together, a signal sequence corresponding to the focal point of the first fluorescence corresponding to the first excitation light and a signal sequence corresponding to the focal point of the second fluorescence corresponding to the second excitation light can be obtained. For example, it may take several seconds to irradiate the excitation light and obtain sufficient information on the corresponding fluorescence.
  • when the reference light, the first excitation light, and the second excitation light are emitted together, the time for acquiring the first fluorescence information and the second fluorescence information can therefore be shortened compared with the case where the first excitation light and the second excitation light are irradiated sequentially to acquire the first fluorescence information and the second fluorescence information.
  • FIG. 21 is another example of the configuration of the measuring apparatus according to the second embodiment.
  • the measurement apparatus 10 may further include a separation unit 22 that separates the reflected light, the first fluorescence, and the second fluorescence between the irradiated object 1 and the second objective lens 21.
  • the separation unit may include a separation optical element (for example, a dichroic mirror) that separates reflected light and fluorescence.
  • the separation unit may include a bandpass filter (wavelength selection unit) that separates the first fluorescence and the second fluorescence.
  • the measuring apparatus 10 includes a third objective lens 23, a second microlens array 24, and a second sensor 25.
  • the second microlens array 24 and the second sensor 25 are arranged in this order in the vicinity of the focal plane of the third objective lens 23.
  • the configuration of the second microlens array 24 is the same as that of the microlens array 8.
  • the configuration of the second sensor 25 is the same as that of the sensor 5.
  • the separation unit 22 is configured to guide the reflected light and the first fluorescence to the sensor 5 side and guide the reflected light and the second fluorescence to the second sensor 25.
  • the signal sequence extraction unit 31 can calculate the reference focal position Z 0 from each of the sets of multifocal reflected-light pixel signals acquired by the sensor 5 and by the second sensor 25. For some reason, there may be a difference between the optical path length to the sensor 5 side and the optical path length to the second sensor 25 side. Even if such a deviation in optical path length occurs, by using the reference focal position Z 0 obtained from the reflected light received by each of the sensors 5 and 25, the information of each fluorescence can be acquired without being affected by the deviation of the optical path length.
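  • The cancellation of the optical-path-length deviation can be illustrated with the following small Python sketch (the function name and the plane-index bookkeeping are assumptions introduced here): because each sensor contributes its own reference focal position Z 0 , a constant path-length offset shifts both Z 0 and the fluorescence plane by the same amount.

      def fluorescence_plane_index(reflected_focus_index, delta_z_planes):
          """Index of the fluorescence focal plane on one sensor.

          reflected_focus_index: index of the plane where the reflected light is
          in focus on that sensor (its own reference focal position Z0).
          delta_z_planes: chromatic aberration amount expressed in plane steps.
          A constant optical-path-length offset between the sensor 5 side and the
          second sensor 25 side shifts Z0 and the fluorescence plane together,
          so it cancels when the offset is taken from each sensor's own Z0.
          """
          return reflected_focus_index + delta_z_planes

      # Sensor 5: reflected light focused at plane 5; second sensor 25: its path is
      # longer, so its reflected light focuses at plane 7.  Each fluorescence plane
      # is still found at the same offset from that sensor's own Z0.
      print(fluorescence_plane_index(5, 2))    # first fluorescence on sensor 5
      print(fluorescence_plane_index(7, -2))   # second fluorescence on sensor 25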
  • ΔZ 1 ≠ ΔZ 2 .
  • the measurement apparatus 10 may further include a second separation unit (for example, a dichroic mirror) for separating the third fluorescence and the fourth fluorescence.
  • also in the configuration of FIG. 21, the signal sequence at the focal point of the first fluorescence and the signal sequence at the focal point of the second fluorescence can be acquired.
  • FIG. 22 is an example of a flowchart for measuring the irradiated object 1 by the measuring apparatus 10.
  • in FIG. 22, the irradiated object 1 (for example, one of the plurality of irradiated objects 1 supported by the support member 60) is irradiated with three lights: the first excitation light, the second excitation light, and the reference light.
  • as an example, the light source device 2 emits these lights in the order of the first excitation light, the reference light, and the second excitation light.
  • as an example, the period in which the light source device 2 emits the first excitation light, the period in which it emits the reference light, and the period in which it emits the second excitation light do not overlap. Therefore, the irradiated object 1 is irradiated in the order of the first excitation light, the reference light, and the second excitation light.
  • as an example, the period in which the light source device 2 emits the reference light is between the period in which it emits the first excitation light and the period in which it emits the second excitation light. Note that the emission order of the first excitation light, the second excitation light, and the reference light is not limited to this, and may be any order.
  • Steps 2201 to 2209 in FIG. 22 include the same contents as step 1001 in FIG. 10 and steps 1101 to 1108 in FIG. 11.
  • the signal sequence extraction unit 31 determines a reference focal position from a set of pixel signals of reflected light at a plurality of different focal points (2210). As an example, the signal sequence extraction unit 31 generates an image from a signal sequence of pixel signals at a common focal position, and holds a plurality of images at a plurality of focal positions. The signal sequence extraction unit 31 performs the following processing using information of a plurality of images. Note that it is not always necessary to treat the image as an image, and the following processing may be performed with a set of pixel signals.
  • the signal sequence extraction unit 31 divides the image at each focal position into a plurality of regions.
  • FIG. 23 shows an example in which the images at the respective focal positions Z ⁇ 3 to Z 3 are divided into a plurality of regions.
  • the signal sequence extraction unit 31 may obtain the contrast at each focal position for all of the plurality of areas and determine the reference focal position for each area.
  • the signal string extraction unit 31 calculates the contrast of the region corresponding to the same position in the images at the respective focal positions Z ⁇ 3 to Z 3 .
  • the signal sequence extraction unit 31 determines a position where the contrast is maximum from among the focus positions Z ⁇ 3 to Z 3 as the reference focus position.
  • the signal sequence extraction unit 31 calculates the contrast for each region, and determines the reference focal position for each region.
  • FIG. 24 is an example of the inclination of the irradiated object 1.
  • the in-focus position differs between a region near the right end of the irradiated object 1 (a region including the first vertex 1a) and a region near the left end of the irradiated object 1 (a region including the third vertex 1c). Therefore, the signal sequence extraction unit 31 determines the reference focal position for each region.
  • the signal sequence extraction unit 31 may obtain the contrast at each focal position with respect to at least three regions among a plurality of regions. This is because the inclination of the plane can be specified if at least three points are determined. As an example, the signal sequence extraction unit 31 may determine, for each region, the focal position where the contrast is maximum as the reference focal position.
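  • The per-region determination of the reference focal position can be sketched as follows in Python (illustration only; the 3 x 3 region grid, the standard-deviation contrast measure, and the synthetic tilted-object data are assumptions introduced here):

      import numpy as np

      def per_region_reference_focus(stack, focal_positions, grid=(3, 3)):
          """Reference focal position for each region of the field of view.

          stack: array of shape (num_focal_positions, height, width) built from
          the reflected-light pixel signals.  The image at each focal position is
          split into grid[0] x grid[1] regions, the contrast (standard deviation)
          of each region is computed at every focal position, and the focal
          position with the maximum contrast is taken as that region's reference
          focal position.
          """
          nz, h, w = stack.shape
          gy, gx = grid
          ref = np.zeros(grid)
          for iy in range(gy):
              for ix in range(gx):
                  ys = slice(iy * h // gy, (iy + 1) * h // gy)
                  xs = slice(ix * w // gx, (ix + 1) * w // gx)
                  contrast = [np.std(stack[k, ys, xs]) for k in range(nz)]
                  ref[iy, ix] = focal_positions[int(np.argmax(contrast))]
          return ref

      # Toy stack: contrast of the left / centre / right thirds peaks at different
      # Z values, mimicking a tilted irradiated object (cf. FIG. 24).
      zs = np.arange(-3, 4)
      stack = np.zeros((len(zs), 9, 9))
      for col, z_best in zip((slice(0, 3), slice(3, 6), slice(6, 9)), (1.0, 0.0, -1.0)):
          for k, z in enumerate(zs):
              stack[k, :, col] = np.random.default_rng(0).normal(0, 1 / (1 + (z - z_best) ** 2), (9, 3))
      print(per_region_reference_focus(stack, zs))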
  • the stage control unit 33 determines whether the signal values of the pixel signals of all the calculated reference focal positions (three reference focal positions) are larger than a predetermined threshold (2211).
  • the signal value is a luminance value.
  • the stage control unit 33 determines whether all the luminance values of the three reference focal positions exceed a predetermined threshold value.
  • if even one of the luminance values does not exceed the threshold value, the support member 60 may be regarded as being inclined by a certain amount or more. If the inclination of the support member 60 is at or above a certain level, the reference focal position differs greatly from region to region, and as a result, the signal sequences corresponding to the focal point of the first fluorescence and to the focal point of the second fluorescence may not be extracted appropriately.
  • in that case, the process proceeds to step 2209.
  • as an example, in step 2209 the stage control unit 33 may drive the stage 4 and adjust the tilt of the stage 4. If the above condition is satisfied, the process proceeds to step 2212.
  • the stage control unit 33 may display an error on the display device 40 when the signal value of the pixel signal at the reference focal position is equal to or less than a predetermined threshold value.
  • the stage controller 33 may record the content of the error as data in the storage device when the signal value of the pixel signal at the reference focal position is equal to or less than a predetermined threshold value.
  • as another example, the stage control unit 33 may calculate the contrast of the regions corresponding to the three calculated reference focal positions and determine whether all of the contrast values are larger than a predetermined threshold value. If even one of the contrasts does not exceed the threshold value, the support member 60 may be regarded as being inclined by a certain amount or more.
  • as an example, the signal sequence extraction unit 31 may determine the reference focal position for three regions among the plurality of regions and determine the inclination of the irradiated object 1 from the reference focal position of each region. Since the various dimensions (vertical dimension, horizontal dimension, distance between spots, etc.) of the irradiated object 1 are known in advance, the signal sequence extraction unit 31 may calculate the inclination of the irradiated object 1 using the information on the three reference focal positions and the dimensions of the irradiated object 1. As an example, the stage control unit 33 may determine whether the calculated inclination is larger than a predetermined threshold value, and may change or adjust the height (Z direction) of the stage 4 when the calculated inclination is larger than the predetermined threshold value.
  • next, as an example, the signal sequence extraction unit 31 determines the reference focal position in all regions other than the three regions. As an example, if the three reference focal positions are the same, the signal sequence extraction unit 31 may set all the regions other than the three regions to the same reference focal position. In this case, the subsequent steps 2212 to 2214 are the same as steps 1110 to 1112 in FIG. 11, and their description is therefore omitted.
  • as an example, the signal sequence extraction unit 31 may set each peripheral region to the same reference focal position. For example, if the reference focal position in the region including the first vertex 1a of the irradiated object 1 is Z 0 , the signal sequence extraction unit 31 may set the reference focal positions of several regions around that region to Z 0 . If the reference focal position in the region including the second vertex 1b of the irradiated object 1 is Z 1 , it may set the reference focal positions of several regions around that region to Z 1 . If the reference focal position in the region including the third vertex 1c of the irradiated object 1 is Z 2 , it may set the reference focal positions of several regions around that region to Z 2 .
  • the signal sequence extraction unit 31 determines reference focal positions in all regions other than the three regions according to the calculated inclination of the irradiated object 1.
  • the signal sequence extraction unit 31 obtains a change amount in the Z direction between adjacent regions according to the calculated inclination, and determines a reference focal position based on the change amount.
  • as an example, if the value in the Z direction calculated for a region on the basis of the change amount is less than or equal to the intermediate value of Z 0 and Z 1 , the reference focal position of that region may be determined to be Z 0 ; if it is larger than the intermediate value, the reference focal position may be determined to be Z 1 (see the sketch after this item).
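  • A small Python sketch of this snapping rule (illustrative; the function name and the numeric example are assumptions), in which a region's tilt-predicted Z value at or below the midpoint of Z 0 and Z 1 is assigned Z 0 and a larger value is assigned Z 1 :

      def assign_reference_focus(predicted_z, available_levels):
          """Snap a tilt-predicted Z value of a region to the nearest reference level.

          predicted_z: Z value of the region obtained from the calculated inclination
          (the change amount between adjacent regions).
          available_levels: the discrete reference focal positions, e.g. (Z0, Z1, Z2)
          expressed as numbers.  A value exactly at the midpoint of two levels is
          assigned to the lower level, a larger value to the upper level.
          """
          return min(available_levels, key=lambda level: (abs(level - predicted_z), level))

      # With Z0 = 0.0 and Z1 = 1.0: 0.5 (the intermediate value) maps to Z0,
      # 0.6 maps to Z1.
      print(assign_reference_focus(0.5, (0.0, 1.0)))  # 0.0
      print(assign_reference_focus(0.6, (0.0, 1.0)))  # 1.0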
  • FIG. 25 is a diagram for explaining the processing in steps 2212 and 2213 when the reference focal positions are different in a plurality of regions of the irradiated object 1.
  • FIG. 25 shows a set of reflected light pixel signals, a set of first pixel signals, and a set of second pixel signals.
  • the signal string extraction unit 31 acquires information on the first fluorescence chromatic aberration amount ⁇ Z 1 from the chromatic aberration information storage unit 34.
  • the signal sequence extraction unit 31 extracts the signal sequence of the pixel signals corresponding to the focal point of the first fluorescence by using the chromatic aberration amount ΔZ 1 of the first fluorescence from the reference focal position of each region of the irradiated object 1 (2212).
  • a set of each pixel signal is acquired at a plurality of different focal points Z ⁇ 4 to Z 4 .
  • assume that the reference focal position of the plurality of regions including the first vertex 1a of the irradiated object 1 has been determined to be Z 0 , that the reference focal position of the plurality of regions including the second vertex 1b (the central region of the irradiated object 1) has been determined to be Z 1 , and that the reference focal position of the plurality of regions including the third vertex 1c (the left-end region of the irradiated object 1) has been determined to be Z 2 .
  • the shaded portion 2501 in FIG. 25 indicates the reference focal position for each region.
  • for the plurality of regions including the first vertex 1a of the irradiated object 1, the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ 1 of the first fluorescence from the reference focal position Z 0 to acquire, from the set of first pixel signals, the signal sequence of the pixel signals at the focal position Z 2 (the shaded portion of Z 2 in 2502) (arrow 2511).
  • for the plurality of regions including the second vertex 1b of the irradiated object 1, the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ 1 of the first fluorescence from the reference focal position Z 1 to acquire, from the set of first pixel signals, the signal sequence of the pixel signals at the focal position Z 3 (the shaded portion of Z 3 in 2502) (arrow 2512).
  • for the plurality of regions including the third vertex 1c of the irradiated object 1, the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ 1 of the first fluorescence from the reference focal position Z 2 to acquire, from the set of first pixel signals, the signal sequence of the pixel signals at the focal position Z 4 (the shaded portion of Z 4 in 2502) (arrow 2513).
  • the combination of the signal sequences extracted as described above corresponds to a signal sequence representing a fluorescence image at the focal point of the first fluorescence.
  • the signal sequence extraction unit 31 acquires information on the second fluorescence chromatic aberration amount ⁇ Z 2 from the chromatic aberration information storage unit 34.
  • the signal sequence extraction unit 31 extracts the signal sequence of the pixel signals corresponding to the focal point of the second fluorescence by using the chromatic aberration amount ΔZ 2 of the second fluorescence from the reference focal position of each region of the irradiated object 1 (2213).
  • for the plurality of regions including the first vertex 1a of the irradiated object 1, the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ 2 of the second fluorescence from the reference focal position Z 0 to acquire, from the set of second pixel signals, the signal sequence of the pixel signals at the focal position Z −2 (the shaded portion of Z −2 in 2503) (arrow 2521).
  • for the plurality of regions including the second vertex 1b of the irradiated object 1, the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ 2 of the second fluorescence from the reference focal position Z 1 to acquire, from the set of second pixel signals, the signal sequence of the pixel signals at the focal position Z −1 (the shaded portion of Z −1 in 2503) (arrow 2522).
  • for the plurality of regions including the third vertex 1c of the irradiated object 1, the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ 2 of the second fluorescence from the reference focal position Z 2 to acquire, from the set of second pixel signals, the signal sequence of the pixel signals at the focal position Z 0 (the shaded portion of Z 0 in 2503) (arrow 2523).
  • the combination of the signal sequences extracted as described above corresponds to a signal sequence representing a fluorescence image at the focal point of the second fluorescence.
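  • The per-region extraction of steps 2212 and 2213 can be illustrated with the following Python sketch (illustration only; the array shapes, the nearest-plane rounding, and the toy data loosely mimicking FIG. 25 are assumptions introduced here), which copies, for each region, the plane at its reference focal position plus the chromatic aberration amount and assembles the copies into one fluorescence image:

      import numpy as np

      def extract_fluorescence_image(stack, focal_positions, region_reference_z, delta_z):
          """Compose a fluorescence image at its focal point, region by region.

          stack: array (num_focal_positions, height, width) of one fluorescence's
          pixel signals (e.g. the set of first pixel signals, 2502 in FIG. 25).
          region_reference_z: array (gy, gx) of per-region reference focal positions
          (the shaded portions 2501).
          delta_z: chromatic aberration amount of this fluorescence (deltaZ1 or deltaZ2).
          For each region the acquired plane nearest to reference + delta_z is copied
          out, and the copied regions together form the signal sequence representing
          the fluorescence image at its focal point.
          """
          nz, h, w = stack.shape
          gy, gx = region_reference_z.shape
          zs = np.asarray(focal_positions, dtype=float)
          out = np.zeros((h, w))
          for iy in range(gy):
              for ix in range(gx):
                  target = region_reference_z[iy, ix] + delta_z
                  k = int(np.argmin(np.abs(zs - target)))      # nearest acquired focal plane
                  ys = slice(iy * h // gy, (iy + 1) * h // gy)
                  xs = slice(ix * w // gx, (ix + 1) * w // gx)
                  out[ys, xs] = stack[k, ys, xs]
          return out

      # Toy data: 9 planes Z-4..Z4, a 6x6 image, per-region reference focal positions
      # Z2 / Z1 / Z0 from left to right, and deltaZ1 = +2 planes.
      zs = np.arange(-4, 5)
      stack = np.stack([np.full((6, 6), float(z)) for z in zs])
      region_ref = np.array([[2.0, 1.0, 0.0]] * 2)
      print(extract_fluorescence_image(stack, zs, region_ref, delta_z=2.0))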
  • the signal sequence processing unit 32 performs the first signal processing on the extracted signal sequence of pixel signals corresponding to the focal point of the first fluorescence, and performs the second signal processing on the extracted signal sequence of pixel signals corresponding to the focal point of the second fluorescence (2214). Note that the content of step 2214 is the same as that of step 1112 in FIG. 11.
  • FIG. 26 is another example of three points for calculating the reference focal position.
  • the three points for calculating the reference focal position may also be defined by an isosceles triangle connecting two vertices of the irradiated object 1 and the midpoint of a side of the irradiated object 1. If these three points are determined, the inclination of the plane can be specified.
  • in the example of FIG. 26, the reference focal position at the two vertices of the irradiated object 1 is determined to be Z 1 , and the reference focal position at the midpoint of the side of the irradiated object 1 is determined to be Z 0 .
  • the present embodiment may be executed in any of the flowcharts of FIG. 10, FIG. 16, and FIG. 17.
  • according to the present embodiment, the signal sequence at the focal point of the first fluorescence and the signal sequence at the focal point of the second fluorescence can be acquired while determining the inclination of the irradiated object 1. It is also possible to automatically detect that the irradiated object 1 is inclined by a certain amount or more and to adjust the inclination of the irradiated object 1. Conventionally, the inclination of the irradiated object 1 could not be determined. Moreover, since a fluorescence image has conventionally been acquired at a specific focal position (that is, one focal position) determined with the AF function, when the irradiated object is inclined, there was a possibility that in-focus information could not be obtained in all areas of the irradiated object.
  • in the present embodiment, it is possible to determine the tilt of the irradiated object 1 and to determine the in-focus position for each region of the irradiated object 1. Even when the reference focal position differs from region to region of the irradiated object 1, a signal sequence of pixel signals corresponding to the focal point can be acquired.
  • note that the present embodiment is also applicable to a configuration in which the reference light, the first excitation light, and the second excitation light are emitted together.
  • in that case, the signal sequence extraction unit 31 may obtain the change in contrast with respect to at least three regions of the irradiated object 1 and determine the contrast peaks for each region.
  • as an example, the signal sequence extraction unit 31 may determine one of the three peaks as the reference focal position using the chromatic aberration information.
  • the light source device 2 emits the first excitation light and the second excitation light without emitting the reference light.
  • the light source device 2 emits the first excitation light and the second excitation light in the order of the first excitation light and the second excitation light.
  • the emission order of the first excitation light and the second excitation light is not limited to this, and may be the order of the second excitation light and the first excitation light.
  • the signal string extraction unit 31 acquires from the sensor 5 a set of first pixel signals at a plurality of different focal points and a set of second pixel signals at a plurality of different focal points. Next, as an example, the signal string extraction unit 31 determines the focal point position of the first fluorescence from a set of first pixel signals at a plurality of different focal points. As an example, the signal sequence extraction unit 31 determines the focal position of the pixel signal having the highest luminance value as the focal point position of the first fluorescence from the set of first pixel signals at a plurality of different focal points. If a pixel signal having a sufficient luminance value is obtained, the focal position where the luminance value is maximum can be regarded as the focal point position of the first fluorescence. The signal sequence extraction unit 31 acquires a signal sequence of pixel signals corresponding to the in-focus position of the first fluorescence determined here.
  • the chromatic aberration information storage unit 34 includes information indicating the relationship of the chromatic aberration amount between the first fluorescence and the second fluorescence. Using this relationship, it is possible to obtain the in-focus position of the second fluorescence from the in-focus position of the first fluorescence.
  • the signal sequence extraction unit 31 uses the amount of chromatic aberration of the second fluorescence from the focus position of the first fluorescence to extract the signal sequence of the pixel signal corresponding to the focus position of the second fluorescence.
  • the signal sequence of the pixel signals extracted here substantially corresponds to a signal sequence that represents a fluorescence image at the focal point of the second fluorescence.
  • the signal string extraction unit 31 may determine that a pixel signal having a sufficient luminance value has not been obtained when the highest luminance value is smaller than a predetermined threshold value. Depending on the specimen, fluorescence may not be emitted from the irradiated object 1. Information that a pixel signal having a sufficient luminance value cannot be obtained can also be effective information for the user.
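  • A hedged Python sketch of this extraction without reference light (illustration only; the function name, array shapes, threshold value, and the sign of the relative chromatic aberration amount are assumptions introduced here): the in-focus plane of the first fluorescence is taken as the plane with the highest luminance, checked against a threshold, and the second fluorescence plane is derived from the relative chromatic aberration amount.

      import numpy as np

      def extract_by_relative_aberration(stack1, stack2, focal_positions,
                                         relative_delta_z, threshold):
          """Extraction using only the two fluorescences, without reference light.

          stack1, stack2: arrays (num_focal_positions, num_pixels) of the first and
          second pixel signals.  The in-focus plane of the first fluorescence is the
          plane whose maximum luminance is largest; if that luminance does not reach
          `threshold`, the function reports that no sufficient pixel signal was found.
          relative_delta_z: chromatic aberration amount of the second fluorescence
          relative to the first fluorescence (assumed to come from the chromatic
          aberration information storage unit 34).
          """
          zs = np.asarray(focal_positions, dtype=float)
          peak_per_plane = stack1.max(axis=1)
          k1 = int(np.argmax(peak_per_plane))
          if peak_per_plane[k1] < threshold:
              return None            # no pixel signal with a sufficient luminance value
          z1 = zs[k1]                                            # in-focus position of the first fluorescence
          k2 = int(np.argmin(np.abs(zs - (z1 + relative_delta_z))))
          return stack1[k1], stack2[k2]     # signal sequences at the two focal points

      zs = np.arange(-3, 4)
      stack1 = np.array([[10, 10], [10, 12], [10, 90], [10, 30], [10, 11], [10, 10], [10, 10]], float)
      stack2 = np.tile(zs[:, None].astype(float), (1, 2))
      print(extract_by_relative_aberration(stack1, stack2, zs, relative_delta_z=-2.0, threshold=50.0))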
  • also in the present embodiment, a signal sequence corresponding to the focal point of the first fluorescence and a signal sequence corresponding to the focal point of the second fluorescence can be acquired without performing focusing by the AF function.
  • conversely, the signal sequence extraction unit 31 may determine the in-focus position of the second fluorescence from the set of second fluorescence pixel signals and extract the signal sequence of the first fluorescence pixel signals by using the chromatic aberration amount of the first fluorescence from the in-focus position of the second fluorescence.
  • the measurement apparatus 10 may include a separation unit that separates the first fluorescence and the second fluorescence between the irradiated object 1 and the microlens array 8.
  • the measuring apparatus 10 may include a third objective lens, a second microlens array, and a second sensor.
  • the second microlens array and the second sensor are arranged in this order near the focal plane of the third objective lens.
  • the configuration of the second microlens array is the same as that of the microlens array 8.
  • the configuration of the second sensor is the same as that of the sensor 5.
  • the separation unit is configured to guide the first fluorescence to the sensor 5 side and guide the second fluorescence to the second sensor.
  • the signal sequence extraction unit 31 may obtain a contrast from a set of pixel signals at a plurality of different focal positions and determine two contrast peaks.
  • FIG. 27 is a diagram illustrating a measurement system (screening apparatus) including the measurement apparatus 10 according to the above-described embodiment.
  • the measurement system is a system that automatically performs the measurement method of the irradiated object 1 described above.
  • the measurement system 100 includes a pretreatment device (reaction device, bioassay device) 101, a transport device (plate loader) 102, and a measurement device 103.
  • the pretreatment apparatus 101 is a bioassay apparatus that prepares an object 1 to be measured.
  • as an example, the pretreatment apparatus 101 is an apparatus that injects a specimen containing a labeled target into the irradiated object 1 (the biomolecules arranged in the spots) and carries out a specific reaction between the biomolecules and the target.
  • as an example, the pretreatment device 101 includes a stage device that supports the support member 60 (for example, a plate), a dispensing device that includes a dispensing nozzle for injecting the specimen into each spot, and a cleaning device for cleaning the irradiated object after the specimen has been injected.
  • the pretreatment device 101 may include a drying device that dries the irradiated object after cleaning.
  • the pre-processing apparatus 101 may be configured to process the support members 60 one by one or may be configured to process a plurality of sheets simultaneously.
  • the transport device 102 is a transport mechanism that transports the support member 60 from the pretreatment device 101 to the measurement device 103.
  • a known transfer robot device can be used as the transfer device 102.
  • the transport apparatus 102 unloads the support member 60 from the stage apparatus of the pretreatment apparatus 101 and loads it into the measurement apparatus 103.
  • the transport apparatus 102 may be provided with a mechanism for temporarily waiting for the support member 60 after cleaning.
  • the measuring apparatus 103 includes the measuring apparatus 10 according to the above-described embodiment.
  • the measuring device 103 measures the irradiated object 1 of the support member 60 disposed on the stage 4 by the transport device 102.
  • the measurement process by the measurement apparatus 103 is as described above.
  • the conveyance device 102 carries out the support member 60 whose measurement has been completed from the stage 4 and conveys it to a predetermined position.
  • according to the measurement system 100, the pretreatment (reaction process, bioassay) for the irradiated object 1 and the measurement process for the irradiated object 1 after the pretreatment (reaction process, bioassay) are performed continuously.
  • biomolecule arrays can thereby be screened.
  • the control device 30 may include a switching processing unit for switching the processes of the first to fourth embodiments described above.
  • the switching processing unit can select one of the processes in the first to fourth embodiments.
  • the processing of the control device 30 described above can also be realized by software program codes that realize these functions.
  • a storage medium in which the program code is recorded is provided to the system or apparatus, and the computer (or CPU or MPU) of the system or apparatus reads the program code stored in the storage medium.
  • the program code itself read from the storage medium realizes the functions of the above-described embodiments, and the program code itself and the storage medium storing the program code constitute the present invention.
  • as a storage medium for supplying such program code, for example, a flexible disk, a CD-ROM, a DVD-ROM, a hard disk, an optical disk, a magneto-optical disk, a CD-R, a magnetic tape, a nonvolatile memory card, a ROM, or the like is used.
  • control lines and information lines are those that are considered necessary for the explanation, and not all control lines and information lines on the product are necessarily shown. All the components may be connected to each other.
  • ... stage control unit, 34 ... chromatic aberration information storage unit, 40 ... display device, 51 ... control line, 60 ... support member, 81 ... microlens, 100 ... measurement system, 101 ... pretreatment device, 102 ... transport device, 103 ... measurement device

Landscapes

  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Microscopes, Condensers (AREA)

Abstract

The objective of the present invention is to provide a measurement apparatus, a measurement system, a signal string processing method, and a program which are capable of improving throughput. The measurement apparatus is provided with a light source device to emit first excitation light and second excitation light to an irradiation object, a micro lens array provided with a plurality of micro lenses arranged two-dimensionally, a sensor which has light receiving elements arranged two-dimensionally, and which receives, via the micro lens array, first fluorescent light generated when the first excitation light is emitted to the irradiation object, and second fluorescent light generated when the second excitation light is emitted to the irradiation object, and a control unit which generates a first result from the signal string of a first pixel signal corresponding to the focal point of the first fluorescent light, and which generates a second result from the signal string of a second pixel signal corresponding to the focal point of the second fluorescent light.

Description

Measuring apparatus, measuring system, signal sequence processing method, and program
 The present invention relates to a measuring apparatus, a measuring system, a signal sequence processing method, and a program.
 Patent Document 1 discloses a microscope apparatus including a dichroic mirror and an in-focus signal generator for providing an autofocus function (hereinafter referred to as the "AF function").
JP 2010-66356 A
 The conventional technology described above has the following problems. As an example, assume a case in which two fluorescences having different wavelengths are detected from a measurement target using an imaging apparatus such as the microscope apparatus described above, and a fluorescence image of each is acquired. As an example, let the two fluorescences be red light and green light. When acquiring images of two fluorescences (red light and green light) having different wavelengths, it is known that the focal positions differ due to chromatic aberration. Therefore, in a conventional imaging apparatus, when an image of the red light is acquired, focusing is first performed using the AF function in consideration of the chromatic aberration amount of the red light. Next, when an image of the green light is acquired, focusing is performed using the AF function in consideration of the chromatic aberration amount of the green light. In this way, since the two fluorescences having different wavelengths have different chromatic aberration amounts, focusing is required every time an image is acquired. When many images are acquired with the imaging apparatus, the time required for focusing using the AF function becomes large, which greatly affects the throughput of image acquisition in the imaging apparatus.
 According to one aspect of the present invention, there is provided a measurement apparatus comprising: a light source device for irradiating an irradiated object with first excitation light and second excitation light; a microlens array including a plurality of two-dimensionally arranged microlenses; a sensor in which light receiving elements are arranged two-dimensionally and which receives, via the microlens array, first fluorescence emitted when the irradiated object is irradiated with the first excitation light and second fluorescence emitted when the irradiated object is irradiated with the second excitation light; and a control unit that generates a first result from a signal sequence of first pixel signals corresponding to the focal point of the first fluorescence and generates a second result from a signal sequence of second pixel signals corresponding to the focal point of the second fluorescence.
 According to another aspect of the present invention, there is provided a measurement system comprising: the measurement apparatus described above; a reaction device that reacts the irradiated object supported by a support member with a specimen; and a transport device that transports the support member to the measurement apparatus.
 According to another aspect of the present invention, there is provided a signal sequence processing method comprising: placing a support member on which a plurality of irradiated objects are arranged on a stage; irradiating the irradiated object with first excitation light and second excitation light by a light source device; receiving, with a sensor in which light receiving elements are arranged two-dimensionally, via a microlens array including a plurality of two-dimensionally arranged microlenses, first fluorescence emitted when the irradiated object is irradiated with the first excitation light and second fluorescence emitted when the irradiated object is irradiated with the second excitation light; and generating a first result from a signal sequence of first pixel signals corresponding to the focal point of the first fluorescence and generating a second result from a signal sequence of second pixel signals corresponding to the focal point of the second fluorescence.
 According to another aspect of the present invention, there is provided a program for causing an information processing apparatus including at least a computation unit and a storage unit to process a signal sequence of light received via a microlens array, wherein the signal sequence of the light is a set of pixel signals at a plurality of different focal positions and includes pixel signals acquired during a time period in which the first fluorescence is expected to be emitted and pixel signals acquired during a time period in which the second fluorescence is expected to be emitted, the program causing the computation unit to execute: a process of acquiring, from the set of pixel signals at the plurality of different focal positions, a signal sequence of first pixel signals corresponding to the focal point of the first fluorescence and a signal sequence of second pixel signals corresponding to the focal point of the second fluorescence; and a process of generating a first result from the signal sequence of the first pixel signals and generating a second result from the signal sequence of the second pixel signals.
Brief description of the drawings:
FIG. 1: Configuration of the measurement apparatus according to the first embodiment.
FIG. 2: Example of the support member and the irradiated object.
FIG. 3: Example of the filter block.
FIG. 4: Arrangement of the microlens array and the sensor.
FIG. 5: Example of acquiring a plurality of images with different focal positions.
FIG. 6: Extraction of pixel signals when the microlens array is placed at the imaging plane of the objective lens (Z = 0).
FIG. 7: Extraction of pixel signals when the imaging plane of the objective lens is at a position shifted from the microlens array (Z = h1).
FIG. 8: Chromatic aberration amounts of the fluorescences.
FIG. 9: Example of the components of the control device.
FIG. 10: Example of a flowchart for measuring the irradiated object with the measurement apparatus according to the first embodiment.
FIG. 11: Example of the detailed content of the flowchart of FIG. 10.
FIG. 12: Procedure for determining the reference focal position.
FIG. 13: Procedure for determining the reference focal position.
FIG. 14: Example of acquiring the signal sequence of pixel signals corresponding to the focal point of the first fluorescence.
FIG. 15: Example of acquiring the signal sequence of pixel signals corresponding to the focal point of the second fluorescence.
FIG. 16: Another example of a flowchart for measuring the irradiated object with the measurement apparatus.
FIG. 17: Another example of a flowchart for measuring the irradiated object with the measurement apparatus.
FIG. 18: Example of a flowchart for measuring the irradiated object with the measurement apparatus according to the second embodiment.
FIG. 19: Procedure for determining the reference focal position.
FIG. 20: Example of acquiring the signal sequence of pixel signals corresponding to the focal point of the first fluorescence and the signal sequence of pixel signals corresponding to the focal point of the second fluorescence.
FIG. 21: Another example of the configuration of the measurement apparatus according to the second embodiment.
FIG. 22: Example of a flowchart for measuring the irradiated object with the measurement apparatus according to the third embodiment.
FIG. 23: Procedure for determining the reference focal position.
FIG. 24: Relationship between the inclination of the irradiated object and the in-focus position.
FIG. 25: Example of acquiring the signal sequence of pixel signals corresponding to the focal point of the first fluorescence and the signal sequence of pixel signals corresponding to the focal point of the second fluorescence.
FIG. 26: Another example of the three points for determining the reference focal position.
FIG. 27: Configuration of the measurement system according to the fifth embodiment.
 Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. The accompanying drawings show specific embodiments in accordance with the principle of the present invention, but these are provided for the understanding of the present invention and are in no way to be used to interpret the present invention in a limiting manner.
 In the following description, an XYZ orthogonal coordinate system is set, and the positional relationship of each member will be described with reference to this XYZ orthogonal coordinate system. A predetermined direction in the horizontal plane is defined as the X-axis direction, the direction orthogonal to the X-axis direction in the horizontal plane as the Y-axis direction, and the direction orthogonal to both the X-axis direction and the Y-axis direction (that is, the vertical direction) as the Z-axis direction. The rotation (inclination) directions around the X axis, the Y axis, and the Z axis are defined as the θX, θY, and θZ directions, respectively.
[First Embodiment]
 A first embodiment of the measurement apparatus will be described with reference to FIGS. 1 to 17. FIG. 1 is a schematic configuration diagram illustrating an example of the measurement apparatus. As an example, the measurement apparatus is an optical microscope. The measurement apparatus may be another optical device or an imaging device.
 The measurement apparatus 10 includes a measurement apparatus main body 20 that observes the irradiated object (measurement target) 1, a control device 30 that controls the operation of the measurement apparatus main body 20, and a display device 40 connected to the control device 30. As an example, the control device 30 includes a computer system. As an example, the computer system includes at least a processor, a memory, and a storage device. As an example, the memory is a volatile memory, and the storage device is a nonvolatile storage such as a hard disk. As an example, the computer system may include input devices such as a keyboard and a pointing device (for example, a mouse). As an example, the display device 40 includes a flat panel display such as a liquid crystal display.
 The measurement apparatus main body 20 includes the light source device 2, the optical system 3, the stage 4, and the sensor 5. As an example, the measurement apparatus main body 20 includes a body (not shown). Each of the light source device 2, the optical system 3, the stage 4, and the sensor 5 is supported by the body.
 The light source device 2 can emit light of a plurality of different wavelengths. As an example, the light source device 2 can emit excitation light for generating fluorescence from the irradiated object 1. As an example, the light source device 2 can emit a plurality of excitation lights of different wavelengths for generating fluorescence from the irradiated object 1 and reference light for obtaining reflected light from the irradiated object 1. As an example, the light source device 2 can emit light of wavelength λ1 as the first excitation light, light of wavelength λ2 as the second excitation light, and light of wavelength λR as the reference light. As an example, the light source device 2 can selectively switch among the light of wavelength λ1 as the first excitation light, the light of wavelength λ2 as the second excitation light, and the light of wavelength λR as the reference light, based on a signal from the control device 30. As an example, the light source device 2 can emit the light of wavelength λ1 as the first excitation light, the light of wavelength λ2 as the second excitation light, and the light of wavelength λR as the reference light together. The wavelength λ1, the wavelength λ2, and the wavelength λR are different from one another. Note that the light source device 2 may emit third excitation light corresponding to third fluorescence, and may further emit fourth excitation light corresponding to fourth fluorescence.
 As an example, a support member 60 on which a plurality of irradiated objects 1 are arrayed is placed on the stage 4. As an example, the support member 60 is a plate. As an example, the support member 60 is a plate-like member. As an example, the irradiated object 1 is a biochip. A biochip is sometimes called a microarray, a microarray chip, a biomolecule array, a biosensor, or the like. FIG. 2 shows an example of the support member 60 and the irradiated objects 1. As an example, the support member 60 is a glass plate. A plurality of biochips are fixed on the glass plate in a matrix. As an example, each biochip is bonded to the glass plate with an adhesive. As an example, the biochip has a plurality of spots. As an example, the biochip has a plurality of spots arranged in a matrix. As an example, a biomolecule (probe) is immobilized on each spot. As an example, mutually different biomolecules are immobilized on the plurality of spots of the biochip. Each spot is assigned an address so that the spot can be identified. The address information is stored, for example, in the storage device of the control device 30.
 The irradiated object 1 emits fluorescence when irradiated with excitation light. As an example, the irradiated object 1 generates fluorescence when a fluorescent dye bound to a biomolecule of the irradiated object 1 is irradiated with excitation light. As an example, the irradiated object 1 generates the first fluorescence (wavelength: λ1′) when irradiated with the first excitation light (wavelength: λ1). As an example, the irradiated object 1 generates the second fluorescence (wavelength: λ2′) when irradiated with the second excitation light (wavelength: λ2). The wavelength λR of the reference light (reflected light) is not a wavelength that generates fluorescence from the irradiated object 1. As an example, the wavelength λR of the reference light is not a wavelength that excites the fluorescent dyes bound to the biomolecules of the irradiated object 1. As an example, the wavelength λR of the reference light differs from both the wavelength λ1′ of the first fluorescence and the wavelength λ2′ of the second fluorescence. As an example, the wavelength λR of the reference light is preferably a wavelength between the wavelength λ1′ of the first fluorescence and the wavelength λ2′ of the second fluorescence. The wavelength λR of the reference light is preferably close to both the first fluorescence wavelength and the second fluorescence wavelength, yet does not overlap them, so that it does not affect the first fluorescence and the second fluorescence.
 The stage 4 supports the support member 60. As an example, the stage 4 is movable while supporting the support member 60. As an example, the stage 4 is movable in each of the X-axis, Y-axis, and Z-axis directions while supporting the support member 60. The support member 60 is supported on the stage 4 so that the surface of the irradiated object 1 (the surface on which the biomolecules are immobilized) faces the first objective lens 16 of the optical system 3. The stage 4 and the control device 30 are connected by a control line 51. The control device 30 can move the stage 4 supporting the support member 60 in each of the X-axis, Y-axis, and Z-axis directions. As an example, when all the irradiated objects 1 on the support member 60 on the stage 4 have been measured, the measurement apparatus 10 replaces the support member 60 with the next one and continues measuring sequentially. As an example, the stage 4 may also be configured as a stage that can rotate in the θX, θY, and θZ directions while supporting the support member 60.
 The optical system 3 includes an irradiation optical system 6, an imaging optical system 7, and a microlens array (MLA) 8. The irradiation optical system 6 includes components for irradiating the irradiated object 1 with the light emitted from the light source device 2. As an example, the irradiation optical system 6 includes a first lens 11, an aperture stop (AS) 12, a field stop (FS) 13, a second lens 14, a filter block 15, and a first objective lens 16. The light emitted from the light source device 2 passes through the first lens 11, the aperture stop 12, the field stop 13, and the second lens 14, and enters the filter block 15.
 FIG. 3 is a schematic diagram showing an example of the filter block. The filter block 15 includes a first filter (first wavelength selection unit) 17 on which the light from the light source device 2 is incident, a dichroic mirror 18 on which the light having passed through the first filter 17 is incident, and a second filter (second wavelength selection unit) 19 on which the light from the dichroic mirror 18 is incident. As an example, the filter block 15 is a fluorescence filter block in which an excitation filter, a dichroic mirror, and an absorption filter are integrated. A fluorescence filter block is sometimes called a fluorescence cube, a fluorescence mirror unit, or a fluorescence filter set.
 The irradiation optical system 6 may include a second filter block (not shown) separate from the filter block 15. As an example, the second filter block is a fluorescence filter block in which an excitation filter, a dichroic mirror, and an absorption filter are integrated. For example, the second filter block can be used when the light source device 2 emits a third excitation light (light in a third wavelength band) and a fourth excitation light (light in a fourth wavelength band). The dichroic mirror of the second filter block has spectral characteristics similar to those of the filter block 15 at least for the reference light of wavelength λR. As an example, the dichroic mirror of the second filter block has a predetermined transmittance (for example, between 35% and 65%) at least for light of the wavelength λR of the reference light and the reflected light. The filter block 15 and the second filter block are switched by a switching unit such as a turret. Using this switching unit, one of the filter block 15 and the second filter block can be placed at the position where the light emitted from the light source device 2 is incident (the optical path between the light source device 2 and the first objective lens 16).
 The first filter 17 is a wavelength selection optical element. As an example, the first filter 17 is a wavelength selection optical element that cuts light in some wavelength regions out of the light from the light source device 2 and extracts the first excitation light, the second excitation light, and the reference light. As an example, the first filter 17 has optical characteristics such that a wavelength band including the wavelength λ1, a wavelength band including the wavelength λ2, and a wavelength band including the wavelength λR each have a transmittance of 75% to 100%. The light in the predetermined wavelength regions that has passed through the first filter 17 (the first excitation light, the second excitation light, and the reference light) is incident on the dichroic mirror 18, which is an optical element.
 The dichroic mirror 18 is a separation optical element that separates the excitation light from the fluorescence. As an example, the dichroic mirror 18 is a mirror that reflects the excitation light selected by the first filter 17 and transmits the fluorescence emitted from the irradiated object 1. The dichroic mirror 18 is arranged, for example, inclined at 45 degrees with respect to the optical axis.
 As an example, the dichroic mirror 18 reflects the excitation light, transmits the fluorescence, partially reflects the reference light, and partially transmits the reflected light. The dichroic mirror 18 reflects the first excitation light and the second excitation light, transmits the first fluorescence and the second fluorescence, partially reflects the reference light, and partially transmits the reflected light. As an example, the dichroic mirror 18 reflects light in a wavelength band including the wavelength λ1 of the first excitation light and light in a wavelength band including the wavelength λ2 of the second excitation light, transmits light in a wavelength band including the wavelength λ1′ of the first fluorescence and light in a wavelength band including the wavelength λ2′ of the second fluorescence, and partially reflects and partially transmits (for example, with a transmittance between 35% and 65%) light in a wavelength band including the wavelength λR of the reference light and the reflected light. The first excitation light, the second excitation light, and the reference light are reflected by the dichroic mirror 18 and guided to the first objective lens 16.
 The first objective lens 16 is an infinity-corrected objective lens and can face the surface of the irradiated object 1 supported on the stage 4. In the present embodiment, the first objective lens 16 is arranged on the +Z side (above) of the irradiated object 1. The first excitation light, the second excitation light, and the reference light are guided to the irradiated object 1 through the first objective lens 16, and the irradiated object 1 is illuminated by the first excitation light, the second excitation light, and the reference light.
 The first fluorescence, the second fluorescence, and the reflected light from the irradiated object 1 are incident on the second filter 19. The second filter 19 is a wavelength selection optical element. As an example, the second filter 19 transmits light in a specific wavelength band out of the light emitted from the irradiated object 1 and blocks light outside that wavelength band, for example by reflection or absorption. The second filter 19 selectively transmits the first fluorescence, the second fluorescence, and the reflected light. As an example, the second filter 19 is an absorption filter. An absorption filter is sometimes called an emission filter or a barrier filter.
 The light that has passed through the second filter 19 (the first fluorescence, the second fluorescence, and the reflected light) enters the imaging optical system 7. The imaging optical system 7 forms an image of the light flux from the irradiated object 1 (the first fluorescence, the second fluorescence, and the reflected light) in the vicinity of its focal plane (imaging plane). As an example, the imaging optical system 7 includes a second objective lens 21. As an example, the microlens array 8 and the sensor 5 are arranged in that order in the vicinity of the focal plane of the second objective lens 21.
 The microlens array 8 includes a plurality of microlenses arranged two-dimensionally. As an example, the microlens array 8 is arranged on the imaging plane of the second objective lens 21. The microlens array 8 may instead be arranged on the pupil plane of the second objective lens 21. The number of lenses of the microlens array 8 in the vertical direction and in the horizontal direction (the arrangement density) is set as appropriate according to the resolution required for the images acquired by the measurement apparatus 10.
 The sensor 5 is a light receiving unit in which light receiving elements (photodiodes) are arranged two-dimensionally. As an example, the sensor 5 is an image sensor. Examples of the image sensor include a CCD (charge coupled device) and a CMOS (complementary metal oxide semiconductor) sensor. The sensor 5 is sometimes called a photosensor array, an image sensor, or an area sensor. The sensor 5 receives the light from the irradiated object 1 through the microlens array 8 and outputs signals corresponding to the amount of received light to the control device 30.
 FIG. 4 is a schematic diagram showing the arrangement of the microlens array 8 and the sensor 5 in which the light receiving elements are arranged two-dimensionally. The sensor 5 has a pixel array that receives the light having passed through each microlens 81 (that is, an arrangement pattern corresponding to the microlenses 81). Here, a signal obtained from a light receiving element of the sensor 5 that receives light having passed through a microlens 81 is referred to as a "pixel signal".
 The control device 30 executes predetermined signal processing on the set of pixel signals acquired by the sensor 5. The contents of the signal processing will be described later. The control device 30 displays the result of the signal processing on the display device 40. As an example, the display device 40 can display image information of the irradiated object 1 acquired by the sensor 5.
 In the configuration using the microlens array 8 and the sensor 5, sets of pixel signals at a plurality of different focal positions can be obtained without performing focusing by an AF function for each fluorescence. As an example, the control device 30 can execute predetermined signal processing on the set of pixel signals acquired by the sensor 5 and generate a plurality of images with different focal positions. FIG. 5 shows an example in which a plurality of images with different focal positions are acquired using the microlens array 8 and the sensor 5. The vertical axis in FIG. 5 indicates the focal position Z. As an example, the control device 30 can extract a signal sequence of pixel signals at a common (that is, the same) focal position and generate an image at that focal position. FIG. 5 shows a plurality of images generated from the pixel signals at the respective focal positions Z1 to Z6.
 FIGS. 6 and 7 are diagrams for explaining the pixel signal extraction processing in the configuration using the microlens array 8 and the sensor 5 in which the light receiving elements are arranged two-dimensionally. Here, the case where the microlens array 8 is arranged on the imaging plane (Z = 0) of the second objective lens 21 is described.
 In FIG. 6, for ease of explanation, only the rays incident on five pixels (a, b, c, d, e) aligned in a straight line on the sensor 5 are shown (only the chief rays passing through the centers of the corresponding microlenses). In addition, each element in each figure is given a subscript (1, 2, 3, ...) indicating its coordinate in a plane perpendicular to the optical axis.
 In FIG. 6, the regions where the image of the irradiated object 1 formed by the second objective lens 21 (not shown) is incident on the microlens array 8 (Z = 0) are indicated by broken lines, and the centers of these regions are denoted by the coordinates X1, ..., X7. In this case, the rays r1, r2, r3, r4, and r5 from the second objective lens 21 converge to a single point on the imaging plane (the Z = 0 plane), that is, on the surface of a microlens.
 Since the regions indicated by the center coordinates X1, ..., X7 all undergo the same action, only the central region is described below. The rays emerging from the coordinate X4 become, through the lens action of the microlens ML4, the pixel signals a4, b4, c4, d4, and e4, each corresponding to a pixel on the surface of the sensor 5. Therefore, each image signal on the imaging plane (Z = 0) is the sum of these. That is, if the image signal is denoted by L, L can be obtained from equation (1) below.
 L(i) = (a_i + b_i + c_i + d_i + e_i)   ... (1)
 Here, as described above, i denotes the coordinate in the X-axis direction in a plane perpendicular to the optical axis (i = 1 to 7); for example, for the coordinate X4, L(4) = (a4 + b4 + c4 + d4 + e4). In practice, the problem must be treated two-dimensionally, so the Y-axis direction must be considered in addition to the X-axis direction. Here, only the X-axis direction is described for simplicity, but pixel signals can be extracted in the same manner in the Y-axis direction.
 FIG. 7, on the other hand, is a diagram for explaining the processing when the imaging plane is at the position Z = h1, displaced from the surfaces of the microlenses. The image signal at the coordinate X′4 of the central region can be obtained from equation (2) below.
 L(i) = (a_{i-2} + b_{i-1} + c_i + d_{i+1} + e_{i+2})   ... (2)
 Here, i = 1 to 7, and when a coordinate subscript falls outside the range 1 to 7, that value is not used. For the coordinate X′4, i = 4; therefore, as shown in FIG. 7, L(4) = (a2 + b3 + c4 + d5 + e6). In this way, the signal sequence of pixel signals at a specific focal position on the object plane is generated by collecting only specific pixel signals from the respective regions of the imaging surface of the sensor 5.
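 As an informal illustration of equations (1) and (2) (not part of the patent disclosure), the following Python sketch sums the sub-pixel signals recorded under each microlens along the X direction only, for the five-pixel example above; the array layout, the function name, and the shift parameter are assumptions made for the example.

import numpy as np

def refocus_1d(pixels, shift):
    # pixels: array of shape (n_lenses, 5) holding the sub-pixel signals
    #         (a_i, b_i, c_i, d_i, e_i) recorded under each microlens i.
    # shift = 0 reproduces equation (1) (image plane on the microlens surface);
    # shift = 1 reproduces equation (2) (image plane displaced to Z = h1).
    n_lenses, n_sub = pixels.shape
    out = np.zeros(n_lenses)
    for i in range(n_lenses):
        for k in range(n_sub):                 # k = 0..4 corresponds to a..e
            j = i + (k - n_sub // 2) * shift   # index of the lens the ray came from
            if 0 <= j < n_lenses:              # subscripts outside the range are not used
                out[i] += pixels[j, k]
    return out

# Seven microlenses (coordinates X1..X7), five sub-pixels each.
lf = np.arange(35, dtype=float).reshape(7, 5)
print(refocus_1d(lf, shift=0))  # equation (1): L(4) = a4 + b4 + c4 + d4 + e4
print(refocus_1d(lf, shift=1))  # equation (2): L(4) = a2 + b3 + c4 + d5 + e6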
 Next, a method of acquiring, in the configuration using the microlens array 8 and the sensor 5, the signal sequence of pixel signals corresponding to the focal point of the first fluorescence and the signal sequence of pixel signals corresponding to the focal point of the second fluorescence from the sets of pixel signals at a plurality of different focal positions will be described.
 FIG. 8 is a diagram for explaining the amount of chromatic aberration of each fluorescence. As described above, the wavelength of the first fluorescence (light of the first color) is λ1′, and the wavelength of the second fluorescence (light of the second color) is λ2′. The amount of chromatic aberration of the first fluorescence and the amount of chromatic aberration of the second fluorescence differ from each other. For example, let Z0 be the in-focus position of the image of the reflected light of wavelength λR. In this case, the focal point of the first fluorescence is shifted from Z0 by ΔZ1, and the focal point of the second fluorescence is shifted from Z0 by ΔZ2. Therefore, once Z0, the imaging position of the reflected light of wavelength λR, is known, the focal point of the first fluorescence can be obtained as Z0 + ΔZ1, and the focal point of the second fluorescence can be obtained as Z0 + ΔZ2.
 As an example, after the sets of pixel signals at a plurality of different focal positions have been acquired using the microlens array 8 and the sensor 5, the following processing can be executed. The control device 30 extracts, from the sets of pixel signals at the plurality of different focal positions, the pixel signals of the light receiving elements on which the light of the focal position Z0 + ΔZ1 is incident. The signal sequence of the pixel signals extracted here substantially corresponds to a signal sequence representing the fluorescence image at the focal point of the first fluorescence. The control device 30 also extracts, from the sets of pixel signals at the plurality of different focal positions, the pixel signals of the light receiving elements on which the light of the focal position Z0 + ΔZ2 is incident. The signal sequence of the pixel signals extracted here substantially corresponds to a signal sequence representing the fluorescence image at the focal point of the second fluorescence.
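 A minimal sketch of this extraction step, assuming the multi-focus pixel signals have already been converted into a stack of refocused images indexed by focal position (for example with the synthesis illustrated above); the function names and the nearest-plane selection are assumptions, since the text does not prescribe a particular implementation.

import numpy as np

def nearest_plane(z_positions, z_target):
    # Index of the available focal position closest to the requested one.
    return int(np.argmin(np.abs(np.asarray(z_positions) - z_target)))

def extract_in_focus(stack, z_positions, z0, dz1, dz2):
    # stack:       array (n_z, H, W), one refocused image per focal position
    # z_positions: focal positions at which the stack was synthesized
    # z0:          reference focal position obtained from the reflected light
    # dz1, dz2:    chromatic aberration offsets of the first and second fluorescence
    first = stack[nearest_plane(z_positions, z0 + dz1)]   # focal point of the first fluorescence
    second = stack[nearest_plane(z_positions, z0 + dz2)]  # focal point of the second fluorescence
    return first, second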
 The advantage of obtaining the focal point of the first fluorescence and the focal point of the second fluorescence with reference to the imaging position (focal point) Z0 of the reflected light is as follows. Fluorescence images may be dark or of low contrast depending on the specimen being measured. In such a situation, it may be difficult to obtain the signal sequence of pixel signals at the focal point of the first fluorescence and the signal sequence of pixel signals at the focal point of the second fluorescence directly. It is therefore preferable to obtain Z0 from the reflected-light image, in which sufficient contrast is obtained, and to use Z0 as the reference.
 FIG. 9 is a diagram for explaining the components of the control device 30 that realize the above-described processing. The control device 30 includes a signal sequence extraction unit 31, a signal sequence processing unit 32, a stage control unit 33, and a chromatic aberration information storage unit 34. As an example, the processing of the signal sequence extraction unit 31, the signal sequence processing unit 32, and the stage control unit 33 can be realized by program code of software that implements these functions. As an example, the processor of the control device 30 executes the processing described below in accordance with instructions of a predetermined program stored in the memory. Part of the processing of the signal sequence extraction unit 31, the signal sequence processing unit 32, and the stage control unit 33 may instead be realized by hardware using electronic components such as integrated circuits. The chromatic aberration information storage unit 34 may be realized by the storage device of the control device 30.
 The signal sequence extraction unit 31 receives pixel signals from the elements of the sensor 5 that receive the light having passed through the microlens array 8. The signal sequence extraction unit 31 obtains the reference focal position Z0 of the reflected light from the set of reflected-light pixel signals at the plurality of different focal positions. In the above description, Z0 was described as the in-focus position of the reflected-light image, but the reference focal position Z0 here is, for example, the focal position at which the contrast is maximized, and it may not coincide exactly with the in-focus position. The reference focal position obtained here is regarded as the in-focus position. When it does not coincide exactly with the in-focus position, as an example, the position closest to the contrast maximum is taken as Z0. As described above, since the reflected-light image has sufficient contrast, the focal position at which the contrast is maximized can be regarded as substantially the in-focus position. By using this reference focal position as the reference, the in-focus position of the first fluorescence and the in-focus position of the second fluorescence can be obtained.
 The signal sequence extraction unit 31 extracts, from the set of first pixel signals at the plurality of different focal positions, the signal sequence of pixel signals corresponding to the focal point of the first fluorescence. Here, the set of first pixel signals is the set of pixel signals acquired during the time period in which the first fluorescence would be emitted. Hereinafter, the expression "set of first pixel signals" has this same meaning. As an example, the signal sequence extraction unit 31 extracts the signal sequence of pixel signals corresponding to the focal point of the first fluorescence using the chromatic aberration amount ΔZ1 of the first fluorescence from the reference focal position Z0. The signal sequence of first-fluorescence pixel signals extracted here substantially corresponds to a signal sequence representing the fluorescence image at the focal point of the first fluorescence.
 The signal sequence extraction unit 31 extracts, from the set of second pixel signals at the plurality of different focal positions, the signal sequence of pixel signals corresponding to the focal point of the second fluorescence. Here, the set of second pixel signals is the set of pixel signals acquired during the time period in which the second fluorescence would be emitted. Hereinafter, the expression "set of second pixel signals" has this same meaning. As an example, the signal sequence extraction unit 31 extracts the signal sequence of pixel signals corresponding to the focal point of the second fluorescence using the chromatic aberration amount ΔZ2 of the second fluorescence from the reference focal position Z0. The signal sequence of second-fluorescence pixel signals extracted here substantially corresponds to a signal sequence representing the fluorescence image at the focal point of the second fluorescence.
 The signal sequence processing unit 32 executes first signal processing on the signal sequence of pixel signals corresponding to the focal point of the first fluorescence, and executes second signal processing on the signal sequence of pixel signals corresponding to the focal point of the second fluorescence. As an example, the first signal processing is processing for generating a first image from the signal sequence of pixel signals corresponding to the focal point of the first fluorescence. As an example, the second signal processing is processing for generating a second image from the signal sequence of pixel signals corresponding to the focal point of the second fluorescence. The control device 30 may output the first image and the second image to the display device 40.
 As another example, the first signal processing may be processing for outputting information representing the signal values of the first-fluorescence pixel signals and position information of the irradiated object 1 corresponding to those pixel signals. As an example, the information representing the signal value of a pixel signal is a luminance value. As an example, the position information of the irradiated object 1 is a spot address. As an example, the signal sequence processing unit 32 extracts pixel signals whose luminance values exceed a predetermined threshold. When a luminance value exceeds the predetermined threshold, it can be considered that the first fluorescence has been generated. The signal sequence processing unit 32 associates that luminance value with the address, in the irradiated object 1, of the spot corresponding to that luminance value. The signal sequence processing unit 32 may output the associated result to the display device 40. This allows the user to judge at which position of the irradiated object 1 fluorescence is being emitted. Similarly, the second signal processing may be processing for outputting information representing the signal values of the second-fluorescence pixel signals and position information of the irradiated object 1 corresponding to those pixel signals.
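 A hedged sketch of this spot-level output, assuming a precomputed mapping from spot addresses to image regions; the mapping, the names, and the threshold handling are illustrative assumptions, not taken from the text.

import numpy as np

def detect_fluorescent_spots(image, spot_regions, threshold):
    # image:        2-D array holding the in-focus fluorescence signal sequence
    # spot_regions: mapping from a spot address to the (row slice, column slice)
    #               region that the spot occupies in the image (assumed mapping)
    # threshold:    luminance above which fluorescence is assumed to be present
    results = []
    for address, (rows, cols) in spot_regions.items():
        luminance = float(image[rows, cols].mean())
        if luminance > threshold:
            results.append((address, luminance))
    return results

# Tiny synthetic example; the addresses and geometry are illustrative only.
img = np.zeros((4, 8))
img[:, 4:8] = 150.0                 # pretend the right-hand spot fluoresces
spots = {"A1": (slice(0, 4), slice(0, 4)), "A2": (slice(0, 4), slice(4, 8))}
print(detect_fluorescent_spots(img, spots, threshold=100.0))   # [('A2', 150.0)]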
 The stage control unit 33 determines whether the signal value of any pixel signal in the set of reflected-light pixel signals is larger than a predetermined threshold. As an example, the signal value used for the determination is a luminance value. As an example, the threshold may be set in advance on the basis of the depth of focus or the noise level. As another example, the stage control unit 33 may calculate a contrast from the set of reflected-light pixel signals and determine whether the contrast value is larger than a predetermined threshold. The above determination may also be performed using the pixel signals of the set of first pixel signals and the set of second pixel signals.
 One of the advantages of the measurement apparatus 10 described here is that, when measuring the same irradiated object 1, it is not necessary to drive the optical system of the measurement apparatus 10 (for example, a focusing lens) for focus adjustment between the acquisition of pixel signals under irradiation with the first excitation light and the acquisition of pixel signals under irradiation with the second excitation light. Another advantage of the measurement apparatus 10 is that, when measuring the same irradiated object 1, it is not necessary to drive the stage 4 for focus adjustment between the acquisition of pixel signals under irradiation with the first excitation light and the acquisition of pixel signals under irradiation with the second excitation light. When the focal position is largely off, once the height of the stage 4 has been adjusted (in this case, exact focusing is not necessarily required), it is not necessary to drive the lenses of the optical system or the stage 4 for focus adjustment between the acquisition of pixel signals under irradiation with the first excitation light and the acquisition of pixel signals under irradiation with the second excitation light.
 When the focal position is largely off, it is possible that a luminance value (or contrast) larger than a certain threshold cannot be obtained from any of the pixel signals of the reflected light, the first fluorescence, and the second fluorescence. As an example, the stage control unit 33 may extract arbitrary pixel signals from the sets of pixel signals of the reflected light, the first fluorescence, and the second fluorescence, and determine whether any of those pixel signals has a luminance value larger than a predetermined threshold. When there is no signal with a luminance value larger than the predetermined threshold, the stage control unit 33 may change or adjust the stage 4 in the height direction (Z direction). As an example, the stage control unit 33 may extract a predetermined number of pixel signals from the set of reflected-light pixel signals and determine whether a luminance value larger than the predetermined threshold exists among them. By preferentially handling the set of reflected-light pixel signals, which are likely to provide sufficient contrast, it can be determined whether the focal position is largely off.
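 A minimal sketch of this check, assuming the reflected-light pixel signals are available as an array; the threshold value and the stage interface in the commented usage are assumptions.

import numpy as np

def stage_needs_adjustment(reflected_pixels, threshold):
    # True when no reflected-light pixel signal exceeds the threshold, i.e. the
    # focal position is assumed to be far off and the stage height (Z direction)
    # should be changed or adjusted before repeating the acquisition.
    return not bool(np.any(np.asarray(reflected_pixels) > threshold))

# Hypothetical usage (the stage object, its method, and the step size are assumptions):
# if stage_needs_adjustment(reflected_set, threshold=3.0 * noise_level):
#     stage.move_z(preset_step)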
 The chromatic aberration information storage unit 34 holds information on the amount of chromatic aberration of each fluorescence (hereinafter referred to as "chromatic aberration information"). This makes it possible to acquire the signal sequence at the in-focus position of each fluorescence according to the amount of chromatic aberration of that fluorescence. As an example, the chromatic aberration information includes at least the chromatic aberration amount ΔZ1 of the first fluorescence from the in-focus position of the reflected light and the chromatic aberration amount ΔZ2 of the second fluorescence from the in-focus position of the reflected light. As an example, the chromatic aberration information is information in which information on the wavelength of an excitation light is associated with information on the amount of chromatic aberration of the fluorescence generated by that excitation light. Thus, when the control device 30 controls the light source device 2 to emit an arbitrary excitation light, the control device 30 can acquire information on the amount of chromatic aberration of the fluorescence corresponding to that excitation light. As an example, the chromatic aberration information may include information representing the relationship between the amounts of chromatic aberration of the first fluorescence and the second fluorescence. Using this relationship, the control device 30 may obtain the relationship between the in-focus position of the first fluorescence and the in-focus position of the second fluorescence. In that case, if one of the in-focus positions of the first fluorescence and the second fluorescence is known, the other in-focus position can be obtained without obtaining the reference position of the reflected light.
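 One possible, purely illustrative layout for such chromatic aberration information is a small table keyed by excitation light; the numeric offsets below are placeholders and not values from the text.

# Placeholder offsets (units arbitrary); real values would come from the optical
# design or from a calibration measurement, as described in the text.
CHROMATIC_ABERRATION = {
    "first_excitation": 1.5,    # delta Z1: offset of the first fluorescence from Z0
    "second_excitation": -1.0,  # delta Z2: offset of the second fluorescence from Z0
}

def in_focus_position(z0, excitation):
    # Focal position of the fluorescence excited by the given excitation light,
    # derived from the reflected-light reference position Z0 and the stored offset.
    return z0 + CHROMATIC_ABERRATION[excitation]

print(in_focus_position(10.0, "first_excitation"))   # 11.5
print(in_focus_position(10.0, "second_excitation"))  # 9.0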
 The chromatic aberration information in the chromatic aberration information storage unit 34 can be set in advance on the basis of the design of the optical system 3 (lenses and the like). As another example, the amount of chromatic aberration of each fluorescence may be measured at the time of the first measurement by the measurement apparatus 10, and the measurement results may be stored in the chromatic aberration information storage unit 34 as the chromatic aberration information.
 FIG. 10 is an example of a flowchart for measuring the irradiated objects 1 on the support member 60 with the measurement apparatus 10. First, the support member 60 is prepared (1001). As an example, the support member 60 (for example, a plate) is placed on the stage 4. Next, the sets of pixel signals at multiple focal positions are acquired (1002). Next, signal processing is executed on the sets of pixel signals at the multiple focal positions (1003). Next, it is determined whether all the irradiated objects 1 on the support member 60 have been measured (1005). If the measurement has not been completed, the measurement moves to the next irradiated object 1 (1004); that is, the stage 4 is driven and the measurement target is changed to the next irradiated object 1. After the measurement target has been changed to the next irradiated object 1, the process returns to step 1002.
 FIG. 11 is an example of the detailed contents of the flowchart of FIG. 10. In the following, processing described with the control device 30 or a component of the control device 30 as the subject may equally be described with the processor as the subject.
 In the example of FIG. 11, the irradiated object 1 (for example, one of the plurality of irradiated objects 1 supported by the support member 60) is irradiated with three lights: the first excitation light, the second excitation light, and the reference light. In the example of FIG. 11, the light source device 2 emits the first excitation light, the reference light, and the second excitation light in this order. In the example of FIG. 11, the period in which the light source device 2 emits the first excitation light, the period in which it emits the reference light, and the period in which it emits the second excitation light do not overlap. Accordingly, the irradiated object 1 is irradiated with the first excitation light, the reference light, and the second excitation light in this order. The period in which the light source device 2 emits the reference light lies between the period in which it emits the first excitation light and the period in which it emits the second excitation light. The reason for irradiating the reference light in the time between the first excitation light and the second excitation light is as follows. The information acquisition time for fluorescence (the charge accumulation time of the sensor 5 for the fluorescence emitted by the irradiated object 1) can be longer than the information acquisition time for the reflected light (the charge accumulation time of the sensor 5 for the reflected light). For example, if the irradiated object 1 were irradiated in the order of the reference light, the first excitation light, and the second excitation light, the time from acquiring the reflected-light information (the set of pixel signals) to acquiring the second-fluorescence information would become long. During this time, the irradiated object 1 may shift in the Z direction for some reason, and a shift in the Z direction causes a shift of the reference focal position Z0 used as the reference. To suppress a shift of the reference focal position Z0, the reference light is preferably irradiated close in time to each of the first excitation light and the second excitation light; it is therefore preferable to irradiate the reference light in the time between the first excitation light and the second excitation light. The emission order of the first excitation light, the second excitation light, and the reference light is not limited to this, however, and may be any order.
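 A hedged sketch of this acquisition order (first excitation light, reference light, second excitation light); the light-source and sensor interfaces are hypothetical stand-ins, since the text describes the sequence only at the level of operations.

def acquire_one_target(light_source, sensor):
    # Acquire the three multi-focus pixel-signal sets for one irradiated object in
    # the order discussed above: first excitation light, reference light, second
    # excitation light. The light_source and sensor objects and their methods are
    # hypothetical interfaces, not part of the document.
    pixel_sets = {}
    for name in ("first_excitation", "reference", "second_excitation"):
        light_source.select(name)             # switch the emitted wavelength
        sensor.start_accumulation()           # charge accumulation start instruction
        sensor.stop_accumulation()            # charge accumulation end instruction
        pixel_sets[name] = sensor.read_out()  # readout: one imaging operation per light
    return pixel_sets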
 The relationship between FIG. 10 and FIG. 11 is as follows. Step 1002 in FIG. 10 consists of steps 1101 to 1108, and step 1003 in FIG. 10 consists of steps 1109 to 1112.
 First, the support member 60 is prepared (1001). As an example, the support member 60 is placed on the stage 4. Next, the light source device 2 irradiates the irradiated object 1 with the first excitation light on the basis of a signal from the control device 30 (1101). The signal sequence extraction unit 31 acquires the set of first pixel signals at the plurality of different focal positions from the sensor 5 (1102). Here, the set of pixel signals acquired during the time period in which the first fluorescence would be emitted from the irradiated object 1 as a result of irradiation with the first excitation light is acquired. The set of first pixel signals acquired here may or may not contain first-fluorescence information. The set of first pixel signals is the set of pixel signals obtained by performing one imaging operation consisting of an accumulation operation, in which each light receiving element of the sensor 5 accumulates charge from a charge accumulation start instruction to a charge accumulation end instruction (that is, each light receiving element of the sensor 5 starts charge accumulation in response to the charge accumulation start instruction and ends charge accumulation in response to the charge accumulation end instruction), and a readout operation, in which the charge is read out (output) from each light receiving element of the sensor 5 that has performed the accumulation operation (after the accumulation operation).
 The light source device 2 irradiates the irradiated object 1 with the reference light on the basis of a signal from the control device 30 (1103). The irradiated object 1 irradiated with the reference light here is the same irradiated object 1 that was irradiated with the first excitation light in step 1101. The signal sequence extraction unit 31 of the control device 30 acquires the set of reflected-light pixel signals at the plurality of different focal positions from the sensor 5 (1104). The set of reflected-light pixel signals is the set of pixel signals obtained by performing one imaging operation consisting of an accumulation operation, in which each light receiving element of the sensor 5 accumulates charge from a charge accumulation start instruction to a charge accumulation end instruction (that is, each light receiving element of the sensor 5 starts charge accumulation in response to the charge accumulation start instruction and ends charge accumulation in response to the charge accumulation end instruction), and a readout operation, in which the charge is read out (output) from each light receiving element of the sensor 5 that has performed the accumulation operation (after the accumulation operation). The imaging operation for obtaining the set of first pixel signals and the imaging operation for obtaining the set of reflected-light pixel signals are performed separately. The period in which the imaging operation for obtaining the set of reflected-light pixel signals is performed does not overlap the period in which the imaging operation for obtaining the set of first pixel signals is performed.
 The light source device 2 irradiates the irradiated object 1 with the second excitation light on the basis of a signal from the control device 30 (1105). The signal sequence extraction unit 31 acquires the set of second pixel signals at the plurality of different focal positions from the sensor 5 (1106). Here, the set of pixel signals acquired during the time period in which the second fluorescence would be emitted after irradiation of the irradiated object 1 with the second excitation light is acquired. The set of second pixel signals acquired here may or may not contain second-fluorescence information. The set of second pixel signals is the set of pixel signals obtained by performing one imaging operation consisting of an accumulation operation, in which each light receiving element of the sensor 5 accumulates charge from a charge accumulation start instruction to a charge accumulation end instruction (that is, each light receiving element of the sensor 5 starts charge accumulation in response to the charge accumulation start instruction and ends charge accumulation in response to the charge accumulation end instruction), and a readout operation, in which the charge is read out (output) from each light receiving element of the sensor 5 that has performed the accumulation operation (after the accumulation operation). The imaging operation for obtaining the set of first pixel signals, the imaging operation for obtaining the set of reflected-light pixel signals, and the imaging operation for obtaining the set of second pixel signals are each performed separately. The period in which the imaging operation for obtaining the set of second pixel signals is performed overlaps neither the period in which the imaging operation for obtaining the set of first pixel signals is performed nor the period in which the imaging operation for obtaining the set of reflected-light pixel signals is performed.
 The stage control unit 33 extracts arbitrary pixel signals from the set of reflected-light pixel signals and determines whether any of them has a signal value larger than a predetermined threshold (1107). As an example, the signal value is a luminance value. As an example, the threshold may be set in advance on the basis of the depth of focus or the noise level. As another example, the stage control unit 33 may calculate a contrast from the set of reflected-light pixel signals and determine whether the contrast value is larger than a predetermined threshold. Instead of the reflected light, the stage control unit 33 may extract arbitrary pixel signals from the set of pixel signals of the first fluorescence or the second fluorescence and determine whether any of them has a signal value larger than a predetermined threshold. When the first fluorescence or the second fluorescence is emitted from the irradiated object 1, the luminance value of the first fluorescence or the second fluorescence can be compared with the predetermined threshold.
 If the above condition is satisfied, the process proceeds to step 1109. If the condition is not satisfied, it is determined that the focal position is significantly off, and the process proceeds to step 1108. In this case, the stage control unit 33 drives the stage 4 (1108). The stage control unit 33 changes or adjusts the height (Z direction) of the stage 4. As an example, the stage control unit 33 changes or adjusts the height of the stage 4 by a predetermined amount. As another example, the stage control unit 33 changes or adjusts the height of the stage 4 by the chromatic aberration amount ΔZ1 of the first fluorescence or the chromatic aberration amount ΔZ2 of the second fluorescence. After the height of the stage 4 has been changed or adjusted, the process returns to step 1101.
 The signal sequence extraction unit 31 determines the reference focal position Z0 from the set of reflected-light pixel signals, which contains a set of pixel signals for each of the plurality of different focal positions (1109). FIG. 12 shows an example of a method for determining the reference focal position Z0. As an example, the signal sequence extraction unit 31 generates an image from the signal sequence of pixel signals at each common focal position and holds a plurality of images at the plurality of focal positions. The signal sequence extraction unit 31 executes the following processing using the information of the plurality of images. It is not strictly necessary to handle the data as images; the processing described below may also be performed on the sets of pixel signals as they are.
 As an example, the signal sequence extraction unit 31 divides the image at each focal position Z-3 to Z3 into a plurality of regions. As an example, the signal sequence extraction unit 31 calculates the contrast of the region corresponding to the same position in the images at the respective focal positions Z-3 to Z3. As an example, the signal sequence extraction unit 31 calculates the contrast of the central region of the image at each focal position Z-3 to Z3 (the shaded portion in FIG. 12). The region for which the contrast is calculated may be a different region, or may be all regions of the image.
FIG. 13 shows an example of the result of calculating the contrast from the set of reflected-light pixel signals, namely the contrast of the central region of the image at each focal position Z-3 to Z3. The signal sequence extraction unit 31 determines the position among Z-3 to Z3 at which the contrast is maximum as the reference focal position Z0. When the maximum of the approximate curve in FIG. 13 falls between two Z positions, the closer of the two may be chosen as the reference focal position Z0.
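Selecting the reference focal position Z0 then amounts to taking the focal position whose chosen region shows the largest contrast, as in FIG. 13. The following is a minimal sketch, again assuming a NumPy stack indexed by focal position and using the variance of the central region as the contrast measure; the focal labels and array sizes are illustrative.

```python
import numpy as np

def reference_focus_index(stack):
    """Return the index of the focal position whose central region has maximum contrast.

    stack: ndarray of shape (n_focus, height, width), one reflected-light image per
           focal position Z-3 ... Z3. Contrast is taken here as the variance of the
           central quarter of each image (an illustrative choice of contrast measure).
    """
    n, h, w = stack.shape
    center = stack[:, h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    contrasts = center.reshape(n, -1).var(axis=1)
    return int(np.argmax(contrasts))

# Example usage with focal positions Z-3 ... Z3 mapped to stack indices 0 ... 6.
focal_labels = [f"Z{k}" for k in range(-3, 4)]
rng = np.random.default_rng(1)
stack = rng.uniform(0.0, 1.0, size=(7, 64, 64))
print("reference focal position:", focal_labels[reference_focus_index(stack)])
```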
Next, the signal sequence extraction unit 31 acquires the chromatic aberration amount ΔZ1 of the first fluorescence from the chromatic aberration information storage unit 34. Using the chromatic aberration amount ΔZ1 of the first fluorescence relative to the reference focal position Z0, the signal sequence extraction unit 31 extracts the signal sequence of pixel signals corresponding to the in-focus position of the first fluorescence (1110).
FIG. 14 shows an example of acquiring the signal sequence of pixel signals corresponding to the in-focus position of the first fluorescence from the set of first pixel signals at a plurality of different focal points Z-3 to Z3. As an example, the signal sequence extraction unit 31 generates an image from the signal sequence of pixel signals at a common focal position and holds a plurality of images for the plurality of focal positions. The signal sequence extraction unit 31 extracts, from the images at the focal positions Z-3 to Z3, the image at the in-focus position of the first fluorescence. As an example, using the chromatic aberration amount ΔZ1 of the first fluorescence relative to the reference focal position Z0, the signal sequence extraction unit 31 extracts the image at the focal position Z2, which corresponds to the in-focus position of the first fluorescence. The data need not be handled as images; the signal sequence extraction unit 31 may instead extract the signal sequence corresponding to the focal position Z2 directly from the set of pixel signals.
Next, the signal sequence extraction unit 31 acquires the chromatic aberration amount ΔZ2 of the second fluorescence from the chromatic aberration information storage unit 34. Using the chromatic aberration amount ΔZ2 of the second fluorescence relative to the reference focal position Z0, the signal sequence extraction unit 31 extracts the signal sequence of pixel signals corresponding to the in-focus position of the second fluorescence (1111).
FIG. 15 shows an example of acquiring the signal sequence of pixel signals corresponding to the in-focus position of the second fluorescence from the set of second pixel signals at a plurality of different focal points Z-3 to Z3. As an example, the signal sequence extraction unit 31 generates an image from the signal sequence of pixel signals at a common focal position and holds a plurality of images for the plurality of focal positions. The signal sequence extraction unit 31 extracts, from the images at the focal positions Z-3 to Z3, the image at the in-focus position of the second fluorescence. As an example, using the chromatic aberration amount ΔZ2 of the second fluorescence relative to the reference focal position Z0, the signal sequence extraction unit 31 extracts the image at the focal position Z-2, which corresponds to the in-focus position of the second fluorescence. The data need not be handled as images; the signal sequence extraction unit 31 may instead extract the signal sequence corresponding to the focal position Z-2 directly from the set of pixel signals.
In the examples of FIGS. 14 and 15, the position given by the reference focal position Z0 plus the chromatic aberration amount ΔZ1 or ΔZ2 may not coincide with any of the different focal points Z-3 to Z3. For example, assume that Z0 + ΔZ1 corresponds to Z2.4. In this case, none of the acquired pixel signals corresponds to Z2.4. In such a case, as one example, because Z2.4 is closer to Z2 than to Z3, the signal sequence corresponding to the closer position Z2 may be extracted. As another example, a signal sequence corresponding to Z0 + ΔZ1 (= Z2.4) may be generated from the information at the two focal positions by a weighting calculation. For example, the signal sequence extraction unit 31 may multiply the signal value of the pixel signal at Z2 and the signal value of the pixel signal at Z3 by respective weighting coefficients to generate a signal value corresponding to Z2.4. As an example, a weighted average may be used. The weighting coefficients applied to the signal values at Z2 and Z3 may be set equal, or may be set differently according to the distance from the reference focal position Z0.
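The weighting calculation for a target position such as Z2.4 can be sketched as a weighted average of the two neighbouring planes. In this minimal sketch the weight is taken from the fractional position between the two acquired planes, which is one plausible reading of the distance-dependent weighting mentioned above; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def interpolate_plane(stack, z_labels, z_target):
    """Synthesize the pixel-signal plane at a fractional focal position by a weighted
    average of the two neighbouring acquired planes.

    stack: ndarray (n_focus, height, width); z_labels: acquired focal positions, e.g. -3..3.
    z_target: e.g. 2.4 for Z0 + dZ1 = Z2.4. The weight of each neighbour grows with its
    proximity to the target, so the closer plane contributes more.
    """
    z_labels = np.asarray(z_labels, dtype=float)
    lo = int(np.searchsorted(z_labels, z_target)) - 1   # index of the plane just below
    hi = lo + 1                                          # index of the plane just above
    w_hi = (z_target - z_labels[lo]) / (z_labels[hi] - z_labels[lo])
    return (1.0 - w_hi) * stack[lo] + w_hi * stack[hi]

# Planes at Z-3 ... Z3; synthesize the plane at Z2.4.
rng = np.random.default_rng(2)
stack = rng.uniform(0.0, 1.0, size=(7, 64, 64))
plane_2_4 = interpolate_plane(stack, range(-3, 4), 2.4)
print(plane_2_4.shape)
```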
The examples of FIGS. 14 and 15 have been described for the case where the in-focus position of the reflected light lies between the in-focus positions of the first fluorescence and the second fluorescence, but the relationship among the reflected light, the first fluorescence, and the second fluorescence is not limited to this example. As an example, the in-focus position of the reflected light may be on the positive side (+Z side) of the first fluorescence. As another example, the in-focus position of the reflected light may be on the negative side (-Z side) of the second fluorescence. It suffices that the chromatic aberration information in the chromatic aberration information storage unit 34 includes information specifying the relationship among the reflected light, the first fluorescence, and the second fluorescence.
Next, the signal sequence processing unit 32 performs the first signal processing on the extracted signal sequence of pixel signals corresponding to the in-focus position of the first fluorescence, and performs the second signal processing on the extracted signal sequence of pixel signals corresponding to the in-focus position of the second fluorescence (1112). The signal sequence processing unit 32 generates the first result from the extracted signal sequence corresponding to the in-focus position of the first fluorescence, and generates the second result from the extracted signal sequence corresponding to the in-focus position of the second fluorescence. As an example, the first result is the image corresponding to the in-focus position of the first fluorescence (see FIG. 14), and the second result is the image corresponding to the in-focus position of the second fluorescence (see FIG. 15). When images corresponding to the in-focus position of each fluorescence have been extracted in steps 1110 and 1111, the signal sequence processing unit 32 may output the extracted image corresponding to the in-focus position of the first fluorescence (see FIG. 14) to the display device 40 as it is, and may output the extracted image corresponding to the in-focus position of the second fluorescence (see FIG. 15) to the display device 40 as it is. When the processing of steps 1110 and 1111 has been performed directly on the sets of pixel signals, the signal sequence processing unit 32 may generate a first image from the extracted signal sequence corresponding to the in-focus position of the first fluorescence and a second image from the extracted signal sequence corresponding to the in-focus position of the second fluorescence. As another example, the first result includes, for the extracted signal sequence corresponding to the in-focus position of the first fluorescence, information representing the signal values of the first-fluorescence pixel signals and position information of the irradiated object 1 corresponding to those pixel signals. The second result includes, for the extracted signal sequence corresponding to the in-focus position of the second fluorescence, information representing the signal values of the second-fluorescence pixel signals and position information of the irradiated object 1 corresponding to those pixel signals. As an example, the information representing a signal value is a luminance value, and the position information of the irradiated object 1 is the address of a spot. The first result and the second result can also be said to represent the result of causing the irradiated object 1 to react with the sample, and the assay result can be said to include the first result and the second result.
The first result and the second result are stored in a storage device. The storage device may be a storage device inside the measurement apparatus 10 (for example, inside the control device 30) or a storage device outside the measurement apparatus 10 (an external apparatus or external device). Examples of storage devices outside the measurement apparatus 10 include a server, a printer, and a portable information terminal. In this case, the first result and the second result generated by the signal sequence processing unit 32 are output by the control device 30 to the external apparatus. Communication between the measurement apparatus 10 and the external apparatus may be wired or wireless. The first result and the second result output by the control device 30 are stored in the external apparatus. When the external apparatus is a printer, the first result and the second result may be configured to be printed on paper (for example, automatically). When the external apparatus is a portable information terminal, the first result and the second result may be configured to be displayed (for example, automatically) on the display unit of the portable information terminal.
Further steps may be added to the flowchart of FIG. 11. A reference light irradiation period may be set before and after the irradiation period of the first excitation light, and a reference light irradiation period may be set before and after the irradiation period of the second excitation light. As an example, steps 1103 and 1104 may be added before step 1101, and steps 1103 and 1104 may be added after step 1106. In this case, a set of multifocal reflected-light signals (a first set) is acquired before the irradiation of the first excitation light, a second set is acquired after the irradiation of the first excitation light and before the irradiation of the second excitation light, and a set of reflected-light pixel signals at the plurality of focal positions (a third set) is acquired after the irradiation of the second excitation light. If the irradiated object 1 is displaced in the Z direction for some reason, the displacement can be detected using the first to third sets. As an example, the signal sequence extraction unit 31 may compare the reference focal position Z0A obtained from the first set, the reference focal position Z0B obtained from the second set, and the reference focal position Z0C obtained from the third set. As an example, the signal sequence extraction unit 31 may determine whether the reference focal positions Z0A to Z0C all have the same value. When the reference focal positions Z0A to Z0C differ, the signal sequence extraction unit 31 may output which of the reference focal positions Z0A to Z0C differs. The user can thereby identify at which point the displacement in the Z direction occurred.
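The comparison of the reference focal positions Z0A to Z0C can be sketched as follows; the labels, the return format, and the use of the first measurement as the baseline are illustrative assumptions.

```python
def report_z_drift(z0_a, z0_b, z0_c):
    """Compare the reference focal positions obtained before, between, and after the
    two excitation exposures and report where a Z displacement appears.

    Returns None when all three agree; otherwise a short message naming the
    measurement(s) that differ from the first one.
    """
    refs = {"Z0A (before 1st excitation)": z0_a,
            "Z0B (between excitations)": z0_b,
            "Z0C (after 2nd excitation)": z0_c}
    if len(set(refs.values())) == 1:
        return None
    drifted = [name for name, z in refs.items() if z != z0_a]
    return "Z displacement detected at: " + ", ".join(drifted)

print(report_z_drift(0, 0, 0))   # None: no drift
print(report_z_drift(0, 0, 1))   # drift appeared after the second-excitation exposure
```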
In some cases, ΔZ1 = ΔZ2. According to the above embodiment, because the first excitation light and the second excitation light are applied at different times, the signal sequence corresponding to the in-focus position of the first fluorescence and the signal sequence corresponding to the in-focus position of the second fluorescence can be acquired even when ΔZ1 = ΔZ2.
Another flowchart for measuring the irradiated object 1 will be described with reference to FIGS. 16 and 17. Step 1002 in FIGS. 16 and 17 consists of steps 1101 to 1108 in FIG. 11, and step 1003 in FIGS. 16 and 17 consists of steps 1109 to 1112 in FIG. 11.
In the example of FIG. 16, first, the support member 60 (for example, a plate) is placed on the stage 4 (1001). Next, a set of signals (a first set) of the irradiated object 1 to be measured is acquired (1002). Then, while the signal processing (1003) for the first set is being executed, the movement to the next irradiated object 1 (1004) and the acquisition of a set of signals (a second set) of the next irradiated object 1 (1002) are carried out. The operation of moving the measurement target to the next irradiated object 1 is performed by driving the stage 4. In FIG. 11, the movement to the next irradiated object 1 is performed after the signal processing (1003); in the example of FIG. 16, by contrast, the signal processing (1003) is executed in parallel with the movement to the next irradiated object 1 (1004) and the acquisition of the second set of signals (1002). This improves the throughput of image acquisition, so that more information on the irradiated objects 1 can be acquired in a shorter time.
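The overlap of the signal processing (1003) with the stage movement (1004) and the next acquisition (1002) can be pictured with a background worker. This is only a schematic sketch of the scheduling idea, with placeholder functions standing in for the actual acquisition, processing, and stage control; the thread-pool mechanism is an assumption for illustration, not the apparatus's control software.

```python
from concurrent.futures import ThreadPoolExecutor

def acquire_signals(index):
    """Stand-in for step 1002: acquire the multi-focus signal set of one irradiated object."""
    return {"object": index}

def process_signals(signal_set):
    """Stand-in for step 1003: extract the in-focus signal sequences and build the results."""
    return f"results for object {signal_set['object']}"

def move_to_next(index):
    """Stand-in for step 1004: drive the stage to the next irradiated object."""
    pass

def run_pipeline(n_objects):
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = None
        for i in range(n_objects):
            current = acquire_signals(i)                     # step 1002 for object i
            if pending is not None:
                results.append(pending.result())             # collect processing of object i-1
            pending = pool.submit(process_signals, current)  # step 1003 runs in the background
            if i + 1 < n_objects:
                move_to_next(i + 1)                          # step 1004 overlaps with step 1003
        if pending is not None:
            results.append(pending.result())
    return results

print(run_pipeline(3))
```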
In the example of FIG. 17, first, the support member 60 (for example, a plate) is placed on the stage 4 (1001). Next, a set of signals of one irradiated object 1 is acquired (1002), and then the measurement target is moved to the next irradiated object 1 (1004). The operation of moving the measurement target to the next irradiated object 1 is performed by driving the stage 4. The combination of steps 1002 and 1004 is repeated up to the last irradiated object 1 on the support member 60. Finally, the signal processing (1003) is performed collectively on the sets of signals of all the irradiated objects 1. In this flow, the signal processing (1003) is not performed every time a set of signals is acquired (1002); instead, the information on all the irradiated objects 1 is acquired in advance. This allows more information on the irradiated objects 1 to be acquired in a shorter time, and the user can execute the signal processing (1003) on the sets of signals of all the irradiated objects 1 collectively at a later time. This configuration is advantageous when step 1003 is performed by a separate control device: the information on all the irradiated objects 1 acquired in advance is transferred to the separate control device via a recording medium or a network, and that control device may perform the signal processing (1003) collectively on the sets of signals of all the irradiated objects 1.
According to the above embodiment, the signal sequence corresponding to the in-focus position of the first fluorescence, which corresponds to the first excitation light, and the signal sequence corresponding to the in-focus position of the second fluorescence, which corresponds to the second excitation light, can be obtained without focusing by means of the AF function. Conventionally, because two fluorescences of different wavelengths have different amounts of chromatic aberration, a focusing process using the AF function was required every time an image was acquired. For example, every time an image of each fluorescence was acquired, an operation such as driving a lens of the optical system or changing the height of the stage was required by the AF function, and the time taken for driving the lens or changing the stage height had a large impact on the throughput of image acquisition. Depending on the condition of the irradiated object, the AF function could also produce an error. In the present embodiment, by contrast, no focusing with the AF function is needed between the acquisition of pixel signals under irradiation with the first excitation light and the acquisition of pixel signals under irradiation with the second excitation light. As an example, no processing such as driving a lens of the optical system (a focusing lens) or changing the stage height for focusing is necessary between these two acquisitions. In terms of the example of FIG. 11, no operation such as driving a lens of the optical system (a focusing lens) or changing the stage height is necessary during steps 1101 to 1106. In the present embodiment, focusing by the AF function is unnecessary during steps 1101 to 1106, and the signal sequence corresponding to the in-focus position of the first fluorescence and the signal sequence corresponding to the in-focus position of the second fluorescence can be extracted from the acquired sets of pixel signals at the plurality of focal positions. The throughput of image acquisition is therefore greatly improved, and more images of the irradiated objects 1 can be acquired in a shorter time.
[Second Embodiment]
A second embodiment of the measurement apparatus will be described with reference to FIGS. 18 to 21. FIG. 18 is an example of a flowchart for measuring the irradiated object 1 with the measurement apparatus 10. In the following example, the light source device 2 emits the reference light, the first excitation light, and the second excitation light simultaneously.
First, the support member 60 is placed on the stage 4 (1801). Based on a signal from the control device 30, the light source device 2 emits the reference light, the first excitation light, and the second excitation light together (1802). As an example, it suffices that the irradiation period of the reference light, the irradiation period of the first excitation light, and the irradiation period of the second excitation light overlap. As an example, these irradiation periods need only at least partially overlap one another. As an example, the irradiation of the reference light, the irradiation of the first excitation light, and the irradiation of the second excitation light are started simultaneously.
The signal sequence extraction unit 31 of the control device 30 acquires a set of pixel signals at a plurality of different focal points from the sensor 5 (1803). The set of pixel signals at a plurality of different focal points here includes the set of reflected-light pixel signals, the set of first pixel signals, and the set of second pixel signals. In this example, because the reference light, the first excitation light, and the second excitation light are applied together, only one set of pixel signals at a plurality of different focal points is acquired. That is, by performing a single imaging operation on the same irradiated object 1, the set of reflected-light pixel signals, the set of first pixel signals, and the set of second pixel signals are obtained. In the first embodiment, by contrast, because the reference light, the first excitation light, and the second excitation light are applied at different times, three sets of pixel signals at a plurality of different focal points are acquired: three imaging operations are performed on the same irradiated object 1 to obtain the set of reflected-light pixel signals, the set of first pixel signals, and the set of second pixel signals. That is, for the same irradiated object 1, one imaging operation is performed to obtain the set of reflected-light pixel signals, one to obtain the set of first pixel signals, and one to obtain the set of second pixel signals.
The stage control unit 33 extracts arbitrary pixel signals from the set of pixel signals at the plurality of different focal points and determines whether any of them has a signal value greater than a predetermined threshold (1804). As an example, the signal value is a luminance value. If the condition is satisfied, the process proceeds to step 1806; if not, the process proceeds to step 1805. In the latter case, the stage control unit 33 drives the stage 4. As an example, the stage control unit 33 changes or adjusts the height (Z direction) of the stage 4 (1805). The content of step 1805 is the same as that of step 1108 in FIG. 11.
The signal sequence extraction unit 31 determines the reference focal position Z0 from the set of pixel signals at the plurality of different focal points (1806). As an example, the signal sequence extraction unit 31 generates an image from the signal sequence of pixel signals at a common focal position and holds a plurality of images for the plurality of focal positions. The signal sequence extraction unit 31 performs the following processing using the information of the plurality of images. The data need not be handled as images; the processing described below may be performed directly on the set of pixel signals.
As an example, the signal sequence extraction unit 31 divides the image at each focal position into a plurality of regions and calculates the contrast of the region corresponding to the same position in the image at each focal position. As an example, the signal sequence extraction unit 31 calculates the contrast of the central region of the image at each focal position. The region used for the contrast calculation may be a different region, or may be the entire image.
FIG. 19 is an example of a method for determining the reference focal position Z0 and shows the relationship between focal position and contrast. If the reflected light, the first fluorescence, and the second fluorescence are not substantially affected by light of other wavelengths at their respective in-focus positions, the contrast curve has three peaks, as indicated by the approximate curve in FIG. 19. As an example, the signal sequence extraction unit 31 extracts the three focal positions corresponding to the three peaks from the curve representing the relationship between focal position and contrast. A known method can be used for peak extraction; as an example, the peaks may be extracted using the first or second derivative of the approximate curve representing the relationship between focal position and contrast.
As an example, the chromatic aberration information stored in the chromatic aberration information storage unit 34 includes at least the chromatic aberration amount ΔZ1 of the first fluorescence relative to the in-focus position of the reflected light and the chromatic aberration amount ΔZ2 of the second fluorescence relative to the in-focus position of the reflected light. In this example, by referring to the chromatic aberration information, the signal sequence extraction unit 31 can determine that (i) the in-focus position of the first fluorescence is on the positive side (+Z side) of the in-focus position of the reflected light and that (ii) the in-focus position of the second fluorescence is on the negative side (-Z side) of the in-focus position of the reflected light. By referring to the chromatic aberration information, the signal sequence extraction unit 31 determines that the central one of the three peaks in FIG. 19 is the in-focus position of the reflected light, that is, the reference focal position Z0.
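A minimal sketch of the peak extraction and of choosing the middle peak as Z0, assuming a sampled contrast curve and a simple local-maximum test; the contrast values and focal labels are invented for illustration only.

```python
import numpy as np

def contrast_peaks(z_positions, contrasts):
    """Return the focal positions of local maxima of the contrast curve.

    A point counts as a peak when it is larger than both neighbours; with well
    separated in-focus positions for the reflected light and the two fluorescences,
    this yields three peaks as in the approximate curve of FIG. 19.
    """
    c = np.asarray(contrasts, dtype=float)
    idx = [i for i in range(1, len(c) - 1) if c[i] > c[i - 1] and c[i] > c[i + 1]]
    return [z_positions[i] for i in idx]

# Illustrative contrast values over Z-5 ... Z5 with three humps.
z = list(range(-5, 6))
contrast = [0.1, 0.3, 0.9, 0.3, 0.4, 1.0, 0.4, 0.3, 0.8, 0.3, 0.1]
peaks = contrast_peaks(z, contrast)
print(peaks)                      # e.g. [-3, 0, 3]
# Per the chromatic aberration information (first fluorescence on the +Z side,
# second fluorescence on the -Z side of the reflected light), the middle peak is Z0.
z0 = sorted(peaks)[1]
print("reference focal position Z0:", z0)
```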
Next, the signal sequence extraction unit 31 acquires the chromatic aberration amount ΔZ1 of the first fluorescence from the chromatic aberration information storage unit 34. Using the chromatic aberration amount ΔZ1 of the first fluorescence relative to the reference focal position Z0, the signal sequence extraction unit 31 extracts the signal sequence of pixel signals corresponding to the in-focus position of the first fluorescence (1807).
FIG. 20 shows an example of acquiring, from the set of pixel signals at the plurality of different focal points, the signal sequence of pixel signals corresponding to the in-focus position of the first fluorescence and the signal sequence of pixel signals corresponding to the in-focus position of the second fluorescence. In this example, ΔZ1 is assumed to be a shift of four steps in the +Z direction, and ΔZ2 a shift of four steps in the -Z direction. As an example, using the chromatic aberration amount ΔZ1 of the first fluorescence relative to the reference focal position Z0, the signal sequence extraction unit 31 extracts the image at the focal position (Z0 + ΔZ1 = Z4).
Next, the signal sequence extraction unit 31 acquires the chromatic aberration amount ΔZ2 of the second fluorescence from the chromatic aberration information storage unit 34. Using the chromatic aberration amount ΔZ2 of the second fluorescence relative to the reference focal position Z0, the signal sequence extraction unit 31 extracts the signal sequence of pixel signals corresponding to the in-focus position of the second fluorescence (1808). As shown in FIG. 20, as an example, the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ2 of the second fluorescence relative to the reference focal position Z0 to extract the image at the focal position (Z0 + ΔZ2 = Z-4).
Note that the position given by the reference focal position Z0 plus the chromatic aberration amount ΔZ1 or ΔZ2 may not coincide with any of the different focal points Z-5 to Z5. For example, assume that Z0 + ΔZ1 corresponds to Z4.4. In this case, none of the acquired pixel signals corresponds to Z4.4. In such a case, as one example, because Z4.4 is closer to Z4 than to Z5, the signal sequence corresponding to the closer position Z4 may be extracted. As another example, a signal sequence corresponding to Z0 + ΔZ1 (= Z4.4) may be generated from the information at the two focal positions by a weighting calculation. For example, the signal sequence extraction unit 31 may multiply the signal value of the pixel signal at Z4 and the signal value of the pixel signal at Z5 by respective weighting coefficients to generate the signal value corresponding to Z4.4. As an example, a weighted average may be used. The weighting coefficients applied to the signal values at Z4 and Z5 may be set equal, or may be set differently according to the distance from the reference focal position Z0.
Next, the signal sequence processing unit 32 performs the first signal processing on the extracted signal sequence of first-fluorescence pixel signals and performs the second signal processing on the extracted signal sequence of second-fluorescence pixel signals (1809). The content of step 1809 is the same as that of step 1112 in FIG. 11, so its description is omitted.
Although not illustrated in FIG. 18, this embodiment may also be executed according to any of the flowcharts of FIGS. 10, 16, and 17.
According to the above embodiment, even when the light source device 2 emits the reference light, the first excitation light, and the second excitation light together, the signal sequence corresponding to the in-focus position of the first fluorescence, which corresponds to the first excitation light, and the signal sequence corresponding to the in-focus position of the second fluorescence, which corresponds to the second excitation light, can be obtained. For example, it may take several seconds of excitation-light irradiation to obtain sufficient information on the corresponding fluorescence. In the present embodiment, because the reference light, the first excitation light, and the second excitation light are emitted together, the time required to acquire the first-fluorescence information and the second-fluorescence information can be shortened compared with the case where the first excitation light and the second excitation light are applied sequentially and the first-fluorescence information and the second-fluorescence information are acquired separately.
Even when the reflected light, the first fluorescence, and the second fluorescence are affected by light of other wavelengths at their respective in-focus positions, it is still possible to acquire the signal sequence corresponding to the in-focus position of the first fluorescence and the signal sequence corresponding to the in-focus position of the second fluorescence. FIG. 21 shows another example of the configuration of the measurement apparatus according to the second embodiment. The measurement apparatus 10 may further include, between the irradiated object 1 and the second objective lens 21, a separation unit 22 that separates the reflected light, the first fluorescence, and the second fluorescence. As an example, the separation unit may include a separation optical element (for example, a dichroic mirror) that separates the reflected light from the fluorescence. As an example, the separation unit may include a band-pass filter (wavelength selection unit) that separates the first fluorescence from the second fluorescence. The measurement apparatus 10 includes a third objective lens 23, a second microlens array 24, and a second sensor 25. As an example, the second microlens array 24 and the second sensor 25 are arranged in this order near the focal plane of the third objective lens 23. The configuration of the second microlens array 24 is the same as that of the microlens array 8, and the configuration of the second sensor 25 is the same as that of the sensor 5.
As an example, the separation unit 22 is configured to guide the reflected light and the first fluorescence toward the sensor 5 and to guide the reflected light and the second fluorescence toward the second sensor 25. With this configuration, the signal sequence extraction unit 31 can calculate a reference focal position Z0 from each of the sets of multifocal reflected-light pixel signals acquired by the sensors 5 and 25. For some reason, a difference may arise between the optical path length to the sensor 5 side and the optical path length to the second sensor 25 side. Even if such a difference arises, by using the reference focal positions Z0 obtained from the reflected-light information received by the respective sensors 5 and 25, the information on each fluorescence can be acquired without being affected by the difference in optical path length.
In some cases, ΔZ1 = ΔZ2. By using the separation unit, the signal sequence of pixel signals corresponding to the in-focus position of the first fluorescence and the signal sequence of pixel signals corresponding to the in-focus position of the second fluorescence can be acquired even when ΔZ1 = ΔZ2.
When a third excitation light and a fourth excitation light are used, the measurement apparatus 10 may include a second separation unit (for example, a dichroic mirror) for separating the third fluorescence and the fourth fluorescence.
As a result, even when the reflected light, the first fluorescence, and the second fluorescence are affected by light of other wavelengths at their respective in-focus positions, the signal sequence at the in-focus position of the first fluorescence and the signal sequence at the in-focus position of the second fluorescence can be acquired.
[Third Embodiment]
A third embodiment of the measurement apparatus will be described with reference to FIGS. 22 to 26. FIG. 22 is an example of a flowchart for measuring the irradiated object 1 with the measurement apparatus 10. In the example of FIG. 22, the irradiated object 1 (for example, one of a plurality of irradiated objects 1 supported by the support member 60) is irradiated with three lights: the first excitation light, the second excitation light, and the reference light. In this example, the light source device 2 emits the first excitation light, the reference light, and the second excitation light in that order, and the period in which it emits the first excitation light, the period in which it emits the reference light, and the period in which it emits the second excitation light do not overlap. The irradiated object 1 is therefore irradiated with the first excitation light, the reference light, and the second excitation light in that order, and the period in which the light source device 2 emits the reference light lies between the periods in which it emits the first excitation light and the second excitation light. The emission order of the first excitation light, the second excitation light, and the reference light is not limited to this and may be arbitrary.
Steps 2201 to 2209 in FIG. 22 contain the same content as 1001 and 1101 to 1108 in FIG. 11, respectively, so their description is omitted.
After step 2208, the signal sequence extraction unit 31 determines the reference focal position from the set of reflected-light pixel signals at the plurality of different focal points (2210). As an example, the signal sequence extraction unit 31 generates an image from the signal sequence of pixel signals at a common focal position and holds a plurality of images for the plurality of focal positions. The signal sequence extraction unit 31 performs the following processing using the information of the plurality of images. The data need not be handled as images; the following processing may be performed directly on the set of pixel signals.
As an example, the signal sequence extraction unit 31 divides the image at each focal position into a plurality of regions. FIG. 23 shows an example in which the image at each focal position Z-3 to Z3 is divided into a plurality of regions. As an example, the signal sequence extraction unit 31 may obtain the contrast at each focal position for all of the regions and determine a reference focal position for each region. The signal sequence extraction unit 31 calculates the contrast of the region corresponding to the same position in the images at the respective focal positions Z-3 to Z3. As an example, the signal sequence extraction unit 31 determines, as the reference focal position of a region, the position among Z-3 to Z3 at which the contrast of that region is maximum. In this way, the signal sequence extraction unit 31 calculates the contrast for each region and determines a reference focal position for each region.
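The per-region determination of the reference focal position can be sketched as follows, assuming the reflected-light stack is a NumPy array and the field of view is split into a regular grid; the grid size and the variance-based contrast measure are illustrative assumptions.

```python
import numpy as np

def per_region_reference_focus(stack, n_rows=3, n_cols=3):
    """Determine a reference focal position index for each region of the field.

    stack: reflected-light images, shape (n_focus, height, width). The field is split
    into an n_rows x n_cols grid; for every region, the focal index whose variance
    (used here as the contrast measure) is maximal becomes that region's reference.
    Returns an (n_rows, n_cols) array of focal-position indices.
    """
    n, h, w = stack.shape
    refs = np.zeros((n_rows, n_cols), dtype=int)
    ys = np.linspace(0, h, n_rows + 1, dtype=int)
    xs = np.linspace(0, w, n_cols + 1, dtype=int)
    for r in range(n_rows):
        for c in range(n_cols):
            region = stack[:, ys[r]:ys[r + 1], xs[c]:xs[c + 1]]
            contrasts = region.reshape(n, -1).var(axis=1)
            refs[r, c] = int(np.argmax(contrasts))
    return refs

rng = np.random.default_rng(3)
stack = rng.uniform(0.0, 1.0, size=(7, 90, 90))   # focal positions Z-3 ... Z3
print(per_region_reference_focus(stack))
```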
For example, assume that the support member 60 placed on the stage 4 is tilted. FIG. 24 shows an example of the tilt of the irradiated object 1. In this case, the in-focus position differs between a region near the right end of the irradiated object 1 (the region containing the first vertex 1a) and a region near the left end of the irradiated object 1 (the region containing the third vertex 1c). The signal sequence extraction unit 31 therefore determines the reference focal position for each region.
As an example, the signal sequence extraction unit 31 may obtain the contrast at each focal position for at least three of the regions, because the tilt of a plane can be determined once at least three points are fixed. As an example, the signal sequence extraction unit 31 may determine, for each region, the focal position at which the contrast is maximum as the reference focal position of that region.
FIG. 24 shows an example in which, when the reference focal position in the region containing the first vertex 1a of the irradiated object 1 is Z0, the reference focal position in the region containing the second vertex 1b is Z1 and the reference focal position in the region containing the third vertex 1c is Z2. In the following, an example in which the reference focal positions are calculated for these three regions is described.
The stage control unit 33 determines whether the signal values of the pixel signals at all of the calculated reference focal positions (the three reference focal positions) are greater than a predetermined threshold (2211). As an example, the signal value is a luminance value. As an example, the stage control unit 33 determines whether the luminance values at all three reference focal positions exceed the predetermined threshold. If even one luminance value is equal to or below the threshold, the support member 60 may be regarded as being tilted by more than a certain amount. If the tilt of the support member 60 exceeds a certain amount, the reference focal positions of the individual regions differ greatly, and as a result it may become difficult to acquire the signal sequence corresponding to the in-focus position of the first fluorescence and the signal sequence corresponding to the in-focus position of the second fluorescence. The support member 60 may also simply not be placed in the correct state (for example, horizontally) on the stage 4 in the first place. Therefore, when the signal value of the pixel signal at even one reference focal position is equal to or below the predetermined threshold, the process proceeds to step 2209. When the stage 4 is a stage that can rotate about the X axis and the Y axis (a stage whose tilt can be changed), the stage control unit 33 may drive the stage 4 and adjust its tilt (2209). If the above condition is satisfied, the process proceeds to step 2212.
As another example, when the signal value of the pixel signal at even one reference focal position is equal to or below the predetermined threshold, the stage control unit 33 may display an error on the display device 40, or may record the content of the error as data in the storage device.
As another example of the tilt determination, the stage control unit 33 may calculate the contrasts of the regions corresponding to the three calculated reference focal positions and determine whether all of the contrast values are greater than a predetermined threshold. If even one contrast does not exceed the threshold, the support member 60 may be regarded as being tilted by more than a certain amount.
As another example of the tilt determination, the signal sequence extraction unit 31 may determine the reference focal positions for three of the regions and determine the tilt of the irradiated object 1 from the reference focal position of each region. Because the various dimensions of the irradiated object 1 (vertical dimension, horizontal dimension, distance between spots, and so on) are known in advance, the signal sequence extraction unit 31 may calculate the tilt of the irradiated object 1 using the information on the three reference focal positions and the dimensions of the irradiated object 1. As an example, the stage control unit 33 may determine whether the calculated tilt is greater than a predetermined threshold, and may change or adjust the height (Z direction) of the stage 4 when the calculated tilt is greater than the predetermined threshold.
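The tilt calculation from three reference focal positions can be sketched as fitting a plane through three points whose lateral coordinates come from the known dimensions of the irradiated object 1; the coordinates, units, and function name are illustrative assumptions.

```python
import numpy as np

def plane_tilt_from_three_points(p1, p2, p3):
    """Estimate the tilt of the irradiated object from three (x, y, z) points.

    x and y come from the known dimensions of the object (e.g. vertex or spot
    coordinates); z is the reference focal position found for each region.
    Returns the slopes dz/dx and dz/dy of the plane through the three points.
    """
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    # Solve z = a*x + b*y + c for a and b (the tilt) via a small linear system.
    A = np.array([[x1, y1, 1.0], [x2, y2, 1.0], [x3, y3, 1.0]])
    z = np.array([z1, z2, z3], dtype=float)
    a, b, _c = np.linalg.solve(A, z)
    return a, b

# Vertices 1a, 1b, 1c at assumed lateral positions, with their reference focal positions.
dzdx, dzdy = plane_tilt_from_three_points((10.0, 0.0, 0.0), (5.0, 5.0, 1.0), (0.0, 0.0, 2.0))
print(dzdx, dzdy)   # compare against a threshold to decide whether to adjust the stage
```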
The determination of the reference focal positions for the regions other than the three regions will now be described. When the signal sequence extraction unit 31 determines that the support member 60 is not tilted, it determines the reference focal positions for all regions other than the three regions. As an example, if the three reference focal positions are the same, the signal sequence extraction unit 31 may determine that all regions other than the three regions have that same reference focal position. In this case, the subsequent steps 2212 to 2214 are the same as steps 1110 to 1112 in FIG. 11, so their description is omitted.
As another example, when the three calculated reference focal positions differ, the signal sequence extraction unit 31 may assign to the regions surrounding each of the three regions the same reference focal position as that region. For example, if the reference focal position in the region containing the first vertex 1a of the irradiated object 1 is Z0, the signal sequence extraction unit 31 may determine the reference focal position of several regions around that region to be Z0. If the reference focal position in the region containing the second vertex 1b is Z1, the signal sequence extraction unit 31 may determine the reference focal position of several regions around that region to be Z1. If the reference focal position in the region containing the third vertex 1c is Z2, the signal sequence extraction unit 31 may determine the reference focal position of several regions around that region to be Z2.
As another example, the signal sequence extraction unit 31 determines the reference focal positions of all regions other than the three regions according to the calculated tilt of the irradiated object 1. As an example, the signal sequence extraction unit 31 obtains, from the calculated tilt, the amount of change in the Z direction between adjacent regions and determines the reference focal positions based on that amount of change. As an example, when the Z-direction value of a region calculated from the amount of change is equal to or below the midpoint between Z0 and Z1, the reference focal position of that region may be determined to be Z0; when it is greater than the midpoint between Z0 and Z1, the reference focal position may be determined to be Z1.
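The assignment rule based on the midpoint between Z0 and Z1 can be sketched by snapping each region's Z estimate (obtained from the tilt) to the nearest candidate reference focal position; with two candidates and ties resolved toward the first, this reproduces the rule stated above. The names and values are illustrative.

```python
def assign_reference_focus(z_estimate, z_candidates):
    """Snap a per-region Z estimate (obtained from the tilt) to the nearest of the
    reference focal positions found for the measured regions.

    For two candidates Za < Zb this reproduces the rule in the text: estimates at or
    below the midpoint get Za, estimates above it get Zb.
    """
    return min(z_candidates, key=lambda z: abs(z - z_estimate))

# Regions lying between the Z0 region and the Z1 region, with Z estimates from the tilt.
candidates = [0, 1]          # the reference focal positions Z0 and Z1
for estimate in (0.2, 0.5, 0.7):
    print(estimate, "->", assign_reference_focus(estimate, candidates))
```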
 図25は、被照射体1の複数の領域で参照焦点位置が異なる場合のステップ2212及び2213の処理を説明する図である。図25は、反射光の画素信号の集合、第1の画素信号の集合、及び第2の画素信号の集合を示す。 FIG. 25 is a diagram for explaining the processing in steps 2212 and 2213 when the reference focal positions are different in a plurality of regions of the irradiated object 1. FIG. 25 shows a set of reflected light pixel signals, a set of first pixel signals, and a set of second pixel signals.
 信号列抽出部31は、色収差情報格納部34から第1の蛍光の色収差量ΔZの情報を取得する。信号列抽出部31は、被照射体1の各領域ごとに、参照焦点位置からの第1の蛍光の色収差量ΔZを用いて、第1の蛍光の合焦点に対応する画素信号の信号列を抽出する(2212)。 The signal string extraction unit 31 acquires information on the first fluorescence chromatic aberration amount ΔZ 1 from the chromatic aberration information storage unit 34. The signal sequence extraction unit 31 uses the first fluorescence chromatic aberration amount ΔZ 1 from the reference focal position for each region of the irradiated object 1, and the signal sequence of the pixel signal corresponding to the focal point of the first fluorescence. Is extracted (2212).
 図25において、各画素信号の集合は、複数の異なる焦点Z-4~Zで取得されている。この例では、被照射体1の第1の頂点1aを含む複数の領域(被照射体1の右端の領域)の参照焦点位置をZとすると、被照射体1の第2の頂点1bを含む複数の領域(被照射体1の中央の領域)での参照焦点位置がZと判定され、被照射体1の第3の頂点1cを含む複数の領域(被照射体1の左端の領域)での参照焦点位置がZとして判定されたと想定する。図25の2501の網掛け部分が、各領域ごとの参照焦点位置を示す。 In FIG. 25, a set of each pixel signal is acquired at a plurality of different focal points Z −4 to Z 4 . In this example, if the reference focal position of a plurality of areas including the first vertex 1a of the irradiated object 1 (the rightmost area of the irradiated object 1) is Z 0 , the second vertex 1b of the irradiated object 1 is Referring focal position of a plurality of regions (central region of the irradiated object 1) containing it is determined that Z 1, a plurality of regions (left end in the region of the irradiation object 1 including a third apex 1c of the irradiation object 1 ) reference focal position in it is assumed to have been determined as Z 2. The shaded portion 2501 in FIG. 25 indicates the reference focal position for each region.
 被照射体1の第1の頂点1aを含む複数の領域に関して、信号列抽出部31は、参照焦点位置Zからの第1の蛍光の色収差量ΔZを用いて、第1の画素信号の集合から、焦点位置Zの画素信号の信号列(2502のZの網掛け部分)を取得する(矢印2511)。被照射体1の第2の頂点1bを含む複数の領域に関して、信号列抽出部31は、参照焦点位置Zからの第1の蛍光の色収差量ΔZを用いて、第1の画素信号の集合から、焦点位置Zの画素信号の信号列(2502のZの網掛け部分)を取得する(矢印2512)。被照射体1の第3の頂点1cを含む複数の領域に関して、信号列抽出部31は、参照焦点位置Zからの第1の蛍光の色収差量ΔZを用いて、第1の画素信号の集合から、焦点位置Zの画素信号の信号列(2502のZの網掛け部分)を取得する(矢印2513)。以上のように抽出された信号列の組み合わせは、第1の蛍光の合焦点での蛍光像を表す信号列に対応する。 For a plurality of areas including a first apex 1a of the irradiation object 1, the signal sequence extraction unit 31, the first fluorescence from the reference focus position Z 0 with chromatic aberration amount [Delta] Z 1, of the first pixel signal from the set, to acquire a signal sequence of the pixel signals of the focus position Z 2 (hatched portion of Z 2 of 2502) (arrow 2511). For a plurality of regions including a second vertex 1b of the irradiation object 1, the signal sequence extraction unit 31, the first fluorescence from the reference focus position Z 1 using chromatic aberration amount [Delta] Z 1, of the first pixel signal from the set, to acquire a signal sequence of the pixel signals of the focus position Z 3 (shaded portion of Z 3 of 2502) (arrow 2512). For a plurality of areas including a third apex 1c of the irradiation object 1, the signal sequence extraction unit 31, the first fluorescence from the reference focus position Z 2 using chromatic aberration amount [Delta] Z 1, of the first pixel signal from the set, to acquire a signal sequence of the pixel signals of the focus position Z 4 (shaded portion of Z 4 of 2502) (arrow 2513). The combination of the signal sequences extracted as described above corresponds to a signal sequence representing a fluorescence image at the focal point of the first fluorescence.
 次に、信号列抽出部31は、色収差情報格納部34から第2の蛍光の色収差量ΔZ2の情報を取得する。信号列抽出部31は、被照射体1の各領域ごとに、参照焦点位置からの第2の蛍光の色収差量ΔZ2を用いて、第2の蛍光の合焦点に対応する画素信号の信号列を抽出する(2213)。 Next, the signal sequence extraction unit 31 acquires information on the chromatic aberration amount ΔZ2 of the second fluorescence from the chromatic aberration information storage unit 34. For each region of the irradiated object 1, the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ2 of the second fluorescence relative to the reference focal position to extract the signal sequence of pixel signals corresponding to the focal point of the second fluorescence (2213).
 被照射体1の第1の頂点1aを含む複数の領域に関して、信号列抽出部31は、参照焦点位置Z0からの第2の蛍光の色収差量ΔZ2を用いて、第2の画素信号の集合から、焦点位置Z-2の画素信号の信号列(2503のZ-2の網掛け部分)を取得する(矢印2521)。被照射体1の第2の頂点1bを含む複数の領域に関して、信号列抽出部31は、参照焦点位置Z1からの第2の蛍光の色収差量ΔZ2を用いて、第2の画素信号の集合から、焦点位置Z-1の画素信号の信号列(2503のZ-1の網掛け部分)を取得する(矢印2522)。被照射体1の第3の頂点1cを含む複数の領域に関して、信号列抽出部31は、参照焦点位置Z2からの第2の蛍光の色収差量ΔZ2を用いて、第2の画素信号の集合から、焦点位置Z0の画素信号の信号列(2503のZ0の網掛け部分)を取得する(矢印2523)。以上のように抽出された信号列の組み合わせは、第2の蛍光の合焦点での蛍光像を表す信号列に対応する。 For the plurality of regions including the first vertex 1a of the irradiated object 1, the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ2 of the second fluorescence relative to the reference focal position Z0 to acquire, from the set of second pixel signals, the signal sequence of pixel signals at the focal position Z-2 (the shaded portion at Z-2 in 2503) (arrow 2521). For the plurality of regions including the second vertex 1b of the irradiated object 1, the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ2 of the second fluorescence relative to the reference focal position Z1 to acquire, from the set of second pixel signals, the signal sequence of pixel signals at the focal position Z-1 (the shaded portion at Z-1 in 2503) (arrow 2522). For the plurality of regions including the third vertex 1c of the irradiated object 1, the signal sequence extraction unit 31 uses the chromatic aberration amount ΔZ2 of the second fluorescence relative to the reference focal position Z2 to acquire, from the set of second pixel signals, the signal sequence of pixel signals at the focal position Z0 (the shaded portion at Z0 in 2503) (arrow 2523). The combination of the signal sequences extracted in this way corresponds to a signal sequence representing the fluorescence image at the focal point of the second fluorescence.
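 For illustration only, the following Python sketch shows one way the region-by-region extraction of steps 2212 and 2213 could be expressed: for each region, the slice offset from that region's reference focal index by a chromatic aberration amount (in focal steps) is selected from the z-stack. The array layout, the function name extract_in_focus, and the integer-step form of ΔZ1 and ΔZ2 are assumptions made for this sketch, not part of the disclosed apparatus.

    import numpy as np

    def extract_in_focus(stack, ref_index_map, delta_steps):
        # stack         : (n_focus, n_region_rows, n_region_cols) pixel signals, one value
        #                 per region per focal position (regions collapsed for brevity)
        # ref_index_map : integer reference focal index per region, found from reflected light
        # delta_steps   : chromatic aberration amount (dZ1 or dZ2) expressed in focal steps
        n_focus = stack.shape[0]
        target = np.clip(ref_index_map + delta_steps, 0, n_focus - 1)
        rows, cols = np.indices(ref_index_map.shape)
        # One slice value per region: the signal sequence at the in-focus position.
        return stack[target, rows, cols]

    # Example matching FIG. 25: focal positions Z-4..Z4 mapped to indices 0..8,
    # reference focal positions Z0, Z1, Z2 for the right, central, and left regions.
    stack = np.random.rand(9, 1, 3)
    ref = np.array([[4, 5, 6]])                              # Z0, Z1, Z2
    first = extract_in_focus(stack, ref, delta_steps=+2)     # dZ1: -> Z2, Z3, Z4
    second = extract_in_focus(stack, ref, delta_steps=-2)    # dZ2: -> Z-2, Z-1, Z0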
 次に、信号列処理部32は、抽出された第1の蛍光の合焦点に対応する画素信号の信号列に対して第1の信号処理を実行し、抽出された第2の蛍光の合焦点に対応する画素信号の信号列に対して第2の信号処理を実行する(2214)。なお、ステップ2214の内容は、図11のステップ1112と同様の内容であるため、説明を省略する。 Next, the signal sequence processing unit 32 performs the first signal processing on the extracted signal sequence of pixel signals corresponding to the focal point of the first fluorescence, and performs the second signal processing on the extracted signal sequence of pixel signals corresponding to the focal point of the second fluorescence (2214). Since the content of step 2214 is the same as that of step 1112 in FIG. 11, its description is omitted.
 図26は、参照焦点位置を算出する3つの点の別の例である。参照焦点位置を算出する3つの点は、被照射体1の2つの頂点と、被照射体1の辺の中点とを結ぶ二等辺三角形で定義してもよい。これら3つの点が決まれば、平面の傾きを特定することができる。この例では、被照射体1の2つの頂点での参照焦点位置がZ1と判定され、被照射体1の辺の中点での参照焦点位置がZ0と判定されている。 FIG. 26 shows another example of the three points used to calculate the reference focal position. The three points for calculating the reference focal position may be defined as an isosceles triangle connecting two vertices of the irradiated object 1 and the midpoint of a side of the irradiated object 1. Once these three points are determined, the inclination of the plane can be specified. In this example, the reference focal position at the two vertices of the irradiated object 1 is determined to be Z1, and the reference focal position at the midpoint of the side of the irradiated object 1 is determined to be Z0.
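 As a minimal sketch of how the three reference focal positions of FIG. 26 could define the inclination, the snippet below fits the plane z = a*x + b*y + c through three (x, y, z_ref) points and interpolates a reference focal position for every region centre. The coordinates, the function names, and the use of the slope magnitude as the tilt value are illustrative assumptions only.

    import numpy as np

    def plane_from_points(p1, p2, p3):
        # Fit z = a*x + b*y + c through three (x, y, z_ref) measurement points.
        A = np.array([[p1[0], p1[1], 1.0],
                      [p2[0], p2[1], 1.0],
                      [p3[0], p3[1], 1.0]])
        z = np.array([p1[2], p2[2], p3[2]])
        a, b, c = np.linalg.solve(A, z)
        return a, b, c

    def reference_focus_map(a, b, c, xs, ys):
        # Interpolated reference focal position at every region centre (x, y).
        X, Y = np.meshgrid(xs, ys)
        return a * X + b * Y + c

    # Example corresponding to FIG. 26: two vertices at Z1, midpoint of a side at Z0.
    a, b, c = plane_from_points((0.0, 0.0, 1.0), (10.0, 0.0, 1.0), (5.0, 8.0, 0.0))
    tilt = np.hypot(a, b)   # slope magnitude, usable as a tilt indicator
    zmap = reference_focus_map(a, b, c, np.linspace(0.0, 10.0, 5), np.linspace(0.0, 8.0, 4))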
 なお、図22では図示を省略しているが、本実施形態に関しても、図10、図16、図17のいずれのフローチャートで実行されてもよい。 Although not shown in FIG. 22, this embodiment may be executed in any of the flowcharts of FIG. 10, FIG. 16, and FIG.
 以上の実施形態によれば、被照射体1の傾きを判定しつつ、第1の蛍光の合焦点での信号列、及び、第2の蛍光の合焦点での信号列を取得することができる。被照射体1が一定以上傾いていることを自動的に検知し、被照射体1の傾斜を調整することもできる。従来では、被照射体1の傾きを判定できなかった。また、従来では、AF機能を用いてある特定の焦点位置(すなわち、1つの焦点位置)に合わせて蛍光像を取得していたため、被照射体が傾いていた場合には被照射体の全ての領域で合焦点での情報が得られない可能性があった。本実施形態によれば、被照射体1の傾きを判定し、被照射体1の各領域ごとで合焦点位置を判定することができる。被照射体1の各領域で参照焦点位置が異なる場合でも、合焦点に対応する画素信号の信号列を取得することができる。 According to the above embodiment, the signal sequence at the focal point of the first fluorescence and the signal sequence at the focal point of the second fluorescence can be acquired while determining the inclination of the irradiated object 1. It is also possible to automatically detect that the irradiated object 1 is inclined by more than a certain amount and to adjust the inclination of the irradiated object 1. Conventionally, the inclination of the irradiated object 1 could not be determined. In addition, since a fluorescence image was conventionally acquired at one specific focal position (that is, a single focal position) set by the AF function, there was a possibility that in-focus information could not be obtained for every region of the irradiated object when the irradiated object was inclined. According to this embodiment, the inclination of the irradiated object 1 can be determined, and the in-focus position can be determined for each region of the irradiated object 1. Even when the reference focal position differs from region to region of the irradiated object 1, a signal sequence of pixel signals corresponding to the focal point can be acquired.
 なお、本実施形態は、参照光、第1の励起光、及び第2の励起光を共に放射する構成にも適用可能である。参照光、第1の励起光、及び第2の励起光を共に放射した場合、信号列抽出部31は、被照射体1の少なくとも3つの領域に関してコントラストの変化を求め、各領域ごとに3つのピークを判定すればよい。信号列抽出部31は、色収差情報を用いて、3つのピークのうちの1つを参照焦点位置と判定すればよい。 This embodiment is also applicable to a configuration in which the reference light, the first excitation light, and the second excitation light are emitted together. When the reference light, the first excitation light, and the second excitation light are emitted together, the signal sequence extraction unit 31 may obtain the change in contrast for at least three regions of the irradiated object 1 and determine three peaks for each region. Using the chromatic aberration information, the signal sequence extraction unit 31 may determine one of the three peaks as the reference focal position.
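 The snippet below is a rough sketch, under simplifying assumptions, of how three contrast peaks along the focus axis might be found for one region when the reference light and both excitation lights are emitted together, and how known chromatic offsets could single out the reflected-light peak as the reference focal position. The standard deviation as a contrast measure, scipy.signal.find_peaks, and the matching heuristic are illustrative choices, not the method defined by this disclosure.

    import numpy as np
    from scipy.signal import find_peaks

    def contrast_curve(region_stack):
        # Contrast of one region at each focal index (standard deviation as a simple proxy).
        return np.array([slice_.std() for slice_ in region_stack])

    def pick_reference_peak(curve, dz1_steps, dz2_steps, tol=1):
        # Among the detected peaks, return the one whose companions sit at the
        # expected chromatic offsets dz1_steps and dz2_steps (in focal steps).
        peaks, _ = find_peaks(curve)
        for p in peaks:
            others = [o for o in peaks if o != p]
            expected = (p + dz1_steps, p + dz2_steps)
            if all(any(abs(o - e) <= tol for o in others) for e in expected):
                return int(p)   # index of the reference focal position
        return None

    # region_stack would be the (n_focus, h, w) pixel signals of one region acquired
    # while the reference light and both excitation lights illuminate the object.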
[第4実施形態]
 本実施形態では、光源装置2が、参照光を射出せずに第1の励起光と第2の励起光とを射出する構成について説明する。以下の例では、光源装置2が、第1の励起光及び第2の励起光を、第1の励起光、第2の励起光の順に射出する。なお、第1の励起光第2の励起光の射出順序は、これに限定されず、第2の励起光、第1の励起光の順でもよい。
[Fourth Embodiment]
In the present embodiment, a configuration in which the light source device 2 emits the first excitation light and the second excitation light without emitting the reference light will be described. In the following example, the light source device 2 emits the first excitation light and the second excitation light in the order of the first excitation light and the second excitation light. Note that the emission order of the first excitation light and the second excitation light is not limited to this, and may be the order of the second excitation light and the first excitation light.
 信号列抽出部31は、センサ5から、複数の異なる焦点における第1の画素信号の集合、及び、複数の異なる焦点における第2の画素信号の集合を取得する。次に、一例として、信号列抽出部31は、複数の異なる焦点における第1の画素信号の集合から、第1の蛍光の合焦点位置を判定する。一例として、信号列抽出部31は、複数の異なる焦点における第1の画素信号の集合から、最も高い輝度値を有する画素信号の焦点位置を第1の蛍光の合焦点位置として判定する。十分な輝度値を有する画素信号が得られれば、輝度値が最大となる焦点位置を、第1の蛍光の合焦点位置とみなすことが可能である。信号列抽出部31は、ここで判定された第1の蛍光の合焦点位置に対応する画素信号の信号列を取得する。 The signal string extraction unit 31 acquires from the sensor 5 a set of first pixel signals at a plurality of different focal points and a set of second pixel signals at a plurality of different focal points. Next, as an example, the signal string extraction unit 31 determines the focal point position of the first fluorescence from a set of first pixel signals at a plurality of different focal points. As an example, the signal sequence extraction unit 31 determines the focal position of the pixel signal having the highest luminance value as the focal point position of the first fluorescence from the set of first pixel signals at a plurality of different focal points. If a pixel signal having a sufficient luminance value is obtained, the focal position where the luminance value is maximum can be regarded as the focal point position of the first fluorescence. The signal sequence extraction unit 31 acquires a signal sequence of pixel signals corresponding to the in-focus position of the first fluorescence determined here.
 色収差情報格納部34は、第1の蛍光と第2の蛍光との間の色収差量の関係を表す情報を含む。この関係を用いて、第1の蛍光の合焦点位置から第2の蛍光の合焦点位置を求めることが可能である。一例として、信号列抽出部31は、第1の蛍光の合焦点位置からの第2の蛍光の色収差量を用いて、第2の蛍光の合焦点位置に対応する画素信号の信号列を抽出する。ここで抽出された画素信号の信号列は、実質的に第2の蛍光の合焦点での蛍光像を表す信号列に対応する。 The chromatic aberration information storage unit 34 contains information representing the relationship between the chromatic aberration amounts of the first fluorescence and the second fluorescence. Using this relationship, the in-focus position of the second fluorescence can be obtained from the in-focus position of the first fluorescence. As an example, the signal sequence extraction unit 31 uses the chromatic aberration amount of the second fluorescence relative to the in-focus position of the first fluorescence to extract the signal sequence of pixel signals corresponding to the in-focus position of the second fluorescence. The signal sequence of pixel signals extracted here substantially corresponds to a signal sequence representing the fluorescence image at the focal point of the second fluorescence.
 なお、信号列抽出部31は、最も高い輝度値の値が所定のしきい値より小さい場合、十分な輝度値を有する画素信号が得られなかったと判定してもよい。検体によっては、被照射体1から蛍光が発しない場合もある。十分な輝度値を有する画素信号が得られなかったという情報も、ユーザに対して有効な情報となり得る。 Note that the signal string extraction unit 31 may determine that a pixel signal having a sufficient luminance value has not been obtained when the highest luminance value is smaller than a predetermined threshold value. Depending on the specimen, fluorescence may not be emitted from the irradiated object 1. Information that a pixel signal having a sufficient luminance value cannot be obtained can also be effective information for the user.
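 A minimal sketch of the flow just described, assuming the z-stack layout and names below: pick the focal index with the highest luminance for the first fluorescence, report failure when it falls below a threshold, and derive the in-focus index of the second fluorescence from the stored relationship between the two chromatic aberration amounts.

    import numpy as np

    def first_fluorescence_focus(stack1, threshold):
        # stack1: (n_focus, h, w) first-fluorescence pixel signals at different focal positions.
        peak_per_slice = stack1.reshape(stack1.shape[0], -1).max(axis=1)
        best = int(np.argmax(peak_per_slice))
        if peak_per_slice[best] < threshold:
            return None   # no pixel signal with a sufficient luminance value
        return best

    def second_fluorescence_focus(first_focus, dz12_steps, n_focus):
        # dz12_steps: relationship between the two chromatic aberration amounts, in focal steps.
        return int(np.clip(first_focus + dz12_steps, 0, n_focus - 1))

    # Usage sketch (stack2 acquired while only the second excitation light is emitted):
    # f1 = first_fluorescence_focus(stack1, threshold=100)
    # if f1 is not None:
    #     f2 = second_fluorescence_focus(f1, dz12_steps=-4, n_focus=stack2.shape[0])
    #     in_focus_second = stack2[f2]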
 以上の実施形態によれば、参照光を用いない場合でも、AF機能による焦点合わせを実行することなく、第1の蛍光の合焦点に対応する信号列、及び、第2の蛍光の合焦点に対応する信号列を取得することができる。 According to the above embodiment, even when the reference light is not used, the signal sequence corresponding to the focal point of the first fluorescence and the signal sequence corresponding to the focal point of the second fluorescence can be acquired without performing focusing by the AF function.
 別の例として、信号列抽出部31は、第2の蛍光の画素信号の集合から、第2の蛍光の合焦点位置を判定し、第2の蛍光の合焦点位置からの第1の蛍光の色収差量を用いて、第1の蛍光の画素信号の信号列を抽出してもよい。 As another example, the signal sequence extraction unit 31 may determine the in-focus position of the second fluorescence from the set of pixel signals of the second fluorescence, and may extract the signal sequence of pixel signals of the first fluorescence using the chromatic aberration amount of the first fluorescence relative to the in-focus position of the second fluorescence.
 本実施形態は、光源装置2が第1の励起光及び第2の励起光を同時に放射する構成にも適用可能である。一例として、測定装置10が、被照射体1とマイクロレンズアレイ8との間に、第1の蛍光及び第2の蛍光を分離する分離部を設けてもよい。測定装置10は、第3の対物レンズと、第2のマイクロレンズアレイと、第2のセンサとを備えてもよい。一例として、第3の対物レンズの焦点面近傍に、第2のマイクロレンズアレイと第2のセンサとが当該順に配置される。第2のマイクロレンズアレイの構成は、マイクロレンズアレイ8と同様の構成である。第2のセンサの構成は、センサ5と同様の構成である。一例として、分離部は、第1の蛍光をセンサ5側へ導き、第2の蛍光を第2のセンサへ導くように構成される。 This embodiment can also be applied to a configuration in which the light source device 2 emits the first excitation light and the second excitation light simultaneously. As an example, the measurement apparatus 10 may provide a separation unit that separates the first fluorescence and the second fluorescence between the irradiated object 1 and the microlens array 8. The measuring apparatus 10 may include a third objective lens, a second microlens array, and a second sensor. As an example, the second microlens array and the second sensor are arranged in this order near the focal plane of the third objective lens. The configuration of the second microlens array is the same as that of the microlens array 8. The configuration of the second sensor is the same as that of the sensor 5. As an example, the separation unit is configured to guide the first fluorescence to the sensor 5 side and guide the second fluorescence to the second sensor.
 別の例として、信号列抽出部31が、複数の異なる焦点位置の画素信号の集合からコントラストを求め、2つのコントラストのピークを判定してもよい。 As another example, the signal sequence extraction unit 31 may obtain a contrast from a set of pixel signals at a plurality of different focal positions and determine two contrast peaks.
[第5実施形態]
 図27は、上記の実施形態の測定装置10を備える測定システム(スクリーニング装置)を示す図である。測定システムは、上記に説明した被照射体1の測定方法を自動的に行うシステムである。測定システム100は、前処理装置(反応装置、バイオアッセイ装置)101と、搬送装置(プレートローダ)102と、測定装置103とを備えている。
[Fifth Embodiment]
FIG. 27 is a diagram illustrating a measurement system (screening apparatus) including the measurement apparatus 10 according to the above-described embodiment. The measurement system is a system that automatically performs the measurement method of the irradiated object 1 described above. The measurement system 100 includes a pretreatment device (reaction device, bioassay device) 101, a transport device (plate loader) 102, and a measurement device 103.
 前処理装置101は、測定対象の被照射体1を用意するバイオアッセイ装置である。一例として、前処理装置101は、被照射体1(スポットに配置された生体分子)に対して、標識された標的を含む検体(ターゲット)を注入して、生体分子と標的とに特異的な反応を行わせる装置である。一例として、前処理装置101は、支持部材60(例えば、プレート)を支持するステージ装置と、各スポットに対して検体を注入する分注ノズルを備えた分注装置と、検体注入後の被照射体を洗浄する洗浄装置と、を備える。 The pretreatment apparatus 101 is a bioassay apparatus that prepares the irradiated object 1 to be measured. As an example, the pretreatment apparatus 101 is an apparatus that injects a specimen (target) containing a labeled target into the irradiated object 1 (biomolecules arranged in spots) and causes a specific reaction between the biomolecules and the target. As an example, the pretreatment apparatus 101 includes a stage device that supports the support member 60 (for example, a plate), a dispensing device having a dispensing nozzle that injects the specimen into each spot, and a cleaning device that cleans the irradiated object after the specimen has been injected.
 前処理装置101は、洗浄後の被照射体を乾燥させる乾燥装置を備えてもよい。前処理装置101は、支持部材60を一枚ずつ処理する構成でも、複数枚同時に処理する構成であってもよい。 The pretreatment device 101 may include a drying device that dries the irradiated object after cleaning. The pre-processing apparatus 101 may be configured to process the support members 60 one by one or may be configured to process a plurality of sheets simultaneously.
 搬送装置102は、支持部材60を前処理装置101から測定装置103へ搬送する搬送機構である。搬送装置102としては、公知の搬送ロボット装置を用いることができる。搬送装置102は、前処理装置101のステージ装置から支持部材60を搬出し、測定装置103へ搬入する。搬送装置102は、洗浄後の支持部材60を一時的に待機させる機構が設けられていてもよい。 The transport device 102 is a transport mechanism that transports the support member 60 from the pretreatment apparatus 101 to the measurement apparatus 103. A known transfer robot can be used as the transport device 102. The transport device 102 unloads the support member 60 from the stage device of the pretreatment apparatus 101 and loads it into the measurement apparatus 103. The transport device 102 may be provided with a mechanism for temporarily holding the cleaned support member 60 in a standby state.
 測定装置103は、上述した実施形態の測定装置10を備えている。測定装置103は、搬送装置102によりステージ4に配置された支持部材60の被照射体1の測定を行う。測定装置103による測定過程は、先に説明したとおりである。搬送装置102は、測定が終了した支持部材60をステージ4から搬出し、所定の位置へ搬送する。 The measuring apparatus 103 includes the measuring apparatus 10 according to the above-described embodiment. The measuring device 103 measures the irradiated object 1 of the support member 60 disposed on the stage 4 by the transport device 102. The measurement process by the measurement apparatus 103 is as described above. The conveyance device 102 carries out the support member 60 whose measurement has been completed from the stage 4 and conveys it to a predetermined position.
 以上の測定システム100によれば、被照射体1に対する前処理(反応処理、バイオアッセイ)と、前処理(反応処理、バイオアッセイ)後の被照射体1に対する測定処理とを連続的に行い生体分子アレイをスクリーニングすることができる。 According to the measurement system 100 described above, the pretreatment (reaction processing, bioassay) of the irradiated object 1 and the measurement processing of the irradiated object 1 after the pretreatment (reaction processing, bioassay) can be performed continuously, so that a biomolecule array can be screened.
 以上、添付図面を参照しながら本発明に係る好適な実施形態について説明したが、本発明は係る例に限定されないことは言うまでもない。上述した例において示した各構成要素の組み合わせ等は一例であって、本発明の主旨から逸脱しない範囲において設計要求等に基づき種々変更可能である。 As described above, the preferred embodiments according to the present invention have been described with reference to the accompanying drawings, but it goes without saying that the present invention is not limited to such examples. The combination of each component shown in the example mentioned above is an example, and can be variously changed based on a design request etc. in the range which does not deviate from the main point of this invention.
 一例として、上述した第1実施形態~第4実施形態を適宜組み合わせることもできる。制御装置30は、上述した第1実施形態~第4実施形態の処理を切り替えるための切替処理部を備えてもよい。切替処理部により、第1実施形態~第4実施形態のいずれかの処理を選択することが可能となる。 As an example, the above-described first to fourth embodiments can be appropriately combined. The control device 30 may include a switching processing unit for switching the processes of the first to fourth embodiments described above. The switching processing unit can select one of the processes in the first to fourth embodiments.
 上述した制御装置30の処理は、それらの機能を実現するソフトウェアのプログラムコードによっても実現できる。この場合、プログラムコードを記録した記憶媒体をシステム或は装置に提供し、そのシステム或は装置のコンピュータ(又はCPUやMPU)が記憶媒体に格納されたプログラムコードを読み出す。この場合、記憶媒体から読み出されたプログラムコード自体が前述した実施形態の機能を実現することになり、そのプログラムコード自体、及びそれを記憶した記憶媒体は本発明を構成することになる。このようなプログラムコードを供給するための記憶媒体としては、例えば、フレキシブルディスク、CD-ROM、DVD-ROM、ハードディスク、光ディスク、光磁気ディスク、CD-R、磁気テープ、不揮発性のメモリカード、ROMなどが用いられる。 The processing of the control device 30 described above can also be realized by software program codes that realize these functions. In this case, a storage medium in which the program code is recorded is provided to the system or apparatus, and the computer (or CPU or MPU) of the system or apparatus reads the program code stored in the storage medium. In this case, the program code itself read from the storage medium realizes the functions of the above-described embodiments, and the program code itself and the storage medium storing the program code constitute the present invention. As a storage medium for supplying such program code, for example, a flexible disk, CD-ROM, DVD-ROM, hard disk, optical disk, magneto-optical disk, CD-R, magnetic tape, nonvolatile memory card, ROM Etc. are used.
 ここで述べたプロセス及び技術は本質的に如何なる特定の装置に関連することはなく、コンポーネントの如何なる相応しい組み合わせによってでも実装できる。更に、汎用目的の多様なタイプのデバイスが使用可能である。ここで述べた処理を実行するのに、専用の装置を構築するのが有益である場合もある。つまり、上述した制御装置30の一部が、例えば集積回路等の電子部品を用いたハードウェアにより実現されてもよい。 The processes and techniques described here are not inherently related to any particular equipment and can be implemented by any suitable combination of components. In addition, various types of devices for general purpose can be used. In some cases, it may be beneficial to build a dedicated device to perform the processing described here. That is, a part of the control device 30 described above may be realized by hardware using an electronic component such as an integrated circuit.
 さらに、上述の実施形態において、制御線や情報線は説明上必要と考えられるものを示しており、製品上必ずしも全ての制御線や情報線を示しているとは限らない。全ての構成が相互に接続されていても良い。 Furthermore, in the above-described embodiment, control lines and information lines are those that are considered necessary for the explanation, and not all control lines and information lines on the product are necessarily shown. All the components may be connected to each other.
1…被照射体、2…光源装置、3…光学システム、4…ステージ、5…センサ、6…照射光学系、7…結像光学系、8…マイクロレンズアレイ、10…測定装置、11…第1のレンズ、12…明るさ絞り、13…視野絞り、14…第2のレンズ、15…フィルタブロック、16…第1の対物レンズ、17…第1フィルタ、18…ダイクロイックミラー、19…第2フィルタ、20…測定装置本体、21…第2の対物レンズ、30…制御装置、31…信号列抽出部、32…信号列処理部、33…ステージ制御部、34…色収差情報格納部、40…表示装置、51…制御線、60…支持部材、81…マイクロレンズ、100…測定システム、101…前処理装置、102…搬送装置、103…測定装置 DESCRIPTION OF REFERENCE NUMERALS: 1 ... irradiated object, 2 ... light source device, 3 ... optical system, 4 ... stage, 5 ... sensor, 6 ... irradiation optical system, 7 ... imaging optical system, 8 ... microlens array, 10 ... measurement apparatus, 11 ... first lens, 12 ... aperture stop, 13 ... field stop, 14 ... second lens, 15 ... filter block, 16 ... first objective lens, 17 ... first filter, 18 ... dichroic mirror, 19 ... second filter, 20 ... measurement apparatus body, 21 ... second objective lens, 30 ... control device, 31 ... signal sequence extraction unit, 32 ... signal sequence processing unit, 33 ... stage control unit, 34 ... chromatic aberration information storage unit, 40 ... display device, 51 ... control line, 60 ... support member, 81 ... microlens, 100 ... measurement system, 101 ... pretreatment apparatus, 102 ... transport device, 103 ... measurement apparatus

Claims (25)

  1.  第1の励起光及び第2の励起光を被照射体に照射するための光源装置と、
     二次元状に配列された複数のマイクロレンズを備えるマイクロレンズアレイと、
     受光素子が二次元に配列されたセンサであって、前記第1の励起光を前記被照射体に照射したときに発した第1の蛍光と、前記第2の励起光を前記被照射体に照射したときに発した第2の蛍光とを前記マイクロレンズアレイを介して受光するセンサと、
     前記第1の蛍光の合焦点に対応する第1の画素信号の信号列から第1の結果を生成し、前記第2の蛍光の合焦点に対応する第2の画素信号の信号列から第2の結果を生成する制御部と、
    を備える測定装置。
    A light source device for irradiating the irradiated body with the first excitation light and the second excitation light;
    A microlens array comprising a plurality of microlenses arranged two-dimensionally;
    A sensor in which light receiving elements are arranged two-dimensionally, the sensor receiving, via the microlens array, the first fluorescence emitted when the irradiated body is irradiated with the first excitation light and the second fluorescence emitted when the irradiated body is irradiated with the second excitation light;
    A control unit that generates a first result from a signal sequence of a first pixel signal corresponding to a focal point of the first fluorescence, and generates a second result from a signal sequence of a second pixel signal corresponding to a focal point of the second fluorescence;
    A measuring apparatus comprising:
  2.  請求項1に記載の測定装置において、
     前記制御部は、前記第1の蛍光の色収差量及び前記第2の蛍光の色収差量を用いて、複数の異なる焦点位置における画素信号の集合から、前記第1の画素信号の信号列及び前記第2の画素信号の信号列を取得することを特徴とする測定装置。
    The measuring apparatus according to claim 1,
    wherein the control unit acquires the signal sequence of the first pixel signal and the signal sequence of the second pixel signal from a set of pixel signals at a plurality of different focal positions by using the chromatic aberration amount of the first fluorescence and the chromatic aberration amount of the second fluorescence.
  3.  請求項1に記載の測定装置において、
     前記光源装置は、さらに、前記被照射体からの反射光を得るための参照光を放射可能であり、
     前記センサは、前記第1の蛍光と、前記第2の蛍光と、前記反射光とを受光することを特徴とする測定装置。
    The measuring apparatus according to claim 1,
    The light source device can further emit reference light for obtaining reflected light from the irradiated object,
    The sensor receives the first fluorescence, the second fluorescence, and the reflected light.
  4.  請求項3に記載の測定装置において、
     前記光源装置は、前記第1の励起光と前記参照光と前記第2の励起光とを異なる時間で放射し、
     前記制御部は、
     前記複数の異なる焦点位置における前記反射光の画素信号の集合から、参照焦点位置を判定し、
     前記参照焦点位置からの前記第1の蛍光の色収差量を用いて、前記複数の異なる焦点位置における第1の画素信号の集合から、前記第1の画素信号の信号列を取得し、
     前記参照焦点位置からの前記第2の蛍光の色収差量を用いて、前記複数の異なる焦点位置における第2の画素信号の集合から、前記第2の画素信号の信号列を取得することを特徴とする測定装置。
    The measuring device according to claim 3,
    The light source device emits the first excitation light, the reference light, and the second excitation light at different times,
    The controller is
    Determining a reference focal position from a set of pixel signals of the reflected light at the plurality of different focal positions;
    Using the amount of chromatic aberration of the first fluorescence from the reference focal position, obtaining a signal sequence of the first pixel signal from a set of first pixel signals at the plurality of different focal positions;
    A signal sequence of the second pixel signal is obtained from a set of second pixel signals at the plurality of different focal positions using the chromatic aberration amount of the second fluorescence from the reference focal position. Measuring device.
  5.  請求項4に記載の測定装置において、
     前記制御部は、前記反射光の画素信号の集合においてコントラストが最大となる焦点位置を、前記参照焦点位置として判定することを特徴とする測定装置。
    The measuring apparatus according to claim 4, wherein
    The control unit is configured to determine, as the reference focal position, a focal position at which contrast is maximized in a set of pixel signals of the reflected light.
  6.  請求項5に記載の測定装置において、
     前記制御部は、前記反射光の画素信号の集合における前記複数の異なる焦点位置の各々の前記被照射体の領域を複数の領域に分けて、前記複数の領域の少なくとも1つで前記コントラストを算出することを特徴とする測定装置。
    The measuring apparatus according to claim 5, wherein
    The control unit divides a region of the irradiated object at each of the plurality of different focal positions in the set of pixel signals of the reflected light into a plurality of regions, and calculates the contrast in at least one of the plurality of regions. A measuring apparatus characterized by:
  7.  請求項4から6のいずれか一項に記載の測定装置において、
     前記参照光の照射期間は、前記第1の励起光の照射期間と前記第2の励起光の照射期間との間にあることを特徴とする測定装置。
    In the measuring device according to any one of claims 4 to 6,
    The measuring apparatus, wherein the reference light irradiation period is between the first excitation light irradiation period and the second excitation light irradiation period.
  8.  請求項4から7のいずれか一項に記載の測定装置において、
     前記光源装置は、前記第1の励起光の照射期間の前後に前記参照光を照射し、かつ、前記第2の励起光の照射期間の前後に前記参照光を照射し、
     前記制御部は、複数の前記参照光の照射により求められた複数の前記参照焦点位置を比較することを特徴とする測定装置。
    In the measuring device according to any one of claims 4 to 7,
    The light source device irradiates the reference light before and after the irradiation period of the first excitation light, and irradiates the reference light before and after the irradiation period of the second excitation light,
    The control unit compares a plurality of the reference focal positions obtained by irradiating a plurality of the reference lights.
  9.  請求項4から8のいずれか一項に記載の測定装置において、
     前記第1の画素信号の集合は、前記第1の蛍光が出ているであろう時間帯の間に取得された画素信号の集合であり、
     前記第2の画素信号の集合は、前記第2の蛍光が出ているであろう時間帯の間に取得された画素信号の集合であることを特徴とする測定装置。
    In the measuring device according to any one of claims 4 to 8,
    The first set of pixel signals is a set of pixel signals acquired during a time period in which the first fluorescence will be emitted;
    The second set of pixel signals is a set of pixel signals acquired during a time period in which the second fluorescence will be emitted.
  10.  請求項3に記載の測定装置において、
     前記光源装置は、前記第1の励起光と前記参照光と前記第2の励起光とを共に放射し、
     前記制御部は、
     複数の異なる焦点位置における画素信号の集合から、参照焦点位置を判定し、
     前記参照焦点位置からの前記第1の蛍光の色収差量を用いて、前記画素信号の集合から前記第1の画素信号の信号列を取得し、
     前記参照焦点位置からの前記第2の蛍光の色収差量を用いて、前記画素信号の集合から前記第2の画素信号の信号列を取得することを特徴とする測定装置。
    The measuring device according to claim 3,
    The light source device emits the first excitation light, the reference light, and the second excitation light together,
    The controller is
    Determining a reference focus position from a set of pixel signals at a plurality of different focus positions;
    Using the amount of chromatic aberration of the first fluorescence from the reference focal position, obtaining a signal sequence of the first pixel signal from the set of pixel signals,
    A measurement apparatus characterized in that a signal sequence of the second pixel signal is obtained from the set of pixel signals by using a chromatic aberration amount of the second fluorescence from the reference focal position.
  11.  請求項10に記載の測定装置において、
     前記被照射体と前記マイクロレンズアレイとの間に配置され、前記第1の蛍光と前記第2の蛍光とを分離する分離部をさらに備え、
     前記マイクロレンズアレイが、第1のマイクロレンズアレイと、第2のマイクロレンズアレイを含み、
     前記センサが、前記第1のマイクロレンズアレイを介して前記第1の蛍光を受光する第1のセンサと、前記第2のマイクロレンズアレイを介して前記第2の蛍光を受光する第2のセンサとを含むことを特徴とする測定装置。
    The measuring device according to claim 10,
    A separation unit that is disposed between the irradiated body and the microlens array and separates the first fluorescence and the second fluorescence;
    The microlens array includes a first microlens array and a second microlens array;
    A first sensor that receives the first fluorescence via the first microlens array, and a second sensor that receives the second fluorescence via the second microlens array. And a measuring device.
  12.  請求項11に記載の測定装置において、
     前記第1のセンサ及び前記第2のセンサは、前記反射光をさらに受光することを特徴とする測定装置。
    The measuring device according to claim 11,
    The measuring device, wherein the first sensor and the second sensor further receive the reflected light.
  13.  請求項4から12のいずれか一項に記載の測定装置において、
     前記制御部は、
     前記複数の異なる焦点位置の各々の前記被照射体の領域を複数の領域に分けて、前記複数の領域のうち少なくとも3つの領域で前記参照焦点位置を判定し、
     前記複数の領域の各々において、前記参照焦点位置からの前記第1の蛍光の色収差量を用いて、前記第1の画素信号の信号列を取得し、
     前記複数の領域の各々において、前記参照焦点位置からの前記第2の蛍光の色収差量を用いて、前記第2の画素信号の信号列を取得することを特徴とする測定装置。
    In the measuring device according to any one of claims 4 to 12,
    The controller is
    Dividing the area of the irradiated body at each of the plurality of different focal positions into a plurality of areas, and determining the reference focal position in at least three of the plurality of areas;
    In each of the plurality of regions, using the amount of chromatic aberration of the first fluorescence from the reference focal position, obtain a signal sequence of the first pixel signal,
    In each of the plurality of regions, the measurement apparatus is configured to acquire a signal string of the second pixel signal by using a chromatic aberration amount of the second fluorescence from the reference focal position.
  14.  請求項13に記載の測定装置において、
     前記制御部は、前記少なくとも3つの領域の各々において、コントラストが最大となる焦点位置を、前記参照焦点位置として判定することを特徴とする測定装置。
    The measuring device according to claim 13, wherein
    The control device determines a focal position where the contrast is maximum in each of the at least three regions as the reference focal position.
  15.  請求項13又は14に記載の測定装置において、
     前記制御部は、前記少なくとも3つの領域で判定された前記参照焦点位置を用いて、前記被照射体の前記複数の領域の全てについて前記参照焦点位置を判定することを特徴とする測定装置。
    The measuring device according to claim 13 or 14,
    The control unit determines the reference focal position for all of the plurality of regions of the irradiated object using the reference focal position determined in the at least three regions.
  16.  請求項3から15のいずれか一項に記載の測定装置において、
     複数の前記被照射体が配置された支持部材を支持するステージをさらに備え、
     前記制御部は、前記反射光の画素信号の集合の中に所定のしきい値よりも大きい信号値の画素信号がない場合、前記ステージの高さを変更することを特徴とする測定装置。
    In the measuring device according to any one of claims 3 to 15,
    Further comprising a stage for supporting a support member on which a plurality of the irradiated objects are disposed;
    The control unit changes the height of the stage when there is no pixel signal having a signal value larger than a predetermined threshold in the set of pixel signals of the reflected light.
  17.  請求項13から15のいずれか一項に記載の測定装置において、
     複数の前記被照射体が配置された支持部材を支持するステージをさらに備え、
     前記制御部は、前記少なくとも3つの領域で判定された前記参照焦点位置に対応する画素信号の信号値を用いて、前記ステージが傾斜しているかを判定することを特徴とする測定装置。
    In the measuring device according to any one of claims 13 to 15,
    Further comprising a stage for supporting a support member on which a plurality of the irradiated objects are disposed;
    The measurement apparatus, wherein the control unit determines whether the stage is tilted using a signal value of a pixel signal corresponding to the reference focal position determined in the at least three regions.
  18.  請求項17に記載の測定装置において、
     前記制御部は、前記ステージが傾斜していると判定した場合、前記ステージの傾斜を変更することを特徴とする測定装置。
    The measuring device according to claim 17,
    When the control unit determines that the stage is tilted, the control unit changes the tilt of the stage.
  19.  請求項13から15のいずれか一項に記載の測定装置において、
     前記制御部は、
     前記少なくとも3つの領域の各々で判定された前記参照焦点位置から、前記被照射体の傾きを算出し、
     前記傾きに応じて、前記被照射体の前記複数の領域の全てについて前記参照焦点位置を判定することを特徴とする測定装置。
    In the measuring device according to any one of claims 13 to 15,
    The controller is
    From the reference focal position determined in each of the at least three regions, the inclination of the irradiated object is calculated,
    The measurement apparatus characterized by determining the reference focal position for all of the plurality of regions of the irradiated object according to the inclination.
  20.  請求項19に記載の測定装置において、
     複数の前記被照射体が配置された支持部材を支持するステージをさらに備え、
     前記制御部は、前記傾きを表す値が所定の値よりも大きい場合、前記ステージの高さを変更することを特徴とする測定装置。
    The measuring device according to claim 19,
    Further comprising a stage for supporting a support member on which a plurality of the irradiated objects are disposed;
    The control unit is configured to change the height of the stage when a value representing the inclination is larger than a predetermined value.
  21.  請求項1から20のいずれか一項に記載の測定装置において、
     前記第1の結果は、前記第1の画素信号の信号値を表す情報と、前記第1の画素信号に対応する前記被照射体の位置情報とを含み、
     前記第2の結果は、前記第2の画素信号の信号値を表す情報と、前記第2の画素信号に対応する前記被照射体の位置情報とを含むことを特徴とする測定装置。
    The measuring apparatus according to any one of claims 1 to 20,
    The first result includes information indicating a signal value of the first pixel signal, and position information of the irradiated object corresponding to the first pixel signal,
    The measurement apparatus characterized in that the second result includes information indicating a signal value of the second pixel signal and position information of the irradiated object corresponding to the second pixel signal.
  22.  請求項1から21のいずれか一項に記載の測定装置において、
     前記第1の結果は、前記第1の画素信号の信号列から生成された第1の画像を含み、前記第2の結果は、前記第2の画素信号の信号列から生成された第2の画像を含むことを特徴とする測定装置。
    In the measuring device according to any one of claims 1 to 21,
    The first result includes a first image generated from the signal sequence of the first pixel signal, and the second result includes a second image generated from the signal sequence of the second pixel signal.
  23.  請求項1から22のいずれか一項に記載の測定装置と、
     支持部材に支持された前記被照射体を検体と反応させる反応装置と、
     前記支持部材を前記測定装置に搬送する搬送装置と、
    を備えることを特徴とする測定システム。
    A measuring device according to any one of claims 1 to 22,
    A reaction device for reacting the irradiated object supported by a support member with a specimen;
    A transport device for transporting the support member to the measuring device;
    A measurement system comprising:
  24.  複数の被照射体が配置された支持部材をステージに配置するステップと、
     光源装置によって、第1の励起光及び第2の励起光を前記被照射体に照射するステップと、
     前記第1の励起光を前記被照射体に照射したときに発した第1の蛍光と、前記第2の励起光を前記被照射体に照射したときに発した第2の蛍光とを、二次元状に配列された複数のマイクロレンズを備えるマイクロレンズアレイを介して、受光素子が二次元に配列されたセンサで受光するステップと、
     前記第1の蛍光の合焦点に対応する第1の画素信号の信号列から第1の結果を生成し、前記第2の蛍光の合焦点に対応する第2の画素信号の信号列から第2の結果を生成するステップと、
    を含む信号列処理方法。
    Arranging a support member on which a plurality of irradiated objects are arranged on a stage;
    Irradiating the irradiated body with first excitation light and second excitation light by a light source device;
    Receiving, with a sensor in which light receiving elements are arranged two-dimensionally, via a microlens array including a plurality of microlenses arranged two-dimensionally, the first fluorescence emitted when the irradiated body is irradiated with the first excitation light and the second fluorescence emitted when the irradiated body is irradiated with the second excitation light;
    Generating a first result from a signal sequence of a first pixel signal corresponding to a focal point of the first fluorescence, and generating a second result from a signal sequence of a second pixel signal corresponding to a focal point of the second fluorescence;
    A signal sequence processing method including:
  25.  演算部及び記憶部を少なくとも備える情報処理装置に、マイクロレンズアレイを介して受光した光の信号列を処理させるためのプログラムであって、前記光の信号列は、複数の異なる焦点位置における画素信号の集合であって、第1の蛍光が出ているであろう時間帯の間に取得された画素信号と第2の蛍光が出ているであろう時間帯の間に取得された画素信号とを含み、
     前記演算部に、
     前記複数の異なる焦点位置における画素信号の集合から、前記第1の蛍光の合焦点に対応する第1の画素信号の信号列及び前記第2の蛍光の合焦点に対応する第2の画素信号の信号列を取得する処理と、
     前記第1の画素信号の信号列から第1の結果を生成し、前記第2の画素信号の信号列から第2の結果を生成する処理と、
    を実行させるためのプログラム。
    A program for causing an information processing apparatus including at least a calculation unit and a storage unit to process a signal sequence of light received via a microlens array, the signal sequence of light being a set of pixel signals at a plurality of different focal positions and including pixel signals acquired during a time period in which first fluorescence will be emitted and pixel signals acquired during a time period in which second fluorescence will be emitted,
    In the calculation unit,
    Acquiring, from the set of pixel signals at the plurality of different focal positions, a signal sequence of a first pixel signal corresponding to a focal point of the first fluorescence and a signal sequence of a second pixel signal corresponding to a focal point of the second fluorescence;
    Generating a first result from the signal sequence of the first pixel signal and generating a second result from the signal sequence of the second pixel signal;
    A program for causing the calculation unit to execute the above processes.
PCT/JP2015/060280 2015-03-31 2015-03-31 Measurement apparatus, measurement system, signal string processing method, and program WO2016157458A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/060280 WO2016157458A1 (en) 2015-03-31 2015-03-31 Measurement apparatus, measurement system, signal string processing method, and program

Publications (1)

Publication Number Publication Date
WO2016157458A1 true WO2016157458A1 (en) 2016-10-06

Family

ID=57004182

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/060280 WO2016157458A1 (en) 2015-03-31 2015-03-31 Measurement apparatus, measurement system, signal string processing method, and program

Country Status (1)

Country Link
WO (1) WO2016157458A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003067230A1 (en) * 2002-02-07 2003-08-14 Fuji Electric Holdings Co.,Ltd. Fluorescent image measuring method and device
JP2006301541A (en) * 2005-04-25 2006-11-02 Olympus Corp Scanning type fluorescence observation apparatus
JP2009036764A (en) * 2007-07-10 2009-02-19 National Institute Of Advanced Industrial & Technology Two-wavelength simultaneous observation optical device
JP2010054320A (en) * 2008-08-28 2010-03-11 Nikon Corp Shape measuring apparatus, method, and program
JP2011109310A (en) * 2009-11-16 2011-06-02 Nikon Corp Image synthesizing device and imaging device
JP2013105177A (en) * 2011-11-11 2013-05-30 Leica Microsystems Cms Gmbh Microscopic device and method for three-dimensional localization of punctiform object in sample
JP2014115448A (en) * 2012-12-10 2014-06-26 Olympus Corp Microscope

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210199587A1 (en) * 2019-12-31 2021-07-01 Illumina, Inc. Autofocus functionality in optical sample analysis.
US11815458B2 (en) * 2019-12-31 2023-11-14 Illumina, Inc. Autofocus functionality in optical sample analysis

Similar Documents

Publication Publication Date Title
KR101022769B1 (en) Optical dectecting apparatus for bio-chip
US6355934B1 (en) Imaging system for an optical scanner
JP7003241B2 (en) Paste slide judgment system
US8686376B2 (en) Microarray characterization system and method
EP1223421B1 (en) Biochip reader
JP2013092393A (en) Chemical sensor, biomolecule detection device, and biomolecule detection method
WO2015087824A1 (en) Optical apparatus, measuring apparatus, measuring method, screening apparatus, and screening method
JP2022084889A (en) Slide inventory and reinsertion system
US7645971B2 (en) Image scanning apparatus and method
JP2007003323A (en) Photographing system
US6870166B2 (en) Maximum sensitivity optical scanning system
WO2016157458A1 (en) Measurement apparatus, measurement system, signal string processing method, and program
US8742384B2 (en) Optical illumination apparatus and method having a reflective arrangement with moveable components for adjusting incident light
US6919201B2 (en) Biochip measuring method and measuring equipment
US7042565B2 (en) Fluorescent microarray analyzer
JP2003028798A (en) Fluorescence acquisition device
JP2004184379A (en) Method of reading microarray
AU2020206325A1 (en) Laser line illumination using combined single-mode and multi-mode laser sources
JP5164713B2 (en) Detection apparatus and method
WO2021117153A1 (en) Fluorescence detection device and fluorescence detection method
JP2008039605A (en) Fluorescence detector
JP2005181014A (en) Image reading apparatus and image reading method
JP2007232613A (en) Fluorescence detection device
JP2006030304A (en) Focus detector for microscope
Guse et al. Sophisticated lenses for microarray analysis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15887610

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15887610

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP