WO2020020148A1 - Imaging method, device and system - Google Patents

Imaging method, device and system

Info

Publication number
WO2020020148A1
WO2020020148A1 (PCT/CN2019/097272; CN2019097272W)
Authority
WO
WIPO (PCT)
Prior art keywords
lens
image
value
light
light intensity
Prior art date
Application number
PCT/CN2019/097272
Other languages
English (en)
French (fr)
Inventor
李林森
孙瑞涛
徐家宏
周志良
姜泽飞
颜钦
Original Assignee
深圳市真迈生物科技有限公司
Priority date
Filing date
Publication date
Priority claimed from CN201810813660.XA (CN112291469A)
Priority claimed from CN201810814359.0A (CN112333378A)
Application filed by 深圳市真迈生物科技有限公司
Priority to EP19841635.6A (EP3829158A4)
Priority to US17/262,663 (US11368614B2)
Publication of WO2020020148A1
Priority to US17/746,838 (US11575823B2)

Classifications

    • G02B 7/38: Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals, measured at different points on the optical axis, e.g. focusing on two or more planes and comparing image data
    • G02B 21/242: Microscopes; base structure; devices for focusing with coarse and fine adjustment mechanism
    • G02B 21/244: Microscopes; base structure; devices for focusing using image analysis techniques
    • G02B 21/245: Microscopes; base structure; devices for focusing using auxiliary sources, detectors
    • G02B 21/365: Control or image processing arrangements for digital or video microscopes
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/671: Focus control based on electronic image sensor signals in combination with active ranging signals, e.g. using light or sound signals emitted toward objects
    • H04N 23/673: Focus control based on contrast or high frequency components of image signals, e.g. hill climbing method
    • H04N 23/959: Computational photography systems, e.g. light-field imaging systems, for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics

Definitions

  • the invention relates to the field of optical detection, and in particular to an imaging method, device, and system.
  • when shooting, the camera quickly adjusts the focal length each time to find the sharpest focal plane and obtain a clear picture. This process is called focus tracking.
  • focus tracking on a subject may fail because of stray light, or dust or scratches on the surface. If focus tracking fails and the camera cannot refocus, the image will be blurred.
  • when the camera is used for sequence determination and the object is nucleic acid molecules located in a chip, air bubbles or large clusters of fluorescent impurities in the liquid inside the chip, or dust and scratches on the chip surface, can easily cause the camera's focus tracking to fail.
  • Sequencing platforms that obtain nucleic acid information based on imaging include the process of photographing nucleic acids placed in a reactor using an imaging system.
  • the reactor is also called a chip (Flowcell).
  • the chip may include one or more parallel channels. The channels admit and carry reagents to create the environment required for the sequencing reaction.
  • the chip can be made of two pieces of glass bonded.
  • the sequencing process includes multiple rounds of photographing fixed areas of the chip. The area of each shot can be called an FOV (field of view), and each round of photographing can be called a cycle; reagents are re-introduced between two cycles for the chemical reactions.
  • the camera can successfully focus most of the time, that is, find the clearest focal plane position.
  • focus tracking fails.
  • Figures 1-3 show the data of successful and abnormal or unsuccessful focus tracking in the inventor's experiments.
  • the abscissa is the serial number of the FOV
  • the first half of the FOV is shot sequentially from the left to the right of the Flowcell.
  • the second half of the FOV is taken from right to left after wrapping.
  • the ordinate is the height of the microscope objective relative to the camera, that is, the Z value, in μm.
  • a negative value indicates that the microscope objective is located below the camera; the larger the absolute value of the Z value, the farther the objective is from the camera.
  • Figure 1 shows the Z-value curve corresponding to 300 FOV images after successful focus tracking
  • Figure 2 shows the Z-value curve corresponding to a partial focus tracking abnormality (reflected as a partial Z-value abnormality) in 200 FOV images.
  • in the abnormal part of the curve, that is, the convex part, the corresponding image is unclear/blurred.
  • the inventors analyzed a large amount of data on successful and abnormal focusing and found that, with the objective lens fixed, the multiple Z-value curves corresponding to the same normally focused FOVs in different cycles (that is, at different times) exhibit a certain pattern. Figure 4 shows the Z-value curves corresponding to clear pictures of 300 FOVs in four different cycles.
  • the same position may have different focal planes in different cycles, but its focal plane position relative to the other FOVs in the same cycle is basically unchanged. That is, for a given physical location, the focal planes of different FOVs in the same cycle are correlated.
  • the inventors have developed a set of algorithms that give the camera a focal plane prediction capability without additional hardware.
  • a relationship (such as a first predetermined relationship) is obtained by linear fitting, and the relationship is used to predict the focal plane positions of the other FOVs in the row.
  • the basic offset b can be calculated from the overall focal plane difference (for example, from one end of the track to the other end of the track).
  • cyc1FOVZ(r) and cyc1FOVZ(l) are obtained by focusing.
  • cyc1FOVZ(r) and cyc1FOVZ(l) represent the focal plane position Z values of two objects (which can be called two positions or two FOVs) at one end and the other end of one track in cycle 1.
  • the offset b = (cyc1FOVZ(r) − cyc1FOVZ(l)) / FOVNum, where FOVNum is the number of FOVs between the two positions cyc1FOVZ(r) and cyc1FOVZ(l) (formula (b)).
  • cyc1FOVZ(n) and cyc1FOVZ(n+1) represent two adjacent positions (FOVs), with cyc1FOVZ(n+1) relatively closer to cyc1FOVZ(r); cyc1FOVZ(n) can be obtained by focusing.
  • the determination of b can be based on the focal plane information of two FOVs on the same track.
  • the determined formula (b) and the focal plane coordinate information of any focused FOV can also be used: given the determined relationship (formula (b)) and the determined cyc1FOVZ(r) and cyc1FOVZ(l), cyc1FOVZ(n+1) can be obtained as cyc1FOVZ(n) + b.
  • the focal plane position of any FOV of the current cycle can be predicted based on the determined linear relationship and the focal plane position of any other FOV of the current cycle. For example, to use the focal plane position of FOV(n) (the FOV at the nth position) in the current cycle to predict that of FOV(n+1) (the FOV at the (n+1)th position) in the same cycle, substitute the Z value of the nth FOV, curFOVZ(n), into formula (b) as the independent variable; the obtained y is curFOVZ(n+1).
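As an illustrative sketch (not part of the patent text; the function names and the sample Z values are hypothetical), the offset b of formula (b) and the within-cycle prediction described above can be written as:

```python
def focal_step(z_left, z_right, fov_num):
    """Per-FOV focal-plane offset b (formula (b)), from the focused Z
    values of the two end FOVs of a track and the number of FOVs
    between them."""
    return (z_right - z_left) / fov_num

def predict_next_z(z_n, b):
    """Predict the Z value of FOV(n+1) from the focused Z value of
    FOV(n) on the same track in the same cycle."""
    return z_n + b

# Hypothetical cycle-1 data: focus only the two end FOVs of a track.
cyc1_z_l, cyc1_z_r, fov_num = -49.0, -46.0, 300
b = focal_step(cyc1_z_l, cyc1_z_r, fov_num)  # 0.01 um per FOV

# Predict every FOV position on the track without focusing.
predicted = [cyc1_z_l + n * b for n in range(fov_num + 1)]
```

Only the two end FOVs need actual focusing; every other FOV on the track receives its Z value from the linear relation.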
  • the prediction can also be based on the determined linear relationship, the focal plane positions of two FOVs in a previous cycle, and the focal plane position of one of the same two FOVs in the current cycle, which together predict the focal plane position of the other FOV in the current cycle.
  • for example, if formula (b) was determined in a previous cycle, then to use the focal plane position of FOV(n) in the current cycle (the FOV at the nth position) to predict the focal plane position of FOV(n+1) (the FOV at the (n+1)th position), the focal plane positions of FOV(n) and FOV(n+1) in the previous cycle can be determined from formula (b), and their difference applied to the current cycle's FOV(n).
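The cross-cycle variant above can be sketched as follows (a hedged illustration with hypothetical names and values, not the patent's reference implementation): the previous cycle's Z offset between two adjacent FOVs is transferred to the current cycle.

```python
def predict_from_previous_cycle(prev_z_n, prev_z_n1, cur_z_n):
    """Predict curFOVZ(n+1): apply the previous cycle's focal-plane
    offset between FOV(n) and FOV(n+1) to the current cycle's
    focused Z value of FOV(n)."""
    return cur_z_n + (prev_z_n1 - prev_z_n)

# Hypothetical data: previous-cycle Z values for FOV(n) and FOV(n+1),
# and the focused Z value of FOV(n) in the current cycle.
prev_z_n, prev_z_n1 = -48.20, -48.18
cur_z_n = -48.35
cur_z_n1 = predict_from_previous_cycle(prev_z_n, prev_z_n1, cur_z_n)
```

This relies on the observation above that the relative focal plane positions between FOVs are basically unchanged across cycles, even when the absolute focal planes shift.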
  • first preset track may be a straight line or a curve. Any curve can be seen as a fit of multiple line segments.
  • the first preset track of the curve type is regarded as a set of line segments.
  • a first preset relationship including a set of linear relationships can be established to realize the focus plane position prediction of an object on the track without focusing.
  • the imaging method of this embodiment can be used to return the camera to the vicinity of the focal plane and start taking pictures.
  • the present invention provides an imaging method, an imaging device, an imaging system, and a sequencing system.
  • An imaging method uses an imaging system to image an object.
  • the imaging system includes a lens.
  • the object includes a first object, a second object, and a third object at different positions on a first preset track.
  • the imaging method includes: moving the lens and the first preset track relative to each other according to a first predetermined relationship, so as to obtain a clear image of the third object using the imaging system without focusing.
  • the first predetermined relationship is determined by the focal plane position of the first object and the focal plane position of the second object.
  • An imaging system is for imaging an object.
  • the imaging system includes a lens and a control device.
  • the object includes a first object, a second object, and a third object at different positions on a first preset track.
  • the control device is used for: moving the lens and the first preset track relative to each other according to a first predetermined relationship, so as to obtain a clear image of the third object using the imaging system without focusing.
  • the first predetermined relationship is determined by the focal plane position of the first object and the focal plane position of the second object.
  • since the focal plane can be predicted directly according to the first predetermined relationship, a clear image of the third object can be obtained without focusing; this is particularly suitable for situations where there are a large number of objects and images of these objects are to be obtained quickly and continuously.
  • this method has high imaging efficiency. Even if the imaging system fails to follow focus, it can still accurately determine the focal plane positions of subsequent objects and obtain their image information during continuous image acquisition; used together with the focusing system built into the imaging system, it allows imaging to proceed even when the built-in focus tracking system fails to refocus normally.
  • a sequencing apparatus includes the imaging system of the above-mentioned embodiment.
  • a computer-readable storage medium is configured to store a program for execution by a computer, and the execution of the program includes steps of completing the method of the foregoing embodiment.
  • the computer-readable storage medium may include: a read-only memory, a random access memory, a magnetic disk, or an optical disk.
  • An imaging system is used to image an object.
  • the imaging system includes a lens and a control device.
  • the object includes a first object, a second object, and a third object located at different positions on a first preset track.
  • the control device includes a computer-executable program, and execution of the program includes the steps of completing the method of the foregoing embodiment.
  • a computer program product includes instructions. When the instructions are executed by a computer, the instructions cause the computer to execute the steps of the method in the foregoing embodiment.
  • FIG. 1 is a graph of Z-values corresponding to successful focus tracking during sequence determination.
  • FIG. 2 is a graph of the Z values corresponding to focus-tracking failure during sequence measurement, with the abnormal convex portions marking the failed FOVs.
  • FIG. 3 is a Z-value graph of a case where focus tracking failed during sequence measurement and refocusing did not succeed before the end of the cycle's photographing, even after the interference disappeared.
  • FIG. 4 is a schematic diagram of different focus positions formed by the focus data of an object during sequential measurement.
  • FIG. 5 is a schematic structural diagram of a first preset track and a second preset track according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an in-focus position formed by the in-focus data of an object without interference during a sequence measurement.
  • FIG. 7 is a schematic diagram of an in-focus position formed by the in-focus data of the subject when the re-focusing is successfully performed with interference during sequence measurement.
  • FIG. 8 is a schematic diagram of an in-focus position formed by the in-focus data of an object when the focus cannot be re-focused during the sequence measurement.
  • FIG. 9 is a schematic flowchart of a focusing method according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of a positional relationship between a lens and an object according to an embodiment of the present invention.
  • FIG. 11 is a partial structural diagram of an imaging system according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of a connected region of an image according to an embodiment of the present invention.
  • FIG. 13 is another schematic flowchart of a focusing method according to an embodiment of the present invention.
  • FIG. 14 is another schematic flowchart of a focusing method according to an embodiment of the present invention.
  • FIG. 15 is another schematic flowchart of a focusing method according to an embodiment of the present invention.
  • FIG. 16 is another schematic flowchart of a focusing method according to an embodiment of the present invention.
  • "connection" should be understood in a broad sense: it may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection, an electrical connection, or mutual communication; it may be a direct connection or an indirect connection through an intermediate medium; and it may be an internal connection between two elements or an interaction between two elements.
  • "invariant", which involves, for example, distance, object distance, and/or relative position, can refer to a value, value range, or quantity that is absolutely constant or relatively constant. Relatively constant means staying within a certain deviation range or a preset acceptable range. Unless otherwise stated, "invariant" involving distance, object distance, and/or relative position means relatively constant.
  • the following disclosure provides multiple implementations or examples for implementing the technical solutions of the present invention.
  • the present invention may repeat reference numerals and/or reference letters in different examples; such repetition is for simplicity and clarity and does not itself indicate a relationship between the various embodiments and/or settings discussed.
  • sequence determination is the same as nucleic acid sequence determination, and includes DNA sequencing and / or RNA sequencing, including long-sequence sequencing and / or short-sequence sequencing.
  • the sequence determination reaction is the same as the sequencing reaction.
  • An embodiment of the present invention provides an imaging method, which uses an imaging system to image an object.
  • the imaging system includes a lens 104, and the object includes a first object 42, a second object 44, and a third object 46 located at different positions of the first preset track 43.
  • the imaging method includes: moving the lens 104 and the first preset track 43 relative to each other according to a first predetermined relationship, so as to obtain an image of the third object 46 using the imaging system without focusing.
  • the first predetermined relationship is determined by the focal plane position of the first object 42 and the focal plane position of the second object 44.
  • since the focal plane can be predicted directly according to the first predetermined relationship, a clear image of the third object can be obtained without focusing; this is particularly suitable for situations where there are a large number of objects and images of these objects are to be obtained quickly and continuously.
  • this method has high imaging efficiency. Even if the imaging system fails to follow focus, it can still accurately determine the focal plane positions of subsequent objects and obtain their image information during continuous image acquisition; used together with the focusing system built into the imaging system, it allows imaging to proceed even when the built-in focus tracking system fails to refocus normally.
  • the first preset track 43 may be a linear track, and the first object 42 and the second object 44 are located at two positions on the linear track, for example at its two ends. It can be understood that there may be multiple third objects 46; the plurality of third objects 46 are sequentially arranged on the first preset track 43, and each third object 46 is located between the first object 42 and the second object 44. In other examples, the third object 46 may be located at a position different from the positions of the first object 42 and the second object 44.
  • the first preset track 43 may be a non-linear track, such as a curved track. The curved track may be regarded as a fit of multiple line segments, and the first object, the second object, and the third object are located on the same line segment of the curved track.
  • the first predetermined relationship may be a linear relationship.
  • the first preset track 43 is one or more channels 52 of the chip 500 used in the sequencing process
  • the imaged third object 46 is located in the channel 52.
  • when photographing one or more positions (FOVs), the lens and the first preset track 43 can be moved relative to each other along the first direction A; for example, the lens 104 is fixed, the lens 104 includes an optical axis OP, and the first preset track 43 moves in the direction perpendicular to the optical axis OP. It can be understood that, in some embodiments, the first preset track 43 can move in a direction parallel to the optical axis OP; the first preset track 43 can be moved according to actual adjustment needs.
  • the imaging system includes a camera 108, and the lens 104 can be mounted on the camera 108.
  • the camera 108 collects light passing through the lens 104 for imaging.
  • moving the lens 104 relative to the first preset track 43 includes at least one of the following: fixing the lens 104 and moving the first preset track 43; fixing the first preset track 43 and moving the lens 104; Move the lens 104 and the first preset track 43 at the same time.
  • the moving manners of the lens 104 and the first preset track 43 are various and have strong adaptability, which increases the application range of the imaging method.
  • when the first preset track 43 is moved, it can be placed on a stage, and the stage can translate the first preset track 43 and the object back and forth in a direction perpendicular to the optical axis OP of the lens 104 to position one of the third objects 46 below the lens 104, so that the imaging system images that third object 46.
  • when the lens 104 is moved, it can be mounted on a driving mechanism, and the driving mechanism can drive the lens 104 back and forth, electrically or manually, in a direction perpendicular to the optical axis OP of the lens 104, so that the lens 104 moves above one of the third objects 46 and the imaging system images the object.
  • moving the lens 104 and the first preset track 43 at the same time can be understood as: moving the lens 104 first and then the first preset track 43 so that one of the third objects 46 is located below the lens 104; moving the first preset track 43 first and then the lens 104 so that the lens 104 is located above one of the third objects 46; or moving the first preset track 43 while moving the lens 104 so that the lens is located above one of the third objects 46.
  • the determination of the first predetermined relationship includes: using the imaging system to focus on the first object 42 to determine the first coordinates; using the imaging system to focus on the second object 44 to determine the second coordinates; according to the first The coordinates and the second coordinates establish a first predetermined relationship, the first coordinates reflect the focal plane position of the first object 42, and the second coordinates reflect the focal plane position of the second object 44.
  • the first predetermined relationship can be determined in advance; when imaging other objects, a clear image of them can then be obtained using the imaging system without focusing according to the first predetermined relationship, simplifying the imaging method and improving imaging efficiency.
  • the imaged third object 46 is one or more positions of the chip 500 used for sequence determination
  • the first object 42, the second object 44, and the third object 46 may be located in the same channel of the chip 500.
  • the first object 42, the third object 46 and the second object 44 are sequentially arranged on the first preset track 43.
  • the first direction A is the left-to-right direction of the chip 500; that is, along the left-to-right direction A of the chip 500, the first object 42, the third object 46, and the second object 44 are sequentially arranged on the first preset track 43.
  • the first object 42, the third object 46, and the second object 44 may be arranged on the first preset track 43 in other orders.
  • when determining the first predetermined relationship, two objects can be selected on the first preset track 43, the first object 42 and the second object 44, for focusing, so as to obtain the in-focus positions of the two objects.
  • the first predetermined relationship may be determined by focusing the first object 42 and the second object 44 to obtain the focal plane coordinate data of the first object 42 and the second object 44.
  • the first object 42 and the second object 44 may be the start and end FOVs of the first preset track in a cycle (that is, within the same time period), such as the two ends of the same line in the same channel, as shown in Figure 5.
  • the third object 46 may be any one or more FOVs between the first object 42 and the second object 44. It can be understood that, based on the above rules, the first object 42 and the second object 44 may also be FOVs at other positions, and the third object 46 does not need to be located between the first object 42 and the second object 44: following the rule that two points determine a straight line, any two positions (objects) on the first preset track can be selected, the focal plane position corresponding to each position obtained, and from these focal plane positions the first predetermined relationship of the first preset track 43 established, which the imaging system can then use to obtain an image of the third object without focusing.
  • a coordinate system can be established to digitize / quantify the relative positional relationship including the so-called focal plane position.
  • xy can be used to indicate the location of the first / second preset track.
  • the plane of the lens and the direction of the optical axis of the objective lens are used to establish a three-dimensional coordinate system.
  • the focal plane position of each position includes the focal plane Z value.
  • the mentioned cycle reflects the influence of time factor / image acquisition cycle.
  • for a high-precision imaging system, such as a microscope system with a 60x objective lens and a depth of field of 200 nm, the fluctuation caused by one or more back-and-forth mechanical movements of the platform carrying the preset track is likely to exceed the depth of field; it is therefore preferable to use the imaging method of the above or any of the following embodiments, which has higher accuracy.
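The role of the 200 nm depth of field can be illustrated with a simple tolerance check (a hedged sketch, not from the patent; the half-DOF acceptance window and the function name are assumptions made for illustration):

```python
DEPTH_OF_FIELD_UM = 0.2  # 200 nm depth of field, in micrometers

def within_depth_of_field(z_predicted, z_true, dof=DEPTH_OF_FIELD_UM):
    """Assume an image is acceptably sharp only if the predicted
    focal-plane Z deviates from the true focal plane by no more than
    half the depth of field (an illustrative acceptance window)."""
    return abs(z_predicted - z_true) <= dof / 2
```

With such a shallow depth of field, even sub-micrometer stage fluctuations fail this check, which is why the prediction must be accurate.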
  • the C5 curve in FIG. 6 to FIG. 8 is the Z-value curve obtained from the camera's actual shooting results (a focal plane line formed by actual in-focus positions), shooting with only the camera's focus tracking.
  • the C6 curve is a predicted Z value curve (a focal plane line formed by predicting a focus position).
  • Figure 6 shows the Z-value predictions of multiple FOVs of a cycle in the interference-free state.
  • Figures 7 and 8 show the Z-value predictions in the case of interference and defocusing: in Figure 7, refocusing was successful; in Figure 8, refocusing could not be performed after defocusing.
  • the object includes a fourth object 47 and a fifth object 48 located at different positions of the second preset track 45.
  • the imaging method includes: moving the lens 104 and the second preset track 45 relative to each other according to a second predetermined relationship.
  • the second predetermined relationship is determined by the focal plane position of the fourth object 47 and the first predetermined relationship; the second preset track 45 is different from the first preset track 43.
  • the second preset track 45 may be a track adjacent to the first preset track 43.
  • the second preset track 45 is a parallel channel adjacent to the first preset track 43
  • the second preset track 45 may be a linear track.
  • the fourth object 47 and the fifth object 48 are located at two positions of the linear track.
  • the fourth object 47 is located at one end of the linear track and the fifth object 48 is located in the middle of the linear track.
  • the number of the fifth object 48 may be multiple, and the plurality of fifth objects 48 are sequentially arranged on the second preset track 45, and the fifth object 48 is located at a position different from the fourth object 47.
  • the second preset track 45 may be a non-linear track, such as a curved track; the curved track may be regarded as a fit of multiple line segments, and the fourth object 47 and the fifth object 48 are located on the same line segment of the curved track.
  • the second predetermined relationship may be a linear relationship.
  • the second preset track 45 is one or more channels 52 of the chip 500 used in the sequencing process
  • the imaged fifth object 48 is located in the channel 52.
  • the lens and the second preset track can be relatively moved in the second direction B.
  • for example, the lens is fixed, the lens includes an optical axis, and the second preset track 45 moves in the direction perpendicular to the optical axis.
  • the second preset track 45 can move in a direction parallel to the optical axis OP.
  • the second preset track 45 can be moved according to the actual adjustment needs.
  • determining the second predetermined relationship includes: using the imaging system to focus the fourth object 47 to determine a fourth coordinate; and establishing the second predetermined relationship according to the first predetermined relationship and the fourth coordinate, where the fourth coordinate reflects the focal plane position of the fourth object 47.
  • the second predetermined relationship can be determined in advance; when imaging other objects, a clear image of them can then be obtained using the imaging system without focusing according to the second predetermined relationship, simplifying the imaging method and improving imaging efficiency.
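One plausible reading of "established according to the first predetermined relationship and the fourth coordinate" is that the second track reuses the first track's per-FOV offset, while the single focused FOV (the fourth object) fixes the new track's starting Z value. The sketch below is an illustration under that assumption (all names and values hypothetical; whether the offset keeps its sign depends on the scan direction, and the sketch assumes indices increase along the scan):

```python
def second_track_relation(b_first, z4, n4=0):
    """Build the second track's linear relation from the first track's
    per-FOV offset b and one focused FOV on the second track: the
    fourth object's Z value z4 at index n4 fixes the intercept."""
    def predict(n):
        return z4 + (n - n4) * b_first
    return predict

# Hypothetical: offset b taken from the first track; fourth object
# focused at index 0 of the second track.
predict2 = second_track_relation(b_first=0.01, z4=-48.90)
```

Only one focusing operation is then needed per additional track, instead of two.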
  • the fourth object 47 and the plurality of fifth objects 48 can be located in the same channel of the chip 500.
  • the fourth object 47 and the plurality of fifth objects 48 are sequentially arranged on the second preset track 45.
  • the second direction B is the right-to-left direction of the chip 500; that is, in the right-to-left direction B of the chip 500, the fourth object 47 and the plurality of fifth objects 48 are sequentially arranged on the second preset track 45.
  • the fourth object 47 and the fifth object 48 may be arranged on the second preset track 45 in other orders.
  • determination of the second predetermined relationship may refer to the foregoing explanation of the determination of the first predetermined relationship. To avoid redundancy, it is not described in detail here.
  • the imaging method includes: after acquiring the image of the third object 46, moving the lens 104 relative to the first preset track 43 and / or the second preset track 45 to acquire an image of the fifth object 48 with the imaging system without focusing. In this way, after the image of the third object 46 on the first preset track 43 is acquired, the image of the fifth object 48 on the second preset track 45 can be acquired, implementing imaging of objects on different preset tracks.
  • the third direction C, that is, the direction perpendicular to the extending direction of the channels 52
  • the lens 104 and the chip 500 are moved relative to each other, so that the lens 104 is positioned above the fifth object 48, and then the imaging system is used to acquire an image of the fifth object 48 without focusing according to a second predetermined relationship.
  • the third direction C is perpendicular to the first direction A and the second direction B.
  • the first preset track 43 and the second preset track 45 are alternately spaced from top to bottom.
  • the lens 104 is moved relative to the chip 500 so that the lens 104 is positioned above a fifth object 48 of the first second preset track 45, and then images of one or more fifth objects 48 of that second preset track 45 are acquired.
  • the lens 104 and the chip 500 are moved relative to each other so that the lens 104 is positioned above the third object 46 of the second first preset track 43, and then images of the third object 46 of that track are obtained, until clear images of all objects on the first preset tracks 43 and the second preset tracks 45 have been acquired.
  • since the focal plane position (such as the Z value) of the object to be imaged is predicted based on the first predetermined relationship or the second predetermined relationship, imaging of other FOVs is performed without focusing, which improves imaging efficiency and accuracy. Further, applying this determination method to the photographing of the same area and similar areas enables fast, continuous image acquisition of multiple objects; the focusing process can be omitted during continuous shooting, realizing fast scanning photography. Further, in conjunction with the camera's auto-focusing system, focal plane prediction can obtain better picture quality and can recover from situations where the camera cannot refocus after defocusing.
  • using focal plane prediction gives the camera a certain intelligence: it assists the focusing process with prior knowledge, can complete the focusing process quickly, or even omit it. Especially in the photographing process, this kind of intelligence has important extended applications.
  • the imaging system includes an imaging device 102 and a stage.
  • the imaging device 102 includes a lens 104 and a focusing module 106.
  • the lens 104 includes an optical axis OP.
  • the lens 104 can move in the direction of the optical axis OP.
  • the first preset track 43 and / or the second preset track 45 are located on the stage.
  • Focusing includes the following steps: (a) using the focusing module to emit light onto the object; (b) moving the lens to the first setting position; (c) moving the lens from the first setting position toward the object in the first set step and determining whether the focusing module receives the light reflected by the object; (d) when the focusing module receives the light reflected by the object, moving the lens from the current position to the second setting position, where the second setting position is within the first range, a range including the current position within which the lens is allowed to move along the optical axis; (e) moving the lens from the second setting position in the second set step and using the imaging device to acquire an image of the object at each step position, the second set step being smaller than the first set step; (f) evaluating the images of the object and achieving focusing based on the obtained image evaluation results.
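Steps (a)-(f) above can be sketched as a coarse-then-fine search loop. This is an illustrative sketch only: `move_lens`, `sensor_received_light`, `capture_image`, `evaluate`, and `passes_eval` are hypothetical stand-ins for the lens drive, focusing module, and imaging device (not names from the disclosure), and "toward the object" is taken as the negative Z direction, following the coordinate discussion elsewhere in the description.

```python
def focus(move_lens, sensor_received_light, capture_image, evaluate,
          first_pos, s1, r0, r_len, passes_eval):
    """Coarse search for the reflecting interface, then a fine image scan."""
    # (b) move the lens to the first setting position
    z = first_pos
    move_lens(z)
    # (c) approach the object in the first set step S1 until the
    # focusing module receives light reflected by the object
    while not sensor_received_light():
        z -= s1                       # toward the object (negative Z here)
        move_lens(z)
    # (d) jump to the second setting position inside the first range
    lo, hi = z - r_len, z + r_len     # the first range around the current position
    pos = z - 3 * r0                  # e.g. (oPos - 3*r0), in the second interval
    # (e) scan back in the fine step r0 (< S1), imaging at every step
    while lo <= pos <= hi:
        move_lens(pos)
        img = capture_image()
        # (f) evaluate each image; keep the position whose image
        # satisfies the preset condition
        if passes_eval(evaluate(img)):
            return pos                # in-focus position found
        pos += r0                     # away from the object
    return None                       # left the first range: focusing failed
```

The scan exits as soon as the lens position leaves the first range, matching the out-of-range guard described later in the text.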
  • a clear imaging plane, that is, a clear plane / in-focus plane
  • This method is particularly suitable for devices containing precise optical systems, such as optical inspection devices with high-power lenses, where it is not easy to find a clear plane. In this way, costs can be reduced.
  • the object is an object for which the focal plane position needs to be obtained. For example, if the first predetermined relationship needs to be determined, two objects may be selected on the first preset track, and these objects may serve as the first object and the second object.
  • the objects are multiple positions (FOV) of the sample 300 applied in the sequence determination.
  • the object to be focused may be the first object or the second object.
  • the object to be focused may be the fourth object or the fifth object.
  • the sample 300 includes a carrier device 200 and a sample 302 to be tested.
  • the sample 302 is a biomolecule, such as a nucleic acid.
  • the lens 104 is located above the carrier device 200.
  • the carrying device 200 has a front panel 202 and a rear panel (lower panel). Each panel has two surfaces.
  • the sample 302 to be tested is connected to the upper surface of the lower panel, that is, the sample to be tested 302 is located below the lower surface 204 of the front panel 202.
  • the imaging device 102 collects an image of the sample 302 to be tested
  • the sample 302 to be tested is at the corresponding position (FOV) when the photo is taken. Since the sample 302 to be tested is located below the lower surface 204 of the front panel 202 of the carrying device 200, at the beginning of the focusing process the lens 104 is moved to find the medium interface 204 where the sample 302 is located, improving the success rate of the imaging device 102 in acquiring clear images.
  • the sample to be measured 302 is a solution
  • the front panel 202 of the carrier device 200 is glass
  • the medium interface 204 between the carrier device 200 and the sample to be measured 302 is the lower surface 204 of the front panel 202 of the carrier device 200. That is, the interface between glass and liquid.
  • the sample 302 to be imaged by the imaging device 102 is located under the lower surface 204 of the front panel 202. The images collected by the imaging device 102 at this stage are used to determine the clear surface of the sample 302 to be measured; this process can be called focusing.
  • the thickness of the front panel 202 is 0.175 mm.
  • the carrier device 200 may be a glass slide, and the sample 302 to be tested is placed on the glass slide, or the sample 302 to be tested is sandwiched between two glass slides.
  • the carrier device 200 may be a reaction device, for example, a chip similar to a sandwich structure with a carrier panel above and below, and a sample 302 to be tested is disposed on the chip.
  • the imaging device 102 includes a microscope 107 and a camera 108, and the lens 104 includes an objective lens 110 and a camera lens 112 of the microscope.
  • the focusing module 106 can be fixed to the camera lens 112 through a dichroic beam splitter 114, and the dichroic beam splitter 114 is located between the camera lens 112 and the objective lens 110.
  • the dichroic beam splitter 114 includes a dual C-mount splitter.
  • the dichroic beam splitter 114 can reflect the light emitted by the focusing module 106 to the objective lens 110 and can allow visible light to pass through and enter the camera 108 through the camera lens 112, as shown in FIG. 11.
  • the movement of the lens 104 is along the optical axis OP.
  • the movement of the lens 104 may refer to the movement of the objective lens 110, and the position of the lens 104 may refer to the position of the objective lens 110. In other embodiments, other lenses of the moving lens 104 may be selected to achieve focusing.
  • the microscope 107 also includes a tube lens 111 (tube lens) located between the objective lens 110 and the camera 108.
  • the stage can move the sample 300 in a plane (such as the XY plane) perpendicular to the optical axis OP (such as the Z axis) of the lens 104, and / or can drive the sample 300 along the optical axis OP (such as the Z axis) of the lens 104.
  • even if the plane in which the stage moves the sample 300 is not perpendicular to the optical axis OP, that is, the angle between the moving plane of the sample and the XY plane is not 0, the imaging method is still applicable.
  • the imaging device 102 can also drive the objective lens 110 to move in the direction of the optical axis OP of the lens 104 to perform focusing.
  • the imaging device 102 uses a driving member such as a stepping motor or a voice coil motor to drive the objective lens 110 to move.
  • the objective lens 110, the stage, and the sample 300 may be positioned on the negative Z axis, and the first setting position may be a coordinate position on the negative Z axis. It can be understood that, in other embodiments, the relationship between the coordinate system, the camera, and the objective lens 110 may be adjusted according to the actual situation, which is not specifically limited herein.
  • the imaging device 102 includes a total internal reflection fluorescence microscope
  • the objective lens 110 has a magnification of 60 times
  • the first set step S1 is 0.01 mm.
  • this value of the first set step S1 is suitable: if S1 is too large the lens may cross the acceptable focus range, and if S1 is too small the time overhead increases.
  • the lens 104 is caused to continue to move toward the sample 300 and the object with the first set step.
  • the imaging system is applicable to a sequence determination system, or in other words, the sequence determination system includes an imaging system.
  • the first range includes two opposite intervals, a first interval and a second interval, the second interval being defined as the one closer to the sample.
  • Step (e) includes: (i) when the second setting position is located in the second interval, moving the lens from the second setting position in the direction away from the object, and using the imaging device to collect an image of the object at each step position; or (ii) when the second setting position is located in the first interval, moving the lens from the second setting position in the direction closer to the object, and collecting an image of the object with the imaging device at each step position. In this way, the movement of the lens can be controlled according to the interval in which the second setting position lies, and the required images can be quickly acquired.
  • the current position can be used as the origin oPos and the coordinate axis Z1 can be established along the optical axis direction of the lens.
  • the first interval is a positive interval and the second interval is a negative interval.
  • the positive and negative intervals each span rLen, that is, the first range is [oPos − rLen, oPos + rLen].
  • the second setting position is in the negative interval, at (oPos − 3*r0), where r0 represents the second set step size.
  • the imaging device starts image acquisition at (oPos–3 * r0) and moves away from the object.
  • the coordinate axis Z1 established in the above example coincides with the Z axis of FIG. 10, and the first range is located in the negative interval of the Z axis. This simplifies the control of the imaging method: as long as the relationship between the origin of the Z axis and the origin oPos is known, the correspondence between the lens position on axis Z1 and its position on the Z axis is known.
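The interval logic around the current position oPos can be illustrated with a small helper. The function name `plan_fine_scan` and its parameters are hypothetical; the default of 3 fine steps mirrors the (oPos − 3*r0) example, and the scan direction follows parts (i)/(ii) of step (e).

```python
def plan_fine_scan(o_pos, r_len, r0, n_steps=3):
    """Return the second setting position and the fine-scan direction.

    The first range is [o_pos - r_len, o_pos + r_len]; the negative
    (second) interval is the side closer to the sample.  Per step (e),
    a start in the second interval scans away from the object (+1),
    while a start in the first interval scans toward the object (-1).
    """
    first_range = (o_pos - r_len, o_pos + r_len)
    second_pos = o_pos - n_steps * r0          # e.g. (oPos - 3*r0)
    assert first_range[0] <= second_pos <= first_range[1], \
        "the second setting position must lie inside the first range"
    direction = +1 if second_pos <= o_pos else -1
    return second_pos, direction
```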
  • step (f) includes: comparing the image evaluation result with a preset condition; if the image evaluation result meets the preset condition, saving the position of the lens 104 corresponding to that image; if the image evaluation result does not satisfy the preset condition, moving the lens 104 to a third setting position located in the other interval of the first range, different from the interval containing the second setting position, that is, starting a reverse scan for focusing.
  • when the image evaluation results do not meet the preset condition, moving the lens 104 to the third setting position is equivalent to moving the lens to the starting position of part (ii) of step (e) and then performing part (ii) of step (e). In this way, searching for the in-focus position within the first range effectively improves the efficiency of the imaging method.
  • the second setting position is in the negative interval at (oPos − 3*r0); the lens moves upward from the second setting position, and the imaging device 102 performs image acquisition at each step position.
  • the lens 104 is moved to the third setting position in the positive interval.
  • the third setting position is (oPos + 3*r0); the imaging device 102 then starts image acquisition from (oPos + 3*r0), moves toward the object, and achieves focusing based on the obtained image evaluation results.
  • the current position of the lens 104 corresponding to the image is saved as the storage position, so that the imaging device 102 can output clear images when photographing during the sequencing reaction.
  • the image evaluation result includes a first evaluation value and a second evaluation value
  • the second set step size includes a coarse step size and a fine step size
  • step (f) includes: moving the lens in the coarse step until the first evaluation value of the image at the corresponding position is not greater than the first threshold;
  • the lens 104 then continues to move in the fine step until the second evaluation value of the image at the corresponding position is the largest;
  • the position of the lens 104 corresponding to the image with the largest second evaluation value is stored. In this way, the coarse step lets the lens 104 quickly approach the in-focus position, and the fine step ensures that the lens 104 can reach it.
  • the position of the lens 104 corresponding to the image with the largest second evaluation value may be stored as the focus position.
  • a first evaluation value and a second evaluation value are calculated for the acquired images.
  • the object is provided with an optically detectable label, such as a fluorescent label.
  • fluorescent molecules can be excited to emit fluorescence under the irradiation of a specific wavelength of laser light.
  • the image collected by the imaging device 102 includes bright spots possibly corresponding to the positions of fluorescent molecules. It can be understood that when the lens 104 is at the in-focus position, the bright spots corresponding to the fluorescent molecule positions in the acquired image are small in size and high in brightness; when the lens 104 is at an out-of-focus position, the bright spots are large in size and low in brightness.
  • the size of the bright spot and the intensity of the bright spot on the image are used to evaluate the image.
  • the first evaluation value is used to reflect the bright spot size of the image.
  • the first evaluation value is determined by counting the sizes of the connected domains of the bright spots on the image; connected pixels whose values are greater than the average pixel value of the image are defined as a connected domain (connected component).
  • the first evaluation value may be determined, for example, by calculating the size of the corresponding connected domains of each bright spot, and taking the average value of the size of the connected domains of the bright spots to represent a characteristic of the image as the first evaluation value of the image;
  • the connected domain sizes corresponding to the bright spots can be sorted from small to large, and the connected domain size at the 50th, 60th, 70th, 80th, or 90th percentile can be taken as the first evaluation value of the image.
  • the connected domain counted for a bright spot may be taken as the one at the center of the bright spot's corresponding matrix.
  • the matrix corresponding to a bright spot is defined as a k1*k2 matrix with odd numbers of rows and columns, containing k1*k2 pixels.
  • before the connected domain sizes are calculated, the image is binarized, converting it into a digital matrix. For example, taking the average pixel value of the image as the reference, pixels not less than the average pixel value are recorded as 1 and pixels less than the average pixel value as 0, as shown in FIG. 12.
  • the center of the matrix corresponding to the bright spot is shown in bold and enlarged, and the 3 * 3 matrix is shown in a thick line frame.
  • the so-called first threshold can be set according to experience or prior data.
  • the first evaluation value reflects the size of the bright spots on the image. The inventor observed that, as the lens approaches and then passes the clear surface, the connected-domain area first becomes smaller and then becomes larger; the magnitude and variation of the Area value during focusing determine the first threshold.
  • the first threshold is set to 260. It should be pointed out that the first threshold can be correlated with the coarse and fine step settings: the first threshold should be set so that the imaging device does not cross the focal plane within a single coarse step when imaging the object.
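The binarize-and-count procedure for the first evaluation value might look as follows. The use of 4-connectivity and of the mean connected-domain size are assumptions chosen for illustration (the text also allows representing the image by a percentile of the sizes); the function name is hypothetical.

```python
from collections import deque

def first_evaluation_value(img):
    """Binarize against the image mean, then return the mean size of the
    connected domains of above-mean pixels (the "Area" value)."""
    h, w = len(img), len(img[0])
    mean = sum(map(sum, img)) / (h * w)
    # binarization step: not less than the mean -> 1, less -> 0
    binary = [[1 if v >= mean else 0 for v in row] for row in img]
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not seen[i][j]:
                # flood-fill one connected domain (4-connectivity assumed)
                size, q = 0, deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                sizes.append(size)
    # represent the image by the mean connected-domain size
    return sum(sizes) / len(sizes) if sizes else 0.0
```

On a defocused image the above-mean regions blur and merge, so this value grows, which is the behavior the first threshold tests against.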
  • the second evaluation value or the third evaluation value is determined by counting the scores of the bright spots of the image.
  • CV represents the central pixel value of the matrix corresponding to the bright spot
  • EV represents the sum of the non-center pixel values of the matrix corresponding to the bright spot.
  • the Score values of all the bright spots of the image can be sorted in ascending order.
  • the preset number is 30 and the number of bright spots is 50.
  • the second evaluation value can be the Score value at the 50th, 60th, 70th, 80th, or 90th percentile; taking such a percentile avoids interference from the 50%, 60%, 70%, 80%, or 90% of bright spots with relatively poor quality. Generally, a bright spot whose center and edge intensity / pixel values differ greatly when in focus is considered to be a bright spot corresponding to a molecule to be detected.
  • the molecule to be detected may refer to a nucleic acid molecule corresponding to a target detection object during nucleic acid detection.
  • when the number of bright spots is less than the preset number, for example 10 bright spots, the count is small and not statistically significant; the bright spot with the highest Score value is then used to represent the image, that is, the 100th-percentile Score value is taken as the third evaluation value.
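A sketch of the Score-based evaluation: CV and EV follow the definitions given below (center pixel value and sum of non-center pixel values of the bright spot's k1*k2 matrix), but the exact way CV and EV are combined into a Score is not spelled out in this text, so the center-minus-mean-surround contrast used here is an assumption for illustration. The percentile / maximum selection follows the second and third evaluation values; names and the default percentile are hypothetical.

```python
def spot_score(matrix):
    """Contrast score of one bright spot's k1*k2 matrix.

    CV is the center pixel value; EV is the sum of the non-center pixel
    values.  Combining them as CV minus the mean surround value is an
    assumption, not the disclosed formula.
    """
    k1, k2 = len(matrix), len(matrix[0])
    cy, cx = k1 // 2, k2 // 2
    cv = matrix[cy][cx]
    ev = sum(matrix[y][x] for y in range(k1) for x in range(k2)
             if (y, x) != (cy, cx))
    return cv - ev / (k1 * k2 - 1)

def image_evaluation(spot_matrices, preset_number=30, quantile=0.7):
    """Second evaluation value (a Score percentile) when there are
    enough spots, otherwise the third evaluation value (the maximum)."""
    scores = sorted(spot_score(m) for m in spot_matrices)
    if len(scores) >= preset_number:
        return scores[int(quantile * (len(scores) - 1))]
    return scores[-1]       # too few spots: highest Score represents the image
```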
  • the image evaluation result includes a first evaluation value, a second evaluation value, and a third evaluation value
  • the image includes multiple pixels.
  • the preset condition is that the number of bright spots on the image is greater than the preset value, the first evaluation value of the image at the corresponding position is not greater than the first threshold, and the second evaluation value of the image at the corresponding position is the largest among the second evaluation values of the N images before and after it; or the preset condition is that the number of bright spots on the image is less than the preset value, the first evaluation value of the image at the corresponding position is not greater than the first threshold, and the third evaluation value of the image at the corresponding position is the largest among the third evaluation values of the N images before and after the current image. In this way, different evaluation values are used depending on the number of bright spots of the image, making the focusing of the imaging method more accurate.
  • the first evaluation value may be a size of a connected domain corresponding to a bright spot of an image in the foregoing embodiment.
  • different Score percentiles are taken depending on whether the number of bright spots is statistically significant; for example, a non-100th-percentile Score value and the 100th-percentile Score value may be used respectively.
  • single-molecule sequencing is performed.
  • the bright spots on the acquired images may come from one or several optically detectable labeled molecules carried by the sample to be tested, or they may come from other interference.
  • bright spots are detected, and among them the bright spots corresponding to / coming from the labeled molecules are identified.
  • a k1 * k2 matrix may be used to detect bright spots.
  • the following methods are used to detect bright spots on an image:
  • using a k1*k2 matrix to detect bright spots on the image includes determining whether the center pixel value of the matrix corresponding to a bright spot is not less than any non-center pixel value of the matrix; k1 and k2 are odd numbers greater than 1, and the matrix contains k1*k2 pixels.
  • the method can easily and quickly detect the information from the signal of the labeled molecule.
  • the central pixel value of the matrix is greater than the first preset value, and any pixel value of the non-center of the matrix is greater than the second preset value.
  • the first preset value and the second preset value can be set according to experience or a certain amount of normal bright spot pixel / intensity data.
  • the so-called “normal image” and “normal bright spots” refer to images obtained by the imaging system at the clear surface position that look normal to the naked eye: the image looks clear, the background is clean, and the sizes and brightnesses of the bright spots are relatively uniform.
  • the first preset value and the second preset value are related to an average pixel value of the image. For example, setting the first preset value to 1.4 times the average pixel value of the image and the second preset value to 1.1 times the average pixel value of the image can eliminate interference and obtain bright spot detection results from the marker.
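The k1*k2 detection rule with the example thresholds (first preset value 1.4 times the image mean, second preset value 1.1 times) can be sketched as below. Reading "any non-center pixel value is greater than the second preset value" as requiring every ring pixel to exceed it is an interpretation assumption, and the function name is hypothetical.

```python
def detect_bright_spots(img, k1=3, k2=3):
    """Slide a k1*k2 window (k1, k2 odd) over a grayscale image and
    return (row, col) centers that satisfy the bright-spot rules."""
    h, w = len(img), len(img[0])
    mean = sum(map(sum, img)) / (h * w)
    t_center, t_ring = 1.4 * mean, 1.1 * mean     # first / second preset values
    ry, rx = k1 // 2, k2 // 2
    spots = []
    for y in range(ry, h - ry):
        for x in range(rx, w - rx):
            ring = [img[y + dy][x + dx]
                    for dy in range(-ry, ry + 1)
                    for dx in range(-rx, rx + 1)
                    if (dy, dx) != (0, 0)]        # the non-center pixels
            center = img[y][x]
            # center not less than any non-center pixel, center above the
            # first preset value, ring above the second preset value
            if center > t_center and center >= max(ring) and \
               min(ring) > t_ring:
                spots.append((y, x))
    return spots
```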
  • the image is a color image, and one pixel point of the color image has three pixel values.
  • the color image can be converted into a grayscale image before image detection, reducing the computation and complexity of the detection process. One may choose, but is not limited to, converting non-grayscale images to grayscale using floating-point, integer, shifting, or averaging methods. Of course, the color image can also be detected directly.
  • the comparison of the pixel values mentioned above can be regarded as the comparison of the three-dimensional values or the size of an array with three elements.
  • the relative sizes of multi-dimensional values can be defined according to experience and needs; for example, when any two dimensions of a three-dimensional value a are larger than the corresponding dimensions of a three-dimensional value b, a can be considered greater than b.
  • the image is a grayscale image
  • the pixel values of a grayscale image are the same as its gray values; therefore, the average pixel value of the image is its average gray value.
  • the first threshold is 260
  • the third evaluation value of the image at each position is counted; the position of the image with the largest third evaluation value is the clear surface position, provided that at two positions before and after it the third evaluation value of the corresponding image is greater than zero.
  • EV is the sum of the non-center 8 pixel values.
  • the lens is moved, in a direction perpendicular to the optical axis, to the next image acquisition area (FOV) of the object for focusing.
  • the imaging method further includes: when the number of current objects that have not been focused successfully is greater than a preset number, prompting that focusing has failed.
  • a preset number is three, that is, when the number of current objects that have not been successfully focused is greater than three, it is prompted that the focusing has failed.
  • the focus failure may be prompted by displaying images or text, or by playing a sound.
  • the imaging method further includes: determining whether the position of the lens exceeds the first range, and exiting the focus when the position of the lens exceeds the first range. In this way, exiting the focus when the position of the lens exceeds the first range can avoid excessive focusing time and increase power consumption.
  • the first range is [oPos + rLen, oPos-rLen].
  • the first setting position and the fourth setting position limit the movement range (first range) of the lens 104; this can stop the lens 104 when focusing cannot succeed, avoiding waste of resources or damage to the equipment, or allow the lens 104 to be refocused when focusing fails, improving the automation of the imaging method.
  • the settings are adjusted so that the moving range of the lens 104 is as small as possible under the condition that the solution can be implemented.
  • the movement range of the lens 104 can be set to 200 μm ± 10 μm or [190 μm, 250 μm].
  • another setting position may be determined according to a predetermined moving range and the setting of any one of the fourth setting position and the first setting position.
  • the fourth setting position is set one depth of field below the lowest position of the upper surface 205 of the front panel 202 of the reaction device 200.
  • the movement range of the lens 104 is set to 250 μm, and the other setting position is then determined accordingly.
  • the coordinate position corresponding to the next depth-of-field position is a position that becomes smaller along the negative direction of the Z axis.
  • the moving range is a section on the negative axis of the Z axis.
  • the first set position is nearlimit
  • the fourth set position is farlimit
  • the size of the movement range defined between nearlimit and farlimit is 350 μm. Therefore, when the coordinate position corresponding to the current position of the lens 104 is smaller than the coordinate position corresponding to the fourth setting position, it is determined that the current position of the lens 104 exceeds the fourth setting position.
  • the farlimit position is one depth of field L below the lowest position of the upper surface 205 of the front panel 202 of the reaction device 200.
  • the depth of field L is the depth of field of the lens 104.
  • the coordinate position corresponding to the first setting position and / or the fourth setting position may be specifically set according to an actual situation, and is not specifically limited herein.
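The travel guard between nearlimit and farlimit reduces to a bounds check. The helper below is illustrative (its name is not from the disclosure) and uses the negative-Z convention of the example, where farlimit lies below nearlimit on the axis.

```python
def within_travel(z, nearlimit, farlimit):
    """True while the lens coordinate stays inside [farlimit, nearlimit].

    On the negative Z axis used in the example, farlimit < nearlimit;
    once the lens coordinate drops below farlimit (or rises above
    nearlimit) the focusing loop should exit to avoid wasting time and
    power or damaging the equipment.
    """
    return farlimit <= z <= nearlimit
```

For instance, with nearlimit at 0.0 mm and farlimit at −0.35 mm (a 350 μm range), a lens coordinate of −0.4 mm is out of range and focusing exits.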
  • the focusing module 106 includes a light source 116 and a light sensor 118.
  • the light source 116 is configured to emit light onto an object
  • the light sensor 118 is configured to receive light reflected by the object. In this way, light emission and light reception of the focusing module 106 can be achieved.
  • the light source 116 may be an infrared light source 116 and the light sensor 118 may be a photodiode.
  • the infrared light emitted by the light source 116 enters the objective lens 110 after being reflected by the dichroic beam splitter, and is projected onto the sample 300 and the object through the objective lens 110.
  • the object may reflect infrared light projected through the objective lens 110.
  • the sample 300 includes the carrying device 200 and the sample 302 to be measured
  • the light reflected by the received object is light reflected by the lower surface 204 of the front panel of the carrying device 200.
  • the distance between the objective lens 110 and the object is within a proper range for optical imaging and can be used for imaging by the imaging device 102. In one example, the distance is 20-40 μm.
  • the lens 104 is moved by a second set step smaller than the first set step, so that the imaging system can find the optimal imaging position of the lens 104 in a smaller range.
  • the imaging method further includes a step (g): moving the lens 104 toward the object in a third set step that is smaller than the first set step and larger than the second set step, calculating a first light intensity parameter based on the light intensity received by the focusing module 106, and determining whether the first light intensity parameter is greater than a first set light intensity threshold; if it is, step (d) is performed. In this way, by comparing the first light intensity parameter with the first set light intensity threshold, interference with focusing caused by optical signals whose contrast with the light reflected at the medium interface is very weak can be eliminated.
  • the lens 104 is caused to continue to move toward the object at a third set step.
  • the focusing module 106 includes two light sensors 118.
  • the two light sensors 118 are used to receive light reflected by the object.
  • the first light intensity parameter is the average of the light intensities received by the two light sensors 118. Calculating the first light intensity parameter as this average makes the exclusion of weak light signals more reliable.
  • the third set step S2 is 0.005 mm. It can be understood that, in other examples, the third set step may take other values, which is not specifically limited herein.
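Step (g) can be sketched as a gating loop. `read_sensors` is a hypothetical stand-in returning the readings of the two light sensors 118, the intensity units are arbitrary, and the step cap is an illustrative safety bound.

```python
def advance_until_bright(z, s2, read_sensors, threshold, max_steps=1000):
    """Advance in the third set step S2 until the first light intensity
    parameter (the average of the two sensor readings) exceeds the
    first set light intensity threshold, then hand off to step (d)."""
    for _ in range(max_steps):
        a, b = read_sensors(z)            # the two light sensors 118
        if (a + b) / 2 > threshold:       # strong interface reflection
            return z                      # proceed to step (d)
        z -= s2                           # weak signal: keep approaching
    return None                           # never saw a strong reflection
```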
  • the structure of the imaging system may adopt that of the imaging system of Embodiment 1. Understandably, the focusing method or focusing logic of Embodiment 2 differs from that of Embodiment 1, but the structure of the imaging system used is basically the same.
  • Focusing includes the following steps: S11, using the focusing module 106 to emit light onto the object; S12, moving the lens 104 to the first setting position; S13, moving the lens 104 from the first setting position toward the object in the first set step and determining whether the focusing module 106 receives the light reflected by the object; S14, when the focusing module 106 receives the light reflected by the object, moving the lens 104 in a second set step smaller than the first set step while capturing images of the object with the imaging device 102, and determining whether the sharpness value of the collected image reaches a set threshold; S15, when the sharpness value of the image reaches the set threshold, storing the current position of the lens 104 as the storage position.
  • This method is particularly suitable for devices containing precise optical systems, such as optical inspection devices with high-magnification lenses, where a clear plane is not easy to find.
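The two-phase search of steps S11–S15 (a coarse approach at the first set step until the focusing module detects reflected light, then fine stepping at the second set step until the image sharpness reaches the set threshold) can be sketched as below. The function names and the simulated hardware callbacks are hypothetical, not part of the patent:

```python
def focus(receives_light, sharpness, z_start, s1, s2, threshold, z_limit):
    """Two-phase focus search along the optical axis (coordinates in mm).

    receives_light(z): does the focusing module see reflected light at z?
    sharpness(z): sharpness value of the image captured with the lens at z.
    """
    z = z_start
    # S13: coarse phase - advance at the first set step until light returns
    while not receives_light(z):
        z += s1
        if z > z_limit:          # crossed the second set position: abort
            return None
    # S14: fine phase - advance at the smaller second set step
    while sharpness(z) < threshold:
        z += s2
        if z > z_limit:
            return None
    return z                      # S15: store this position
```

With S1 = 0.01 mm and a smaller fine step, the coarse phase finds the medium interface quickly while the fine phase cannot overshoot the acceptable focus range by more than one small step.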
  • the object is an object whose focal plane position needs to be obtained. For example, if a first predetermined relationship needs to be determined, two objects may be selected on the first preset track, and the two objects may respectively serve as the first object and the second object.
  • the focal plane position data is another focal plane position data of the fifth object 48.
  • the objects are multiple positions (FOV) of the sample 300 applied in the sequence determination.
  • the object to be focused may be used as the first object or the second object.
  • the object to be focused may be the fourth object or the fifth object.
  • the sample 300 includes a carrier device 200 and a sample 302 to be tested.
  • the sample 302 is a biomolecule, such as a nucleic acid.
  • the lens 104 is located above the carrier device 200.
  • the carrying device 200 has a front panel 202 and a rear panel (lower panel). Each panel has two surfaces.
  • the sample 302 to be tested is connected to the upper surface of the lower panel, that is, the sample to be tested 302 is located below the lower surface 204 of the front panel 202.
  • the imaging device 102 collects an image of the sample 302 to be tested at the corresponding position (FOV) when the photo is taken. Since the sample 302 to be tested is located below the lower surface 204 of the front panel 202 of the carrying device 200, at the beginning of the focusing process the movement of the lens 104 serves to find the medium interface 204 where the sample 302 is located, so as to improve the success rate of the imaging device 102 in acquiring clear images.
  • the sample to be measured 302 is a solution
  • the front panel 202 of the carrier device 200 is glass
  • the medium interface 204 between the carrier device 200 and the sample to be measured 302 is the lower surface 204 of the front panel 202 of the carrier device 200, that is, the interface between glass and liquid.
  • the sample 302 to be tested, imaged by the imaging device 102, is located under the lower surface 204 of the front panel 202. At this time, the image collected by the imaging device 102 is used to determine the clear plane of the sample 302 to be tested; this process can be called focusing.
  • the thickness of the front panel 202 is 0.175 mm.
  • the carrying device 200 may be a glass slide, and the sample 302 to be tested is placed on the glass slide, or the sample 302 to be tested is sandwiched between two glass slides.
  • the carrier device 200 may be a reaction device, for example, a chip similar to a sandwich structure with a carrier panel above and below, and the sample 302 to be tested is disposed on the chip.
  • the imaging device 102 includes a microscope 107 and a camera 108, and the lens 104 includes an objective lens 110 and a camera lens 112 of the microscope.
  • the focusing module 106 can be fixed to the camera lens 112 via a dichroic beam splitter 114, and the dichroic beam splitter 114 is located between the camera lens 112 and the objective lens 110.
  • the dichroic beam splitter 114 includes a dual C-mount splitter.
  • the dichroic beam splitter 114 can reflect the light emitted by the focusing module 106 to the objective lens 110 and can allow visible light to pass through and enter the camera 108 through the camera lens 112, as shown in FIG. 11.
  • the movement of the lens 104 is along the optical axis OP.
  • the movement of the lens 104 may refer to the movement of the objective lens 110, and the position of the lens 104 may refer to the position of the objective lens 110. In other embodiments, other lenses of the moving lens 104 may be selected to achieve focusing.
  • the microscope 107 also includes a tube lens 111 located between the objective lens 110 and the camera 108.
  • the stage can move the sample 300 in a plane (such as the XY plane) perpendicular to the optical axis OP (such as the Z axis) of the lens 104, and/or can drive the sample 300 along the optical axis OP (such as the Z axis) of the lens 104.
  • if the plane on which the stage moves the sample 300 is not perpendicular to the optical axis OP, that is, the angle between the moving plane of the sample and the XY plane is not 0, the imaging method is still applicable.
  • the imaging device 102 can also drive the objective lens 110 to move along the optical axis OP of the lens 104 for focusing.
  • the imaging device 102 uses a driving member such as a stepping motor or a voice coil motor to drive the objective lens 110 to move.
  • the positions of the objective lens 110, the stage, and the sample 300 may be set on the negative axis of the Z axis, and the first set position may be a coordinate position on the negative axis of the Z axis. It can be understood that, in other embodiments, the relationship between the coordinate system and the camera and the objective lens 110 may also be adjusted according to the actual situation, which is not specifically limited herein.
  • the imaging device 102 includes a total internal reflection fluorescence microscope
  • the objective lens 110 has a magnification of 60 times
  • the first set step S1 is 0.01 mm.
  • this first set step S1 is suitable: if S1 is too large, the lens may cross the acceptable focus range; if S1 is too small, the time overhead increases.
  • the lens 104 is caused to continue to move toward the sample 300 (the object) along the optical axis OP at the first set step.
  • when the sharpness value of the image does not reach the set threshold, the lens 104 is caused to continue to move along the optical axis OP at the second set step.
  • the imaging system is applicable to a sequence determination system, or in other words, the sequence determination system includes an imaging system.
  • when the lens 104 moves, it is determined whether the current position of the lens 104 exceeds the second set position; when it does, the movement of the lens 104 is stopped or the focusing step is performed.
  • the first set position and the second set position limit the movement range of the lens 104. This can stop the lens 104 from moving when focusing cannot succeed, avoiding wasted resources or damage to the equipment, and can prevent refocusing when focusing has already succeeded, improving the automation of the imaging method.
  • the settings are adjusted so that the moving range of the lens 104 is as small as possible under the condition that the solution can be implemented.
  • the movement range of the lens 104 can be set to 200 ⁇ m ⁇ 10 ⁇ m or [190 ⁇ m, 250 ⁇ m].
  • another set position may be determined according to a predetermined movement range and a setting of any one of the second set position and the first set position.
  • the second set position is set to the position one depth of field below the lowest point of the upper surface 205 of the front panel 202 of the reaction device 200.
  • the movement range of the lens 104 is set to 250 ⁇ m.
  • the first set position is thereby determined.
  • the coordinate position corresponding to the next depth-of-field position is a position that becomes smaller along the negative direction of the Z axis.
  • the moving range is a section on the negative axis of the Z axis.
  • the first set position is nearlimit and the second set position is farlimit.
  • the size of the movement range defined between nearlimit and farlimit is 350 ⁇ m. Therefore, when the coordinate position corresponding to the current position of the lens 104 is smaller than the coordinate position corresponding to the second set position, it is determined that the current position of the lens 104 exceeds the second set position.
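With coordinates on the negative Z axis as described above, "exceeding" either limit is simply a comparison of coordinates (nearlimit > farlimit). A minimal sketch with hypothetical values:

```python
def exceeds_range(z, nearlimit, farlimit):
    """True if the lens coordinate z (negative Z axis, mm) has left the
    allowed movement range; nearlimit > farlimit by construction."""
    return z > nearlimit or z < farlimit

# Example: a 350 um movement range on the negative Z axis
NEARLIMIT = -1.000
FARLIMIT = -1.350
```

When `exceeds_range` returns True the controller stops the lens or re-runs the focusing step, matching the limit check described above.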
  • the farlimit is the position one depth of field L below the lowest point of the upper surface 205 of the front panel 202 of the reaction device 200.
  • the depth of field L is the depth of field of the lens 104.
  • the coordinate position corresponding to the first setting position and / or the second setting position may be specifically set according to actual conditions, and is not specifically limited herein.
  • the focusing module 106 includes a light source 116 and a light sensor 118.
  • the light source 116 is configured to emit light onto an object
  • the light sensor 118 is configured to receive light reflected by the object. In this way, light emission and light reception of the focusing module 106 can be achieved.
  • the light source 116 may be an infrared light source 116 and the light sensor 118 may be a photodiode.
  • the infrared light emitted by the light source 116 enters the objective lens 110 after being reflected by the dichroic beam splitter, and is projected onto the sample 300 and the object through the objective lens 110.
  • the object may reflect infrared light projected through the objective lens 110.
  • the sample 300 includes the carrying device 200 and the sample 302 to be measured
  • the received light reflected by the object is the light reflected by the lower surface 204 of the front panel of the carrying device 200.
  • the distance between the objective lens 110 and the object is within a suitable range of optical imaging, and can be used for imaging by the imaging device 102. In one example, the distance is 20-40 ⁇ m.
  • the lens 104 is moved by a second set step smaller than the first set step, so that the imaging system can find the optimal imaging position of the lens 104 in a smaller range.
  • the sharpness value of the image can be used as the evaluation value of the image focus.
  • determining whether the sharpness value of the image collected by the imaging device 102 reaches the set threshold may be performed by a hill-climbing (mountain climbing) algorithm for image processing.
  • the hill-climbing algorithm calculates the sharpness value of the image output by the imaging device 102 at each position of the objective lens 110, determines whether the sharpness value reaches the maximum at the peak of the sharpness curve, and thus whether the lens 104 has reached the clear-plane position for imaging by the imaging device 102. It can be understood that, in other embodiments, other image processing algorithms can also be used to determine whether the sharpness value reaches the peak maximum.
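A minimal hill-climbing sketch: score the image at each lens position and stop as soon as the score drops, taking the previous position as the peak. The variance score below is a common stand-in for a sharpness metric; it is an assumption, not the metric named by the patent:

```python
def variance_sharpness(pixels):
    """Sharpness proxy: variance of the pixel values (higher = sharper)."""
    n = len(pixels)
    mean = sum(pixels) / n
    return sum((p - mean) ** 2 for p in pixels) / n

def hill_climb(score, positions):
    """Walk through lens positions; return the position where the score peaks.

    score(z): sharpness value of the image taken with the lens at z.
    """
    best_z, best_s = positions[0], score(positions[0])
    for z in positions[1:]:
        s = score(z)
        if s < best_s:   # past the peak: the previous position was sharpest
            return best_z
        best_z, best_s = z, s
    return best_z
```

In practice `score(z)` would move the objective lens 110 to z, grab a frame from the camera 108, and evaluate its sharpness.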
  • saving the current position of the lens 104 as the saved position enables the imaging device 102 to output a clear image when photographing the sequence determination reaction.
  • focusing further includes a step: S16, moving the lens 104 toward the object at a third set step smaller than the first set step and larger than the second set step, calculating a first light intensity parameter based on the light intensity of the light received by the focusing module 106, and determining whether the first light intensity parameter is greater than the first set light intensity threshold;
  • when the first light intensity parameter is greater than the first set light intensity threshold, step S14 is performed. In this way, by comparing the first light intensity parameter with the first set light intensity threshold, interference with focusing/focus tracking from light signals much weaker than the light reflected at the medium interface can be eliminated.
  • the lens 104 is further moved toward the object along the optical axis OP with a third set step.
  • the focusing module 106 includes two light sensors 118, and the two light sensors 118 are used to receive light reflected by the object.
  • the first light intensity parameter is the average value of the light intensities of the light received by the two light sensors 118. In this way, the first light intensity parameter is calculated as the average of the two sensors' readings, making the exclusion of weak light signals more accurate.
  • the third set step size S2 is 0.005 mm. It can be understood that, in other examples, the third set step size may also adopt other values, which are not specifically limited herein.
  • the method further includes the following steps: S16, moving the lens 104 toward the object at a third set step smaller than the first set step and larger than the second set step, calculating a first light intensity parameter according to the light intensity of the light received by the focusing module 106, and determining whether the first light intensity parameter is greater than the first set light intensity threshold;
  • when the first light intensity parameter is greater than the first set light intensity threshold, the lens 104 is moved toward the object at a fourth set step smaller than the third set step and larger than the second set step, a second light intensity parameter is calculated according to the light intensity of the light received by the focusing module 106, and it is determined whether the second light intensity parameter is smaller than the second set light intensity threshold; when the second light intensity parameter is less than the second set light intensity threshold, step S14 is performed.
  • by comparing the first light intensity parameter with the first set light intensity threshold, interference with focusing/focus tracking from light signals much weaker than the light reflected at the medium interface can be eliminated; and by comparing the second light intensity parameter with the second set light intensity threshold, strongly reflected light signals at non-medium-interface positions, such as light reflected by the oil surface/air of the objective lens 110, can be excluded from interfering with focusing/focus tracking.
  • the lens 104 is further moved toward the object along the optical axis OP with a third set step.
  • the lens 104 is caused to continue to move toward the object along the optical axis OP with a fourth set step.
  • the third set step size S2 is 0.005 mm
  • the fourth set step size S3 is 0.002 mm. It can be understood that, in other examples, the third set step size and the fourth set step size may also adopt other values, which are not specifically limited herein.
  • the focusing module 106 includes two light sensors 118, and the two light sensors 118 are used to receive light reflected by the object.
  • the first light intensity parameter is the average value of the light intensities of the light received by the two light sensors 118.
  • the light intensity of the light received by the two light sensors 118 has a first difference
  • the second light intensity parameter is the difference between the first difference and the set compensation value. In this way, the second light intensity parameter is calculated using the light intensities of the light received by the two light sensors 118, making the exclusion of strongly reflected light signals more accurate.
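Under this two-sensor description, the two parameters and the interface test can be sketched as follows; the use of an absolute difference and the specific threshold values are assumptions:

```python
def first_param(i1, i2):
    """First light intensity parameter: average of the two sensor readings."""
    return (i1 + i2) / 2

def second_param(i1, i2, compensation):
    """Second light intensity parameter: the first difference (between the
    two sensor readings) minus the set compensation value."""
    return abs(i1 - i2) - compensation

def at_medium_interface(i1, i2, t1, t2, compensation):
    """Accept the reflection as the medium interface only if it is strong
    enough (first check) yet not a strong non-interface reflection such as
    the objective's oil surface/air (second check)."""
    return first_param(i1, i2) > t1 and second_param(i1, i2, compensation) < t2
```

Only when `at_medium_interface` holds does the routine proceed to the fine sharpness search (step S14).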
  • when the lens 104 is moved by the second set step, it is determined whether the first sharpness value of the image corresponding to the current position of the lens 104 is greater than the second sharpness value of the image corresponding to the previous position of the lens 104. When the first sharpness value is greater than the second sharpness value and the difference between them is greater than the set difference, the lens 104 continues to move toward the object at the second set step; when the first sharpness value is greater than the second sharpness value and the difference between them is less than the set difference, the lens 104 continues to move toward the object at a fifth set step smaller than the second set step, so that the sharpness value of the image collected by the imaging device 102 reaches the set threshold; when the second sharpness value is greater than the first sharpness value and the difference between them is greater than the set difference, the lens 104 is moved away from the object at the second set step; when the second sharpness value is greater than the first sharpness value and the difference between them is less than the set difference, the lens 104 is moved away from the object at the fifth set step, so that the sharpness value of the image collected by the imaging device 102 reaches the set threshold.
  • the second set step size can be used as the coarse adjustment step size Z1
  • the fifth set step size can be used as the fine adjustment step size Z2
  • the coarse adjustment range Z3 can be set. Setting the coarse adjustment range Z3 can stop the movement of the lens 104 when the sharpness value of the image cannot reach the set threshold, thereby saving resources.
  • the coarse adjustment range Z3 is the adjustment range, that is, the adjustment range on the Z axis is (T, T + Z3).
  • when R1 > R2 and R1 − R2 > R0, the sharpness value of the image is approaching the set threshold but is still far from it, so the lens 104 continues to move in the first direction with step Z1 to approach the set threshold quickly.
  • when R1 > R2 and R1 − R2 ≤ R0, the sharpness value of the image is approaching the set threshold and is already close to it, so the lens 104 moves in the first direction with step Z2 to approach the set threshold in smaller steps.
  • when R2 > R1 and R2 − R1 > R0, the sharpness value of the image has crossed the set threshold and is far from it, so the lens 104 moves with step Z1 in the second direction opposite to the first direction (such as away from the object along the optical axis OP) to approach the set threshold quickly.
  • the fifth set step size can be adjusted so that the step used when approaching the set threshold is neither too large nor too small.
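The four branches above can be collected into one decision rule: given the current and previous sharpness values R1 and R2, the set difference R0, coarse step Z1, and fine step Z2, choose the next step size and direction (+1 = first direction, toward the object; −1 = opposite). The function name is illustrative:

```python
def next_move(r1, r2, r0, z1, z2):
    """Pick the next lens move from two consecutive sharpness values.

    r1: current sharpness, r2: previous sharpness, r0: set difference,
    z1: coarse step, z2: fine step.
    Returns (step_size, direction); +1 moves toward the object, -1 away.
    """
    if r1 > r2:
        # sharpness still rising: keep moving in the first direction,
        # coarsely while far from the peak, finely once close
        return (z1, +1) if r1 - r2 > r0 else (z2, +1)
    # sharpness fell: the peak was crossed, so back away accordingly
    return (z1, -1) if r2 - r1 > r0 else (z2, -1)
```

Iterating this rule while re-measuring sharpness converges on the peak of the focus curve without ever committing to large steps near it.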
  • the above-mentioned values are metric values used when moving the lens 104 during image acquisition by the imaging device 102, and the metric values are related to the light intensity. The set threshold can be understood as the peak of the focus curve, or a range centered on the peak, or a range including the peak.
  • an imaging system (not shown) for imaging an object according to an embodiment of the present invention.
  • the imaging system includes a lens 104 and a control device.
  • the objects include the first object 42, the second object 44, and the third object 46.
  • the control device includes a computer-executable program, and the execution of the computer-executable program includes the steps of the imaging method of any one of the foregoing embodiments.
  • the first predetermined relationship is determined by the focal plane positions of the first object 42 and the second object 44.
  • the focal plane position can be predicted directly according to the first predetermined relationship, so that a clear image of the third object is obtained without focusing; this is particularly suitable for scenes with a large number of objects where images of these objects are to be acquired quickly and continuously.
  • the imaging system has high imaging efficiency. Even when the imaging system's own focus tracking fails, it can still accurately determine the focal plane positions of subsequent objects and obtain their image information during continuous image acquisition. Used together with the focusing system built into the imaging system, this remedies the situation in which the imaging system's own focus-tracking system fails and normal focusing cannot be resumed.
  • the third object 46 is located between the first object 42 and the second object 44.
  • the lens 104 is fixed, the lens 104 includes an optical axis OP, and the first preset track 43 can move in a direction perpendicular to or parallel to the optical axis OP.
  • the determination of the first predetermined relationship includes:
  • a first predetermined relationship is established according to the first coordinate and the second coordinate, the first coordinate reflects the focal plane position of the first object 42, and the second coordinate reflects the focal plane position of the second object 44.
  • the first preset track 43 is a linear or non-linear track; and / or the first predetermined relationship is a linear relationship.
  • the object includes a fourth object 47 and a fifth object 48 located at different positions of the second preset track 45, and the control device is configured to:
  • the lens 104 and the second preset track 45 are relatively moved according to a second predetermined relationship to use the imaging system to obtain an image of the fifth object 48 without focusing.
  • the second predetermined relationship is determined through the focal plane position of the fourth object 47 and the first predetermined relationship; the second preset track 45 is different from the first preset track 43.
  • the lens 104 is fixed, the lens 104 includes an optical axis OP, and the second preset track 45 can move in a direction perpendicular to or parallel to the optical axis OP.
  • the determination of the second predetermined relationship includes:
  • a second predetermined relationship is established according to the first predetermined relationship and the fourth coordinate, and the fourth coordinate reflects the focal plane position of the fourth object 47.
  • the control device is configured to: after acquiring the image of the third object 46, move the lens 104 relative to the first preset track 43 and/or the second preset track 45, so as to use the imaging system to acquire an image of the fifth object 48 without focusing.
  • the imaging system includes an imaging device 102 and a stage.
  • the imaging device 102 includes a lens 104 and a focusing module 106.
  • the lens 104 includes an optical axis OP.
  • the lens 104 can move in the direction of the optical axis OP.
  • the first preset track 43 and/or the second preset track 45 are located on the carrier 103.
  • control device is configured to perform the following steps:
  • the lens 104 is moved from the second set position at the second set step, and an image of the object is obtained with the imaging device 102 at each step position; the second set step is smaller than the first set step;
  • the first range includes opposite first and second intervals, the second interval being defined as the one closer to the object.
  • Step (e) includes:
  • step (f) includes: comparing the image evaluation result with a preset condition, and if the image evaluation result meets the preset condition, saving the position of the lens 104 corresponding to the image;
  • the lens 104 is moved to the third setting position, and the third setting position is located in another section in the first range that is different from the section in which the second setting position is located.
  • the image evaluation result includes a first evaluation value and a second evaluation value
  • the second set step size includes a coarse step size and a fine step size
  • step (f) includes: the lens 104 moves with the coarse step size until the first evaluation value of the image at the corresponding position is not greater than the first threshold; the lens 104 then continues to move with the fine step size until the second evaluation value of the image at the corresponding position is the largest, and the position of the lens 104 corresponding to the image with the maximum second evaluation value is saved.
  • the image evaluation result includes a first evaluation value, a second evaluation value, and a third evaluation value, and the image includes a plurality of pixels;
  • the preset conditions are that the number of bright spots on the image is greater than the preset value, the first evaluation value of the image at the corresponding position is not greater than the first threshold, and the second evaluation value of the image at the corresponding position is the largest among the N images before and after it; or
  • the preset conditions are that the number of bright spots on the image is less than the preset value, the first evaluation value of the image at the corresponding position is not greater than the first threshold, and the third evaluation value of the image at the corresponding position is the largest among the N images before and after the current image.
  • the imaging system includes a bright spot detection module, and the bright spot detection module is configured to:
  • the bright spot detection module uses a k1 * k2 matrix to detect bright spots on the image, including determining that a center pixel whose value is not less than any non-center pixel value of the matrix corresponds to a bright spot; k1 and k2 are odd numbers greater than 1, and the k1 * k2 matrix contains k1 * k2 pixels.
  • the center pixel value of a matrix corresponding to a bright spot is greater than a first preset value
  • any non-center pixel value of the matrix is greater than a second preset value
  • the first preset value and the second preset value are related to the average pixel value of the image.
  • the first evaluation value is determined by counting the size of the connected domain corresponding to the bright spots of the image.
  • A represents the size of the connected domain in the row centered on the center of the matrix corresponding to the bright spot, B represents the size of the connected domain in the column centered on that center, and connected pixel points larger than the average pixel value of the image are defined as a connected domain.
  • the second evaluation value and / or the third evaluation value are determined by counting the scores of the bright spots of the image.
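The k1×k2 window test and the first evaluation value can be sketched in pure Python as below. The exact combination of the row and column connected-domain sizes (here A + B) and the threshold handling are assumptions:

```python
def is_bright_spot(img, r, c, k1, k2, t1, t2):
    """img: 2D list of pixel values; (r, c): candidate center pixel.
    k1, k2: odd window sizes > 1; t1, t2: first/second preset values,
    typically derived from the average pixel value of the image."""
    h, w = len(img), len(img[0])
    center = img[r][c]
    if center <= t1:              # center must exceed the first preset value
        return False
    for i in range(r - k1 // 2, r + k1 // 2 + 1):
        for j in range(c - k2 // 2, c + k2 // 2 + 1):
            if (i, j) == (r, c) or not (0 <= i < h and 0 <= j < w):
                continue
            # center must dominate the window, and non-center pixels must
            # still exceed the second preset value
            if img[i][j] > center or img[i][j] <= t2:
                return False
    return True

def _run_length(vals, idx, avg):
    """Length of the consecutive run of above-average values through idx."""
    n, i = 1, idx - 1
    while i >= 0 and vals[i] > avg:
        n, i = n + 1, i - 1
    i = idx + 1
    while i < len(vals) and vals[i] > avg:
        n, i = n + 1, i + 1
    return n

def first_evaluation(img, r, c):
    """A + B: sizes of the connected domains through the spot center along
    its row (A) and its column (B)."""
    avg = sum(map(sum, img)) / (len(img) * len(img[0]))
    a = _run_length(img[r], c, avg)
    b = _run_length([row[c] for row in img], r, avg)
    return a + b
```

A sharp image of point-like samples yields many compact bright spots (small connected domains), which is why the first evaluation value is compared against a threshold during the coarse search.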
  • the focusing module 106 includes a light source 116 and a light sensor 118.
  • the light source 116 is configured to emit light onto an object
  • the light sensor 118 is configured to receive light reflected by the object.
  • when the focusing module 106 receives the light reflected by the object, the control device is further configured to:
  • when the first light intensity parameter is greater than the first set light intensity threshold, the lens 104 is moved from the current position to the second set position.
  • the focusing module 106 includes two light sensors 118.
  • the two light sensors 118 are used to receive light reflected by the object.
  • the first light intensity parameter is the average value of the light intensities of the light received by the two light sensors 118.
  • when the lens 104 moves, the control device is configured to: determine whether the current position of the lens 104 exceeds the fourth set position;
  • control device is used to:
  • when the focusing module 106 receives the light reflected by the object, the lens 104 is moved at a second set step smaller than the first set step while the imaging device 102 captures images of the object, and it is determined whether the sharpness value of the image captured by the imaging device 102 reaches a set threshold;
  • the current position of the lens 104 is stored as the storage position.
  • the focusing module 106 includes a light source 116 and a light sensor 118.
  • the light source 116 is configured to emit light onto an object
  • the light sensor 118 is configured to receive light reflected by the object.
  • when the focusing module 106 receives light reflected by the object, the control device is configured to:
  • the lens 104 is moved by the second set step and images of the object are captured by the imaging device 102, and the step of determining whether the sharpness value reaches the set threshold is performed.
  • the focusing module 106 includes two light sensors 118.
  • the two light sensors 118 are configured to receive light reflected by the object.
  • the first light intensity parameter is the average value of the light intensities of the light received by the two light sensors 118.
  • when the focusing module 106 receives light reflected by the object, the control device is configured to:
  • when the first light intensity parameter is greater than the first set light intensity threshold, the lens 104 is moved toward the object at a fourth set step smaller than the third set step and larger than the second set step, a second light intensity parameter is calculated according to the light intensity of the light received by the focusing module 106, and it is determined whether the second light intensity parameter is less than the second set light intensity threshold;
  • the lens 104 is moved by the second set step and the imaging device 102 is used to acquire images of the object, and it is determined whether the sharpness value of the image collected by the imaging device 102 reaches the set threshold.
  • the focusing module 106 includes two light sensors 118.
  • the two light sensors 118 are configured to receive light reflected by the object.
  • the first light intensity parameter is the average value of the light intensities of the light received by the two light sensors 118.
  • the light intensity of the light received by the two light sensors 118 has a first difference
  • the second light intensity parameter is a difference between the first difference and a set compensation value.
  • when the lens 104 is moved by the second set step, the control device is configured to determine whether the first sharpness value of the image corresponding to the current position of the lens 104 is greater than the second sharpness value of the image corresponding to the previous position of the lens 104;
  • the lens 104 is caused to continue to move toward the object at the second set step.
  • the lens 104 is caused to continue to move toward the object at a fifth set step smaller than the second set step, so that the sharpness value of the image collected by the imaging device 102 reaches the set threshold;
  • the lens 104 is moved away from the object at the second set step;
  • the lens 104 is moved away from the object at the fifth set step, so that the sharpness value of the image collected by the imaging device 102 reaches the set threshold.
  • when the lens 104 moves, the control device is configured to: determine whether the current position of the lens 104 exceeds the second set position;
  • the lens 104 is stopped from moving or the focusing step is performed.
  • a computer-readable storage medium is configured to store a program for execution by a computer, the program including steps for completing the imaging method of any one of the foregoing embodiments.
  • Computer-readable storage media may include: read-only memory, random access memory, magnetic disks, or optical disks.
  • a computer program product includes instructions. When the instructions are executed by a computer, the instructions cause the computer to perform the steps of the imaging method of any of the foregoing embodiments.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist separately physically, or two or more units may be integrated into one module.
  • the above integrated modules may be implemented in the form of hardware or software functional modules. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.

Abstract

本发明公开了一种成像方法和成像系统,成像方法利用成像系统对对象进行成像,成像系统包括镜头,对象包括位于第一预设轨道不同位置的第一对象、第二对象和第三对象,成像方法包括:使镜头和第一预设轨道依据第一预定关系相对运动,以利用成像系统、无需对焦地获得第三对象的清晰图像,第一预定关系通过第一对象的焦面位置和第二对象的焦面位置确定。上述成像方法的成像效率较高,并且在追焦失败的情况下仍能根据第一预定关系实现快速对焦,避免出现脱焦而导致拍摄图像模糊的情况。

Description

成像方法、装置及系统 技术领域
本发明涉及光学检测领域,尤其涉及一种成像方法、装置及系统。
背景技术
在相关技术中,相机在每次拍摄时,都会快速调整焦距以获得最清晰的焦面,从而得到清晰图片,这一过程称之为追焦。
但是在实际的应用中利用相机进行拍摄时,容易存在一些外界的干扰,例如,使用相机拍摄时,对象因存在其它杂光或表面存在灰尘或划痕,从而导致相机追焦失败,而在相机追焦失败的情况下,如果相机无法重新追焦,则会导致图像成像模糊。例如,在将相机应用于序列测定时,如果对象为位于芯片中的核酸分子,在拍摄芯片内部的液体带有气泡,大团荧光杂质或芯片表面灰尘、划痕等的情况,均容易导致相机追焦失败。
发明内容
在基于成像获取核酸信息的测序平台,例如目前市面上的利用拍照获得核酸信息的二代或三代测序平台,包含利用成像系统对置于反应器中的核酸进行拍照的过程。
常见的,反应器也被称为芯片(Flowcell),芯片可包含一条或多条平行的通道(channel),通道用于出入以及承载试剂以形成序列测定反应所需的环境。芯片可以由两块玻璃粘合而成,测序过程包含相机对芯片的一个固定区域进行多轮拍照,每次拍摄的区域可称之为FOV(field of view),每一轮拍照可称为一个cycle,两个cycle之间包含重新通入试剂进行化学反应。
在正常拍照过程中,相机能够大多数自动追焦成功,即找到最清晰的焦面位置。在遇到干扰时,则有可能出现追焦失败的情况。
图1-3示意发明人试验中出现的成功追焦和异常追焦或失败追焦的数据。
以连续拍摄同一cycle的两行FOV为例,记录拍摄时物镜高度(Z值)坐标,如图1所示,横坐标为FOV的序号,前一半FOV是从Flowcell左侧至右侧顺序拍摄,后一半FOV换行后从右至左拍摄。纵坐标为显微镜物镜距离相机的高度,即Z值,单位为μm,负值表示显微镜物镜位于相机下方,Z值绝对值越大,说明物镜距离相机越远。
图1显示成功追焦拍摄300个FOV图像所对应的Z值曲线,图2显示拍摄200个FOV图像中包含部分追焦异常(反映为部分Z值异常)所对应的Z值曲线,在这种情况下,曲线异常部分即其中表现为凸起部分所对应的图像为非清晰/模糊图像。
由于相机追焦存在一定局限,在遇到干扰后容易脱焦,脱焦后由于物镜远离焦面,即追焦拍摄后续FOV物镜与焦面位置的距离过大,即便干扰消除也无法回到焦面,这种情况如图3所示。图3中的前1-200个FOV属于一个cycle,后面的FOV属于另一cycle。图3显示在第268个FOV(位于另一cycle第一行)后,追焦失败,且在干扰消失后,直到该cycle结束也未能重新追焦成功。
追焦失败意味着模糊图像,这会导致信息丢失。因此这是一个必须解决的问题。现实情况下,无法完全杜绝干扰,但一般地,至少希望,在干扰消失后,又能够获取到清晰图像。
发明人在分析大量的追焦成功和追焦异常的数据中发现,在物镜固定的情况下,不同cycle(亦即不同时间)正常追焦相同的多个FOV所对应的多条Z值曲线呈现出一定的规律。如图4所示,展示了300个FOV在4个不同cycle正常追焦获得清晰图片对应的Z值曲线。
发明人发现两个规律:
1)同一个位置(FOV),在不同cycle时可能具有不同焦面,但相对于同一cycle的其他FOV来说,其焦面的相对位置基本未改变。即在物理位置上,同一cycle的不同FOV之间的焦面具有关联性。
2)图上每条曲线的300个FOV,一半是从Flowcell的一行的左侧拍摄至右侧,另一半则是换行后,从右侧拍摄至左侧,由于Flowcell的形变和/或左右侧的高低差,同一行同一方向(例如,左到右或者右到左)连续多个FOV的焦面呈一定的规律,可以较好的拟合成一条直线。
呈现上述规律,发明人猜测可能的原因包括:由于需要在不同cycle重复拍摄相同的FOV,经过加热与试剂流通后,芯片内部压力发生变化,焦面发生了整体偏移。而相对于整个芯片来说,每个FOV很小,每个FOV的表面平整度可看成不变,表现为相邻FOV之间的相对焦面位置保持不变。
基于发现的上述规律,发明人开发出一套算法,能够在不更换硬件的情况下,通过软件算法的辅助,使相机具备焦面预测功能。具体的,例如,在cycle1中,对于处于同一预设轨道(第一预设轨道,例如同一行)的多个FOV,可以对焦获取其中的两个FOV的焦面,计算出二者的焦面差,通过线性拟合获得关系(如第一预定关系),利用该关系预测该行其它FOV的焦面位置。而对于cycle2及以后的cycle,通过记忆上述cycle1或者任一前面cycle的任一FOV正常对焦的焦面,再对焦确定当前cycle的该FOV的焦面位置,就能够线性回归建立关系预测当前cycle的其他任何FOV的焦面。
利用线性回归建立关系,表示为公式(a)y=kx+b,需要确定斜率k(也可称为变化趋势k)和截距b(也可称为基础偏移量b)。基于上述规律1)可知k=1,因此公式(a)可转换为公式(b)y=x+b,可基于同一cycle同一轨道上的任意两个FOV焦面的相对位置和Z值确定b。
例如,对于cycle1,其基础偏移量b可通过总体焦面差(例如从轨道的一端到该轨道的另一端)算得。具体地,对焦获得cyc1FOVZ(r)和cyc1FOVZ(l),二者分别表示cycle1中一个轨道的一端和另一端的两个对象(可称为两个位置或两个FOV)的焦面位置Z值,可计算截距b=(cyc1FOVZ(r)–cyc1FOVZ(l))/FOVNum,FOVNum表示cyc1FOVZ(r)和cyc1FOVZ(l)两个位置之间的FOV数目。可利用公式(b)预测cycle1的cyc1FOVZ(n+1),公式(b)中的cyc1FOVZ(n)和cyc1FOVZ(n+1)表示相邻的两个位置(FOV)且cyc1FOVZ(n+1)相对更靠近cyc1FOVZ(r),cyc1FOVZ(n)可通过对焦获得。
需要说明的是,b可通过同一轨道上的两个FOV的焦面信息确定;也可以利用已确定的公式(b)以及任一已对焦的FOV的焦面坐标信息,例如利用已确定的关系(b)和已确定的cyc1FOVZ(r)和cyc1FOVZ(l)中的任一值,来确定cyc1FOVZ(n+1)。
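上述截距b的计算与公式(b)的预测过程可用如下Python片段示意(函数名与示例数值均为说明性假设,并非本发明的具体实现):

```python
def intercept_b(z_right, z_left, fov_num):
    """截距 b = (cyc1FOVZ(r) - cyc1FOVZ(l)) / FOVNum,
    即轨道两端焦面Z值之差除以两位置之间的FOV数目。"""
    return (z_right - z_left) / fov_num

def predict_next_z(z_n, b):
    """公式(b) y = x + b:由第N个FOV的焦面Z值预测第N+1个FOV的焦面Z值。"""
    return z_n + b

# 示例(假设数值):轨道两端对焦Z值相差 -3 μm,二者之间有 300 个FOV
b = intercept_b(-6003.0, -6000.0, 300)
z_next = predict_next_z(-6000.0, b)
```

由此,同一cycle同一轨道上,自任一已对焦位置起即可逐个外推后续FOV的焦面Z值。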
某一cycle的线性关系确定了之后,对于后续拍摄相同轨道/相同FOV的任一cycle,可以基于该已确定的线性关系以及当前cycle任一FOV的焦面位置,预测当前cycle任何FOV的焦面位置。例如,利用当前cycle中的FOV(n)(第N个或第N位置的FOV)的焦面位置预测相同cycle中的FOV(n+1)(第N+1个或第N+1位置的FOV)的焦面位置,我们可将第N个FOV的Z值curFOVZ(n)作为自变量代入公式(b)中,获得的y即为curFOVZ(n+1)。
另外,某一cycle的线性关系确定了之后,对于后续拍摄相同轨道/相同FOV的任一cycle,也可以基于该已确定的线性关系确定在该cycle中的两个FOV的焦面位置以及当前cycle的相同FOV中的一个的焦面位置,来预测当前cycle的相同FOV中的另一个的焦面位置。例如,在上一cycle中确定了公式(b),利用当前cycle中的FOV(n)(第N个或第N位置的FOV)的焦面位置预测相同cycle中的FOV(n+1)(第N+1个或第N+1位置的FOV)的焦面位置,我们可通过公式(b)确定上一cycle中的FOV(n)和FOV(n+1)的焦面位置,分别表示为preFOVZ(n)和preFOVZ(n+1),利用公式(c)curFOVZ(n+1)=curFOVZ(n)+(preFOVZ(n+1)–preFOVZ(n)),来确定curFOVZ(n+1)。
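公式(c)的计算可用如下片段示意(变量命名为说明性假设):

```python
def predict_cur_z(cur_z_n, pre_z_n, pre_z_n1):
    """公式(c): curFOVZ(n+1) = curFOVZ(n) + (preFOVZ(n+1) - preFOVZ(n)),
    即用上一cycle中相邻两FOV的焦面差,修正当前cycle已对焦FOV的焦面位置,
    得到当前cycle下一FOV的焦面预测值。"""
    return cur_z_n + (pre_z_n1 - pre_z_n)
```

例如,上一cycle中相邻两FOV的焦面差为-2μm时,当前cycle的下一FOV焦面即在已知焦面基础上偏移-2μm。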
需要说明的是,上述规律的发现和解释以及示意建立的关系为线性关系仅为描述或理解方便,本领域技术人员可以理解,所称的第一预设轨道可以是直线,也可以是曲线,任意一条曲线可以看成是多条线段的拟合。对此,相信通过上述对作出本发明的相关情景(包括规律的发现和关系的建立)的示例说明,对于第一预设轨道是曲线的情形,本领域技术人员能够遵循本发明的构思,将该曲线型第一预设轨道看成是一组线段,对应地,可建立出包含一组线性关系的第一预定关系,以实现无需对焦地预测该轨道上的对象的焦面位置。
在不更换硬件的情况下,可以通过本实施方式的成像方法,让相机重新回到焦面附近,再开始拍照。基于以上发现以及解释说明示意,本发明提供一种成像方法、一种成像装置、一种成像系统以及一种测序系统。
本发明实施方式的一种成像方法,利用成像系统对对象进行成像,成像系统包括镜头,对象包括位于第一预设轨道不同位置的第一对象、第二对象和第三对象,成像方法包括:使镜头和第一预设轨道依据第一预定关系相对运动,以利用成像系统、无需对焦地获得第三对象的清晰图像,第一预定关系通过第一对象的焦面位置和第二对象的焦面位置确定。
本发明实施方式的一种成像系统,对对象进行成像,成像系统包括镜头和控制装置,对象包括位于第一预设轨道不同位置的第一对象、第二对象和第三对象,控制装置用于:使镜头和第一预设轨道依据第一预定关系相对运动,以利用成像系统、无需对焦地获得第三对象的清晰图像,第一预定关系通过第一对象的焦面位置和第二对象的焦面位置确定。
上述成像方法及系统中,通过第一对象和第二对象的对焦位置来确定第一预定关系,对该第一预设轨道上的其它对象成像时,可根据该第一预定关系直接进行焦面预测,无需对焦的获取第三对象的清晰图像,尤其适于对象的数量较多且希望快速连续获取这些对象的图像的情景,该方法成像效率高,并且在成像系统自身追焦失败的情况下仍能准确确定后续对象的焦面位置,获取连续图像采集中的后续对象的图像信息,配合成像系统本身自带的追焦系统使用,能够挽救成像系统自带的追焦系统追焦失败后无法重新正常追焦的情况。
本发明实施方式的一种测序装置,包括上述实施方式的成像系统。
本发明实施方式的一种计算机可读存储介质,用于存储供计算机执行的程序,执行程序包括完成上述实施方式的方法的步骤。计算机可读存储介质可以包括:只读存储器、随机存储器、磁盘或光盘等。
本发明实施方式的一种成像系统,用于对对象进行成像,成像系统包括镜头和控制装置,对象包括位于第一预设轨道不同位置的第一对象、第二对象和第三对象,控制装置包括计算机可执行程序,执行计算机可执行程序包括完成上述实施方式的方法的步骤。
本发明实施方式的一种计算机程序产品,包含指令,当指令被计算机执行时,指令使得计算机执行上述实施方式的方法的步骤。
本发明实施方式的附加方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本发明实施方式的实践了解到。
附图说明
本发明实施方式的上述和/或附加的方面和优点从结合下面附图对实施方式的描述中将变得明显和容易理解,其中:
图1是序列测定时追焦成功所对应的Z值曲线图。
图2是序列测定时异常凸起部分FOV出现追焦失败时所对应的Z值曲线图。
图3是序列测定时追焦失败且在干扰消失后,循环拍照结束时也未能重新追焦成功的Z值曲线图。
图4是序列测定时对象的对焦数据所形成不同的对焦位置的示意图。
图5是本发明实施方式的第一预设轨道和第二预设轨道的结构示意图。
图6是序列测定时在无干扰下对象的对焦数据所形成的对焦位置的示意图。
图7是序列测定时在有干扰下重新追焦成功时对象的对焦数据所形成的对焦位置的示意图。
图8是序列测定时在有干扰下无法重新追焦时对象的对焦数据所形成的对焦位置的示意图。
图9是本发明实施方式的对焦方法的流程示意图。
图10是本发明实施方式的镜头与对象的位置关系示意图。
图11是本发明实施方式的成像系统的部分结构示意图。
图12是本发明实施方式的图像的连通域的示意图。
图13是本发明实施方式的对焦方法的另一流程示意图。
图14是本发明实施方式的对焦方法的又一流程示意图。
图15是本发明实施方式的对焦方法的再一流程示意图。
图16是本发明实施方式的对焦方法的又再一流程示意图。
具体实施方式
下面详细描述本发明的实施方式,所述实施方式的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施方式是示例性的,仅用于解释本发明,而不能理解为对本发明的限制。
本申请要求申请号为201810814359.0和201810813660.X的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
在本发明的描述中,需要理解的是,“第一”、“第二”、“第三”、“第四”和“第五”仅为方便描述,不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个所述特征。在本发明的描述中,“多个”的含义是两个或两个以上,除非另有明确具体的限定。
在本发明的描述中,除非另有明确的规定和限定,“连接”应做广义理解,例如,可以是固定连接,也可以是可拆卸连接,或一体地连接;可以是机械连接,也可以是电连接或可以相互通信;可以是直接相连,也可以通过中间媒介间接相连,可以是两个元件内部的连通或两个元件的相互作用关系。对于本领域的普通技术人员而言,可以根据具体情况理解上述术语在本发明中的具体含义。
对于术语“中心”、“厚度”、“上”、“下”、“前”、“后”等指示的方位或位置关系为基于具体实施方式或附图所示的方位或位置关系,仅是为了便于描述和简化描述,而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作。
所称的“不变”,例如涉及距离、物距和/或相对位置等的可以表现为数值、数值范围或量上的变化,可以是绝对不变,也可以是相对不变,所称的相对不变为保持在一定偏差范围或者预设的可接受范围。如无另外说明,涉及距离、物距和/或相对位置的“不变”为相对不变。
下文的公开提供了多个实现本发明技术方案的实施方式或例子。本发明可以在不同例子中重复参考数字和/或参考字母,这种重复是为了简化和清楚的目的,其本身不指示所讨论各种实施方式和/或设定之间的关系。
本发明实施方式所称的“序列测定”同核酸序列测定,包括DNA测序和/或RNA测序,包括长片段测序和/或短片段测序。所称的“序列测定反应”同测序反应。
本发明实施方式提供一种成像方法,利用成像系统对对象进行成像。请结合图5、图11及图12,成像系统包括镜头104,对象包括位于第一预设轨道43不同位置的第一对象42、第二对象44和第三对象46,成像方法包括:使镜头104和第一预设轨道43依据第一预定关系相对运动,以利用成像系统、无需对焦地获得第三对象46的图像,第一预定关系通过第一对象42的焦面位置和第二对象44的焦面位置确定。
上述成像方法中,通过第一对象42和第二对象44的对焦位置来确定第一预定关系,对该第一预设轨道上的其它对象成像时,可根据该第一预定关系直接进行焦面预测,无需对焦的获取第三对象的清晰图像,尤其适于对象的数量较多且希望快速连续获取这些对象的图像的情景,该方法成像效率高,并且在成像系统自身追焦失败的情况下仍能准确确定后续对象的焦面位置,获取连续图像采集中的后续对象的图像信息,配合成像系统本身自带的追焦系统使用,能够挽救成像系统自带的追焦系统追焦失败后无法重新正常追焦的情况。
具体地,在图5的示例中,第一预设轨道43可为线性轨道,第一对象42和第二对象44位于线性轨道的两个位置,例如,位于线性轨道的两端,可以理解,第三对象46的数量可为多个,多个第三对象46依次在第一预设轨道43排列,且第三对象46位于第一对象42和第二对象44之间。可以理解,在其它示例中,第三对象46可位于不同于第一对象42和第二对象44的位置的其它位置。在其它示例中,第一预设轨道43可为非线性轨道,例如曲线状轨道,曲线状轨道可看成是多条线段的拟合,第一对象、第二对象和第三对象位于该曲线轨道中的相同线段。
在某些实施方式中,第一预定关系可为线性关系。在一个实施例中,请结合图5,当第一预设轨道43是序列测序过程中所使用的芯片500的一个或多个通道52时,被成像的第三对象46是位于通道52内的一个或多个位置(FOV),拍照时,镜头和第一预设轨道43可沿第一方向A相对移动,例如,镜头104是固定的,镜头104包括光轴OP,第一预设轨道43沿垂直于光轴OP方向运动。可以理解,在某些实施方式中,第一预设轨道43能够沿平行于光轴OP方向运动。可根据实际调整的需求,来对第一预设轨道43进行移动。
成像系统包括相机108,镜头104可安装在相机108上,相机108采集经过镜头104的光线进行成像。
在某些实施方式中,使镜头104与第一预设轨道43相对运动,包括以下至少一种:使镜头104固定,移动第一预设轨道43;使第一预设轨道43固定,移动镜头104;同时移动镜头104和第一预设轨道43。
如此,镜头104和第一预设轨道43的移动方式多种多样,适应性强,提高了成像方法的应用范围。
具体地,移动第一预设轨道43时,可将第一预设轨道43放置在载台上,载台可带动第一预设轨道43和对象沿垂直于镜头104的光轴OP的方向来回平移,以将其中一个第三对象46置于镜头104下方,使成像系统对该第三对象46进行成像。
移动镜头104时,可将镜头104安装在驱动机构上,驱动机构可利用电动或手动的方式驱动镜头104沿垂直于镜头104光轴OP的方向来回平移,以使镜头104移动到其中一个第三对象46上方,使成像系统对该对象进行成像。
同时移动镜头104和第一预设轨道43,可以理解为,可先移动镜头104,再移动第一预设轨道43,使其中一个第三对象46位于镜头104下方;也可是先移动第一预设轨道43,再移动镜头104,使镜头104位于其中一个第三对象46上方,还可以是一边移动镜头104,一边移动第一预设轨道43,使镜头位于其中一个第三对象46上方。
在某些实施方式中,第一预定关系的确定包括:利用成像系统对第一对象42进行对焦,确定第一坐标;利用成像系统对第二对象44进行对焦,确定第二坐标;依据第一坐标和第二坐标建立第一预定关系,第一坐标反映第一对象42的焦面位置,第二坐标反映第二对象44的焦面位置。如此,可预先确定第一预定关系,在执行其它对象的成像时,根据第一预定关系,就能利用成像系统、无需对焦地获得其它对象的清晰图像,简化了成像方法及提高了成像方法的效率。
具体地,根据上述的一个实施例,请结合图5,当被成像的第三对象46是序列测定所使用的芯片500的一个或多个位置时,第一对象42、第二对象44和第三对象46可位于芯片500的同一个通道内。
较佳地,第一对象42、第三对象46和第二对象44依次在第一预设轨道43排列。根据上述的一个实施例,第一方向A是从芯片500左至右的方向,也就是说,沿芯片500左至右的方向A,第一对象42、第三对象46和第二对象44依次在第一预设轨道43排列。在其它实施例中,第一对象42、第三对象46和第二对象44还可以是其它顺序在第一预设轨道43排列。
可以理解,在确定第一预定关系时,可在第一预设轨道43上选取两个对象:第一对象42和第二对象44进行对焦,以获取这两个对象的对焦位置。具体地,由前可知,在序列测定时,在第一预设轨道43上的两个FOV之间特别是相邻FOV的相对焦面位置保持不变。因此,可通过对第一对象42和第二对象44进行对焦,获得第一对象42和第二对象44的焦面坐标数据来确定所称的第一预定关系。利用该第一预定关系,可以无需对焦的获得在第一预设轨道43上的任意第三对象。
因此,作为例子说明,第一对象42和第二对象44可以分别为一个cycle(亦即相同时间段)中的第一预设轨道的起点和终点FOV,比如同一通道的同一行的两端FOV,如图5所示。第三对象46可为第一对象42和第二对象44之间的任一个或多个FOV。可以理解,基于以上规律,第一对象42和第二对象44还可为其它位置的FOV,第三对象46也无需位于第一对象42和第二对象44之间,只需基于两点确定一条直线(第一预定关系)的规则,选择第一预设轨道上的任意两个位置(对象),获取各位置对应的焦面位置,并根据各位置的焦面位置来获得对应于该第一预设轨道43的第一预定关系,可通过第一预定关系利用成像系统以无需对焦地获得第三对象的图像。在实际应用情景中,可建立坐标系来数字化/量化相对位置关系包括所称的焦面位置,比如在利用序列测定平台进行图像信号采集时,可以以xy表示第一/第二预设轨道所在的平面以及以z表示物镜光轴方向建立三维坐标系,各位置的焦面位置包括焦面Z值。
需要说明的,提及的cycle反映的是时间因素/图像采集周期的影响。一般地,在高精度成像系统中,例如在一个60倍物镜、景深200nm的显微系统中,第一/第二预设轨道的一次或多次来回的机械运动或者说承载第一/第二预设轨道的平台的一次或多次来回的机械运动带来的波动,很可能超出了景深,所以,较佳的,在利用上述或下述任一实施例的成像方法于精度较高的多次多对象连续成像的情形,对于位于相同预设轨道上的多个对象,假使不在同一次图像采集时间周期(例如处于不同的机械运动方向),重新对焦基于对焦数据重新拟合建立第一预定关系相对更准确更佳。本领域技术人员可以理解,在精度相对低的多对象连续成像情景中,由于景深较大,可以不用考虑机械往复运动造成的焦面位置偏差,亦即对于同一预设轨道上的多个对象,在不同的图像采集周期,可以利用前面任一图像采集周期已确定的第一和/或第二预定关系来成像。
使用上述预测策略后,Z值预测效果如图6-图8所示。
其中,图6至图8中C5曲线为相机真实拍摄结果所得的Z值曲线(实际对焦位置所形成的焦面线),仅使用相机追焦进行拍摄。C6曲线为预测的Z值曲线(预测对焦位置所形成的焦面线)。
图6展示无干扰状态下一个cycle的多个FOV的Z值预测,图7和图8展示有干扰且脱焦的情况下的Z值预测,在不干预的情况下,图7脱焦后可重新追焦成功,图8脱焦后无法重新追焦。
在某些实施方式中,对象包括位于第二预设轨道45不同位置的第四对象47和第五对象48,成像方法包括:使镜头104和第二预设轨道45依据第二预定关系相对运动,以利用成像系统、无需对焦地获得第五对象48的图像,第二预定关系通过第四对象47的焦面位置和第一预定关系确定,第二预设轨道45不同于第一预设轨道43。基于第一预设轨道43的第一预定关系以及第二预设轨道45上的任一对象的焦面位置,确定对应于第二预设轨道45的第二预定关系,利用该第二预定关系,可以无需对焦的获得该第二预设轨道45上的任一对象的清晰图像,如此,这样可获取更多个对象的清晰图像,满足了用户需求。
具体地,第二预设轨道45可为与第一预设轨道43相邻的轨道,在上述的实施例中,第二预设轨道45为与第一预设轨道43相邻的平行通道,第二预设轨道45可为线性轨道,第四对象47和第五对象48位于线性轨道的两个位置,例如,第四对象47位于线性轨道的一端,第五对象48位于线性轨道的中间,可以理解,第五对象48的数量可为多个,多个第五对象48依次在第二预设轨道45排列,且第五对象48位于不同于第四对象47的位置。可以理解,在其它示例中,第二预设轨道45可为非线性轨道,例如曲线状轨道,曲线状轨道可看成是多条线段的拟合,第四对象47和第五对象48位于该曲线轨道中的相同线段。
在某些实施方式中,第二预定关系可为线性关系。
在一个实施例中,请结合图5,当第二预设轨道45是序列测序过程中所使用的芯片500的一个或多个通道52时,被成像的第五对象48是位于通道52内的一个或多个位置(FOV),拍照时,镜头和第二预设轨道可沿第二方向B相对移动,例如,镜头是固定的,镜头包括光轴,第二预设轨道45沿垂直于光轴方向运动。可以理解,在某些实施方式中,第二预设轨道45能够沿平行于光轴OP方向运动。可根据实际调整的需求,来对第二预设轨道45进行移动。
可以理解,镜头104和第二预设轨道45的相对运动的其它方式,也可参上述对镜头104和第一预设轨道43的相对运动的方式的解释说明,为避免冗余,在此不再详细展开。需要说明的是,在图5的示例中,第一预设轨道43和第二预设轨道45是芯片500上相邻的两个通道52,因此,在移动芯片500运动时,第一预设轨道43和第二预设轨道45作同步运动。
在某些实施方式中,第二预定关系的确定包括:利用成像系统对第四对象47进行对焦,确定第四坐标;依据第一预定关系和第四坐标建立第二预定关系,第四坐标反映第四对象47的焦面位置。如此,可预先确定第二预定关系,在执行其它对象的成像时,根据第二预定关系,就能利用成像系统、无需对焦地获得其它对象的清晰图像,简化了成像方法及提高了成像方法的效率。
具体地,根据上述的一个实施例,请结合图5,当被成像的第五对象48是序列测序所使用的芯片500的一个或多个位置时,第四对象47和多个第五对象48可位于芯片500的同一个通道内。
较佳地,第四对象47和多个第五对象48依次在第二预设轨道45排列。根据上述的一个实施例,第二方向B是从芯片500右至左的方向,也就是说,沿芯片500右至左的方向B,第四对象47和多个第五对象48依次在第二预设轨道45排列。在其它实施例中,第四对象47和第五对象48还可以是其它顺序在第二预设轨道45排列。
可以理解,第二预定关系的确定可参上述对第一预定关系确定的解释说明,为避免冗余,在此不再详细展开。
在某些实施方式中,成像方法包括:在获取第三对象46的图像后,使镜头104与第一预设轨道43和/或第二预设轨道45相对运动以利用成像系统、无需对焦地获取第五对象48的图像。如此,可在完成获取第一预设轨道43上的第三对象46的图像后,再获取第二预设轨道45上的第五对象48的图像,进而实现不同预设轨道的对象的成像。
具体地,在上述的实施例中,在完成第一预设轨道43上的一个或多个第三对象46的图像获取后,沿第三方向C,即垂直于通道52的延伸方向的方向,使镜头104与芯片500作相对运动,使镜头104位于第五对象48的上方,再根据第二预定关系利用成像系统、无需对焦地获取第五对象48的图像。在图示的实施方式中,第三方向C垂直于第一方向A及第二方向B。
进一步地,在图5所示的示例中,第一预设轨道43与第二预设轨道45从上而下交替间隔排列,在完成从上而下的第一个第一预设轨道43的一个或多个第三对象46的图像获取后,使镜头104与芯片500作相对运动,使镜头104位于第一个第二预设轨道45的第五对象48的上方,再获取第一个第二预设轨道45的一个或多个第五对象48的图像。之后,使镜头104与芯片500作相对运动,使镜头104位于第二个第一预设轨道43的第三对象46的上方,再获取第二个第一预设轨道43的第三对象46的图像,直至完成所有第一预设轨道43和第二预设轨道45上的对象的清晰图像获取。
综上,由于待获取图像的对象的焦面位置(如Z值)是根据第一预定关系或第二预定关系预测的,因此,对其它FOV成像时,是以无需对焦地进行,这提高了成像效率和准确率。进一步地,这样的确定方法应用在同一区域、类似区域拍照过程中,可实现多对象的快速连续的图像采集。也可以在连续摄像过程中省略对焦过程,从而实现快速的扫描摄像。进一步地,配合相机的自动追焦系统,使用焦面预测技术,可以获得更好的图片质量,且能够解决遇到干扰脱焦后相机无法重新追焦的情况。在更广泛的意义上来说,相机使用焦面预测技术能够使得相机具备一定智能,根据先验知识辅助对焦过程,能够快速实现对焦过程,甚至省略对焦过程。尤其在摄像过程中,这种智能具备更为重要的拓展应用。
在某些实施方式中,成像系统包括成像装置102和载台,成像装置102包括镜头104和对焦模组106,镜头104包括光轴OP,镜头104能够沿光轴OP方向运动,第一预设轨道43和/或第二预设轨道45位于载台上。
以下,以具体的实施例来说明本发明确定第一预定关系或第二预定关系时的对焦过程。需要指出的是,除非特别说明,不同实施例中所用到的名称相同的元件应限于所在实施例的解释说明,而不应将不同实施例中名称相同的元件作交叉理解或混淆理解。
实施例一
请参图9-图11,对焦包括以下步骤:(a)利用对焦模组发射光至对象上;(b)使镜头移动到第一设定位置;(c)使镜头从第一设定位置以第一设定步长向对象移动并判断对焦模组是否接收到对象反射的光;(d)在对焦模组接收到对象反射的光时,将镜头从当前位置移动到第二设定位置,第二设定位置位于第一范围内,第一范围是包括当前位置的、允许镜头沿光轴方向移动的一个范围;(e)使镜头从第二设定位置以第二设定步长移动,在每步位置利用成像装置获得对象的图像,第二设定步长小于第一设定步长;(f)对对象的图像进行评估,依据获得的图像评估结果,实现对焦。
利用上述成像方法,能够快速准确地找到目标物体清晰成像的平面,即清晰平面/清晰面。该方法特别适用于不易找到清晰平面的包含精密光学系统的设备,例如带有高倍数镜头的光学检测设备。如此,可降低成本。
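步骤(c)所述的粗搜索过程,即镜头以第一设定步长向对象移动、直至对焦模组接收到对象反射的光,可用如下极简的一维模拟示意(介质分界面位置、步长与判定容差均为说明性假设,并非本发明的具体实现):

```python
def search_reflection(z_start, z_interface, step, tol):
    """从第一设定位置 z_start 起,沿Z轴负方向(向对象靠近)以步长 step 移动;
    当镜头位置进入介质分界面 z_interface 附近 tol 范围内,
    视为对焦模组接收到反射光,返回此时的镜头位置。"""
    z = z_start
    while abs(z - z_interface) > tol:
        z -= step
    return z

# 示例(假设数值):自 nearlimit=-6000μm 起,步长10μm(0.01mm),容差25μm
z_found = search_reflection(-6000.0, -6100.0, 10.0, 25.0)
```

找到该位置后,即可按步骤(d)(e)换用更小的第二设定步长在其附近逐步采图并评估。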
具体地,在上述对焦步骤中,对象为所需获取焦面位置的对象,例如,若需要确定第一预定关系,可在第一预设轨道选择两个对象,并先后或同时对位于第一预设轨道43的两个对象进行对焦,获取两组焦面位置数据,其中一个作为第一对象42的焦面位置数据,另一个作为第二对象44的焦面位置数据;若需要确定第二预定关系,可在第二预设轨道选择一个对象进行对焦,获取该对象的焦面位置数据,作为第四对象47的焦面位置数据,以结合第一预定关系确定所称的第二预定关系。
请参图10和图11,在本发明实施例中,对象为在序列测定中所应用的样品300的多个位置(FOV),具体地,当确定第一预定关系时,进行对焦的对象可作为第一对象或第二对象,当确定第二预定关系时,进行对焦的对象可作为第四对象或第五对象。样品300包括承载装置200和位于承载装置的待测样品302,待测样品302为生物分子,如核酸等,镜头104位于承载装置200的上方。承载装置200具有前面板202和后面板(下面板),各面板均具有两个表面,待测样品302连接在下面板的上表面上,即待测样品302位于前面板202的下表面204下方。在本发明实施例中,由于成像装置102为采集待测样品302的图像,待测样品302也就是拍照时所对应的位置(FOV),而待测样品302位于承载装置200的前面板202下表面204下方,在对焦过程开始时,镜头104的移动是为了找到待测样品302所在的介质分界面204,以提高成像装置102的采集清晰图像的成功率。在本发明实施例中,待测样品302为溶液,承载装置200的前面板202为玻璃,承载装置200与待测样品302的介质分界面204为承载装置200的前面板202的下表面204,即玻璃与液体两种介质的分界面。成像装置102所需采集图像的待测样品302位于前面板202的下表面204之下,此时再通过成像装置102所采集的图像来判别寻找待测样品302清晰成像的清晰面,此过程可称为对焦。在一个例子中,前面板202的厚度为0.175mm。
在其它实施例中,承载装置200可为玻片,待测样品302置于玻片上,或者待测样品302夹设于两片玻片中。在另一其它实施例中,承载装置200可为反应装置,例如,上下有承载面板的类似于三明治结构的芯片,待测样品302设置于芯片上。
在本实施例中,请参图11,成像装置102包括显微镜107和相机108,镜头104包括显微镜的物镜110和相机镜头112,对焦模组106可通过二向色分束器114(dichroic beam splitter)与相机镜头112固定在一起,二向色分束器114位于相机镜头112与物镜110之间。二向色分束器114包括双C型分束器(dual c-mount splitter)。二向色分束器114可反射对焦模组106发射的光至物镜110并能够让可见光穿透并经相机镜头112进入相机108内,如图11所示。
在本发明实施例中,镜头104的移动是沿光轴OP移动。镜头104的移动可指物镜110的移动,镜头104的位置可指物镜110的位置。在其它实施例中,可选择移动镜头104的其它透镜来实现对焦。另外,显微镜107还包括位于物镜110和相机108之间的镜筒透镜111(tube lens)。
在本实施例中,载台能够带动样品300在垂直于镜头104的光轴OP(如Z轴)方向的平面移动(如XY平面),和/或能够带动样品300沿镜头104的光轴OP(如Z轴)方向移动。
在其它实施例中,载台带动样品300移动的平面非垂直于光轴OP,即样品的运动平面与XY平面夹角非0,该成像方法仍旧适用。
另外,成像装置102也能够驱动物镜110沿镜头104的光轴OP方向移动以进行对焦。在一些例子中,成像装置102利用步进马达或音圈马达等驱动件来驱动物镜110移动。
在本实施例中,在建立坐标系时,如图10所示,可将物镜110、载台和样品300的位置设置在Z轴的负轴上,第一设定位置可为Z轴的负轴上的坐标位置。可以理解,在其它实施方式中,也可根据实际情况对坐标系与相机和物镜110的关系进行调整,在此不做具体限定。
在一个例子中,成像装置102包括全内反射荧光显微镜,物镜110为60倍放大,第一设定步长S1=0.01mm。如此,第一设定步长S1较合适,因S1太大会跨过可接受的对焦范围,S1太小会增加时间开销。
在对焦模组106没有接收到对象反射的光时,则使镜头104以第一设定步长向样品300和对象继续移动。
在本实施例中,成像系统可应用于序列测定系统,或者说,序列测定系统包括成像系统。
在本实施例中,以当前位置为基准,第一范围包括相对的第一区间和第二区间,定义第二区间更靠近样品,步骤(e)包括:(i)当第二设定位置位于第二区间时,将镜头从第二设定位置向远离对象的方向移动,在每步位置利用成像装置对对象进行图像采集;或者(ii)当第二设定位置位于第一区间时,将镜头从第二设定位置向靠近对象的方向移动,在每步位置利用成像装置对对象进行图像采集。如此,可根据第二设定位置的具体位置来对镜头的移动进行控制,能够快速采集到所需的图像。
具体地,在一个例子中,可将当前位置作为原点oPos并沿镜头的光轴方向建立坐标轴Z1,第一区间为正区间,第二区间为负区间。正负区间的范围±rLen,也就是说,第一范围是[oPos+rLen,oPos-rLen]。第二设定位置位于负区间且第二设定位置为(oPos–3*r0)。r0表示第二设定步长。成像装置在(oPos–3*r0)处开始进行图像采集并向远离对象的方向移动。
需要说明的是,在上述例子中建立的坐标轴Z1与图10的Z轴重合,且第一范围位于Z轴的负区间。这样可简化成像方法的控制,例如,只需要知道Z轴的原点与原点oPos之间的位置关系,便可知道镜头在坐标轴Z1的位置与在Z轴的位置的对应关系。
在本实施例中,步骤(f)包括:比较图像评估结果与预设条件,若图像评估结果满足预设条件,保存与图像对应的镜头104的位置;若图像评估结果不满足预设条件,将镜头104移动至第三设定位置,第三设定位置位于第一范围中的不同于第二设定位置所在区间的另一区间,即启动反向拍照调焦。例如,进行步骤(e)的(i)部分的过程中,图像评估结果均不满足预设条件;将镜头104移动至第三设定位置,相当于将镜头移动到要进行步骤(e)的(ii)部分的起始位置,进而进行反向拍照调焦,即进行步骤(e)的(ii)部分过程。如此,在第一范围内搜寻图像的对焦位置,有效提高了成像方法的效率。
具体地,参照上述实施例的例子,第二设定位置位于负区间的(oPos–3*r0),镜头从第二设定位置向上移动,成像装置102在每步位置进行图像采集,若图像评价结果不满足预设条件,则将镜头104移动至位于正区间的第三设定位置,例如,第三设定位置为(oPos+3*r0),然后成像装置102从(oPos+3*r0)处开始进行图像采集并向靠近对象的方向移动,并依据获得的图像评估结果,实现对焦。在图像评估结果满足预设条件时,保存与图像对应的镜头104的当前位置作为保存位置,可使得在序列测定反应进行拍照时,成像装置102能够输出清晰的图像。
在某些实施例中,图像评估结果包括第一评估值和第二评估值,第二设定步长包括粗步长和细步长,步骤(f)包括:镜头以粗步长移动直至相应位置的图像的第一评估值不大于第一阈值,镜头104换以细步长继续移动至相应位置的图像的第二评估值为最大,并保存与第二评估值为最大时的图像对应的镜头104的位置。如此,粗步长可使镜头104快速接近对焦位置,细步长可保证镜头104能到达对焦位置。
具体地,与最大第二评估值的图像对应的镜头104的位置可作为对焦位置进行保存。在每步位置利用成像装置102进行图像采集时,对采集到的图像计算第一评估值和第二评估值。
在一个例子中,在序列测定过程中,对象上带有光学可检测标记,例如荧光标记,荧光分子在特定波长激光照射下能够被激发发出荧光,成像装置102采集到的图像包括可能与荧光分子所在位置相对应的亮斑。可以理解,当镜头104位于对焦位置时,在所采集到的图像中,与荧光分子所在位置相对应的亮斑的尺寸较小且亮度较高;当镜头104位于非对焦位置时,在所采集到的图像中,与荧光分子所在位置相对应的亮斑的尺寸较大且亮度较低。
在本实施例中,利用图像上的亮斑的大小和亮斑的强度来评估该图像。
例如,利用第一评估值来反映图像的亮斑大小;在一个示例中,第一评估值是通过统计图像上的亮斑的连通域大小而确定的,定义大于该图像的平均像素值的相连像素点(pixels connectivity)为一个连通域(连通区域,connected component)。第一评估值的确定例如可通过,计算各个亮斑的对应的连通域的大小,取亮斑的连通域大小的平均值代表该图像一个特性,作为该图像的第一评估值;又例如,可将各个亮斑对应的连通域大小按从小到大排序,取50、60、70、80或90分位点的连通域大小作为该图像的第一评估值。
在一个示例中,一个所称图像的亮斑对应的连通域大小Area=A*B,A表示以该亮斑对应的矩阵的中心为中心的所在行的连通域大小,B表示以该亮斑对应的矩阵的中心为中心的所在列的连通域大小。定义亮斑对应的矩阵是奇数行和奇数列构成的矩阵k1*k2,包含k1*k2个像素点。
在一个例子中,先将图像进行二值化处理,将图像转成数字矩阵再进行连通域大小的计算。例如,以该图像的平均像素值作为基准,不小于平均像素值的像素点记为1,小于该平均像素值的像素点标为0,如图12所示。在图12中,加粗加大的表示亮斑对应的矩阵的中心,粗线框表示3*3矩阵。标记为1的相连的像素点形成一个连通域,该亮斑对应的连通域的大小为A*B=3*6。
所称的第一阈值可以根据经验或者先验数据来设定。在一个示例中,第一评估值反映图像上亮斑的大小,发明人观察到,从靠近清晰面到远离清晰面的过程中,连通域Area先变小后变大,发明人基于多次找到清晰面的对焦过程中的Area数值大小及变化规律,确定第一阈值。在一个示例中,第一阈值设定为260。需要指出的是,第一阈值与粗步长、细步长大小设置可具有的关联:第一阈值的数值大小能够不至于走一个粗步长就跨过成像装置对对象成像时的焦面。
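上述连通域大小 Area=A*B 的统计(先按图像平均像素值二值化,再取过亮斑中心的所在行、所在列的连通长度)可用如下Python片段示意,仅为对文中描述的草稿化演绎(接口与数据形式为说明性假设):

```python
def spot_area(img, cr, cc):
    """统计一个亮斑对应的连通域大小 Area = A * B:
    以图像平均像素值二值化(不小于均值记1),
    A 为过中心 (cr, cc) 所在行中连续为1的长度,B 为所在列中连续为1的长度。
    img 为二维列表,仅作示意。"""
    mean = sum(map(sum, img)) / (len(img) * len(img[0]))
    binm = [[1 if v >= mean else 0 for v in row] for row in img]

    def run_len(seq, i):
        # 以位置 i 为中心,向两侧延伸统计连续为1的长度
        if not seq[i]:
            return 0
        left = i
        while left > 0 and seq[left - 1]:
            left -= 1
        right = i
        while right < len(seq) - 1 and seq[right + 1]:
            right += 1
        return right - left + 1

    a = run_len(binm[cr], cc)                   # 行方向连通长度 A
    b = run_len([row[cc] for row in binm], cr)  # 列方向连通长度 B
    return a * b
```

例如图12中亮斑的 Area=A*B=3*6=18;各亮斑 Area 的平均值或某一分位数即可作为该图像的第一评估值。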
在某些实施例中,第二评估值或者第三评估值是通过统计图像的亮斑的分值而确定,一个图像的亮斑的分值Score=((k1*k2-1)CV-EV)/((CV+EV)/(k1*k2)),CV表示亮斑对应的矩阵的中心像素值,EV表示亮斑对应的矩阵的非中心像素值的总和。如此,可以确定第二评估值或第三评估值。
具体地,判断出图像的亮斑后,可将图像的所有亮斑的Score值按升序排列。当亮斑的数量大于预设数量时,例如,预设数量是30,亮斑数量为50,第二评估值可取50、60、70、80或90分位数的Score值,如此,可排除掉50%、60%、70%、80%或90%的质量相对不佳的亮斑的干扰;一般地,认为中心与边缘强度/像素值差异大且汇聚的亮斑为与待检分子相对应的亮斑。待检分子可表示核酸检测时,与目标检测对象对应的核酸分子。
当亮斑数量小于预设数量时,例如亮斑数量为10小于预设数量,这样亮斑数量较少不具有统计意义,则取Score值最大的亮斑来代表该图像,即取一百分位数Score值为第三评估值。
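亮斑分值 Score 及图像代表分值(第二/第三评估值)的选取可用如下片段示意,其中80分位的取法仅为文中列举的示例之一,预设数量取30,均为说明性假设:

```python
def spot_score(mat):
    """一个亮斑的分值: Score = ((k1*k2-1)*CV - EV) / ((CV+EV)/(k1*k2)),
    CV 为亮斑对应矩阵的中心像素值,EV 为非中心像素值之和。
    mat 为 k1*k2 的二维列表(k1、k2 为奇数)。"""
    k1, k2 = len(mat), len(mat[0])
    cv = mat[k1 // 2][k2 // 2]
    ev = sum(map(sum, mat)) - cv
    n = k1 * k2
    return ((n - 1) * cv - ev) / ((cv + ev) / n)

def image_score(scores, preset=30):
    """亮斑数量大于预设数量时取80分位数的Score代表图像(第二评估值),
    否则取最大Score,即一百分位数(第三评估值)。"""
    s = sorted(scores)
    if len(s) > preset:
        return s[int(len(s) * 0.8) - 1]
    return s[-1]
```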
在本实施例中,图像评估结果包括第一评估值、第二评估值和第三评估值,图像包括多个像素;预设条件为,图像上的亮斑的数量大于预设值,相应位置的图像的第一评估值不大于第一阈值,且相应位置的图像的第二评估值在相应位置的图像的前后各N个图像的第二评估值中是最大的;或预设条件为,图像上的亮斑的数量小于预设值,相应位置的图像的第一评估值不 大于第一阈值,且相应位置的图像的第三评估值在当前图像的前后各N个图像的第三评估值中是最大的。如此,根据图像的亮斑数量采用不同的评估值进行评估,使得成像方法的对焦更准确。
具体地,在一个例子中,第一评估值可为上述实施方式中的图像的亮斑对应的连通域大小。第二评估值和第三评估值为不同示例中,依据亮斑数目具有或不具有统计意义而取的不同Score分位数,例如可分别为非一百分位数的Score值和一百分位数的Score值。
在一个例子中,进行的是单分子测序,采集的图像上的亮斑可能来自待测样品带有的一个或几个光学可检测标记分子,也可能来自于其它干扰。
在本实施例中,对亮斑进行检测,检测对应/来自于标记分子的亮斑,例如可采用k1*k2矩阵对亮斑进行检测。具体地,利用以下方法检测图像上的亮斑:
利用k1*k2矩阵对图像进行亮斑检测,包括判定矩阵的中心像素值不小于矩阵非中心任一像素值的矩阵对应一个亮斑,k1和k2均为大于1的奇数,k1*k2矩阵包含k1*k2个像素点。
方法基于荧光所产生的信号的亮度/强度与背景亮度/强度的差异,能够简单快速的检测到来自标记分子信号的信息。在某些实施例中,矩阵的中心像素值大于第一预设值,矩阵非中心任一像素值大于第二预设值。
第一预设值和第二预设值可以根据经验或者一定量的正常图像的正常亮斑的像素/强度数据来设定,所称的“正常图像”、“正常亮斑”可以是成像系统在清晰面位置获得的图像且肉眼看起来正常,如图像看起来清晰、背景较干净,各亮斑大小及亮度较均匀等。在一个实施例中,第一预设值和第二预设值与该图像的平均像素值相关。例如,设定第一预设值为该图像的平均像素值的1.4倍,第二预设值为该图像的平均像素值的1.1倍,能够排除干扰、获得来自于标记的亮斑检测结果。
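按上述描述,利用k1*k2矩阵判定一个亮斑的条件可示意如下(系数1.4与1.1为文中实施例的经验取值,矩阵与均值的数据形式为说明性假设):

```python
def is_spot(mat, img_mean):
    """判定 k1*k2 矩阵对应一个亮斑:中心像素值不小于任一非中心像素值,
    且中心像素值 > 1.4 * 图像平均像素值(第一预设值),
    非中心任一像素值 > 1.1 * 图像平均像素值(第二预设值)。"""
    k1, k2 = len(mat), len(mat[0])
    cv = mat[k1 // 2][k2 // 2]
    others = [v for r, row in enumerate(mat) for c, v in enumerate(row)
              if (r, c) != (k1 // 2, k2 // 2)]
    return (cv >= max(others)
            and cv > 1.4 * img_mean
            and min(others) > 1.1 * img_mean)
```

对整幅图像,可让该矩阵窗口在图像上滑动,满足条件的窗口即记为一个来自标记分子的亮斑。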
具体地,在一个示例中,图像是彩色图像,彩色图像的一个像素点具有三个像素值,可以将彩色图像转化为灰度图像,再进行图像检测,以降低图像检测过程的计算量和复杂度。可选择但不限于利用浮点算法、整数方法、移位方法或平均值法等将非灰度图像转换成灰度图像。当然,也可以直接检测彩色图像,上述涉及的像素值的大小比较可看成是三维值或者具有三个元素的数组的大小比较,可根据经验及需要自定义多个多维值的相对大小,例如当三维值a中的任两维数值比三维值b的对应维度的数值大,可认为三维值a大于三维值b。
在另一个示例中,图像是灰度图像,灰度图像的像素值同灰度值。所以,图像的平均像素值为图像的平均灰度值。
在一个例子中,第一阈值为260,预设数量为30,N=2。也就是说,当相应位置的图像的第一评估值不大于260且亮斑的数量大于30时,统计获得相应位置的图像的第二评估值,确定第二评估值最大的图像的位置为清晰面位置,且该位置前后均存在2个符合以下情况的位置:对应的图像的第二评估值大于零。当相应位置的图像的第一评估值不大于260且亮斑的数量小于30时,统计相应位置的图像的第三评估值,并找到第三评估值最大的图像的位置为清晰面位置,且该位置前后均有2个满足以下情况的位置:对应的图像的第三评估值大于零。
若没有找到满足上述条件的图像,则判定图像评估结果不满足预设条件。
在一个例子中,k1=k2=3,那么3*3矩阵中有9个像素,EV为非中心8个像素值的总和。
在本实施例中,若依据图像评估结果不能完成对焦,将镜头沿垂直于光轴方向移动到对象的下一个图像采集区域(FOV)进行对焦。如此,可从其它对象进行重新对焦,避免在不能对焦的当前对象一直对焦下去,节省了时间。
在本实施例中,成像方法还包括:当对焦未成功的当前对象的数量大于预设数量时,提示对焦失败。如此,可人工排除对焦失败原因,避免一直对焦下去,从而节省了时间。具体地,在这种情况下,可能是对象放置的位置不对或成像装置的故障等原因。提示对焦失败后,可人工排除对焦失败原因。在一个例子中,预设数量为3个,也就是说,当对焦未成功的当前对象的数量大于3时,提示对焦失败。提示对焦失败的方式可以是以显示图像,文字,播放声音等方式进行提示。
在本实施例中,成像方法还包括:判断镜头的位置是否超出第一范围,在镜头的位置超出第一范围时,退出对焦。如此,在镜头的位置超出第一范围时退出对焦,可避免对焦时间过长和增加功耗。
具体地,在上述实施例的例子中,第一范围为[oPos+rLen,oPos-rLen]。
在本实施例中,在镜头104移动时,判断镜头104的当前位置是否超出第四设定位置;在镜头104的当前位置超出第四设定位置时,停止移动镜头104。如此,第一设定位置与第四设定位置可限定镜头104的移动范围(第一范围),可使镜头104在无法对焦成功时停止移动,避免了资源的浪费或者设备的损坏,或可使镜头104在无法对焦成功时进行重新对焦,提高了成像方法的自动化。
例如在全内反射成像系统中,为能快速找到介质分界面,会调整设置使镜头104的移动范围在能满足实施该方案的情况下尽量小。例如,在物镜为60倍的全内反射成像装置上,按照光路特性以及经验总结,镜头104的移动范围可设置为200μm±10μm或者为[190μm,250μm]。
在本实施例中,依据已定的移动范围以及第四设定位置和第一设定位置中任一位置的设定,可确定另一设定位置。在一个例子中,设定第四设定位置为反应装置200前面板202的上表面205最低处再往下一个景深大小的位置,设定镜头104的移动范围为250μm,如此,第一设定位置即确定。在本发明示例中,下一个景深大小的位置所对应的坐标位置为沿Z轴负方向变小的位置。
具体地,在本实施例中,移动范围为Z轴的负轴上的一个区间。在一个例子中,第一设定位置为nearlimit,第四设定位置为farlimit,nearlimit和farlimit对应的坐标位置均位于Z轴的负轴上,nearlimit=-6000μm,farlimit=-6350μm。nearlimit和farlimit之间限定的移动范围的大小为350μm。因此,当镜头104的当前位置对应的坐标位置小于第四设定位置对应的坐标位置时,判断镜头104的当前位置超出第四设定位置。在图10中,farlimit的位置为反应装置200前面板202的上表面205最低处下一个景深L的位置。景深L为镜头104的景深大小。
需要指出的是,在其它实施例中,第一设定位置和/或第四设定位置所对应的坐标位置可根据实际情况作具体设定,在此不作具体限定。
在本实施例中,对焦模组106包括光源116和光传感器118,光源116用于发射光到对象上,光传感器118用于接收对象反射的光。如此,可实现对焦模组106的发光和接收光。
具体地,在本发明实施例中,光源116可为红外光源116,光传感器118可为光电二极管(photo diode),如此,成本低,检测的准确率高。光源116发射的红外光经二向色分束器的反射进入物镜110,并经物镜110投射到样品300和对象。对象可反射经物镜110投影的红外光。 在本发明实施例中,当样品300包括承载装置200和待测样品302时,接收的对象反射的光是由承载装置200的前面板的下表面204反射的光。
对象反射的红外光能否进入物镜110并被光传感器118接收到,主要取决于物镜110与对象的距离。因此,在判断对焦模组106接收到对象反射的红外光时,可判断物镜110与对象的距离处于光学成像合适范围中,能够用于成像装置102的成像。在一个例子中,距离为20-40μm。
此时,使镜头104以小于第一设定步长的第二设定步长移动,使得成像系统能够在更小的范围内寻找镜头104的最佳成像位置。
在本实施例中,请参图13,在对焦模组106接收到对象反射的光时,成像方法还包括步骤:g,使镜头104以小于第一设定步长且大于第二设定步长的第三设定步长向对象移动,并根据对焦模组106接收到的光的光强计算出第一光强参数,判断第一光强参数是否大于第一设定光强阈值;在第一光强参数大于第一设定光强阈值时,进行步骤(d)。如此,通过第一光强参数和第一设定光强阈值的比较,可排除与介质分界面反射光对比非常弱的光信号对调焦/对焦产生的干扰。
在第一光强参数不大于第一设定光强阈值时,则使镜头104以第三设定步长向对象继续移动。
在本实施例中,对焦模组106包括两个光传感器118,两个光传感器118用于接收对象反射的光,第一光强参数为两个光传感器118接收到的光的光强的平均值。如此,通过两个光传感器118接收到的光的光强的平均值来计算第一光强参数,使得排除弱的光信号更加准确。
具体地,第一光强参数可设置为SUM,即SUM=(PD1+PD2)/2,PD1和PD2分别表示两个光传感器118接收到的光的光强。在一个例子中,第一设定光强阈值nSum=40。
在一个例子中,第三设定步长S2=0.005mm。可以理解,在其它例子中,第三设定步长也可采用其它数值,在此不作具体限定。
实施例二
需要指出的是,本实施例中,成像系统的结构图可采用实施例一的成像系统的结构图,可以理解地,实施例二的对焦方法与实施例一的对焦方法或对焦逻辑不同,但所用到的成像系统的结构基本相同。
请参图10、图11和图14,对焦包括以下步骤:S11,利用对焦模组106发射光至对象上;S12,使镜头104移动到第一设定位置;S13,使镜头104从第一设定位置以第一设定步长向对象移动并判断对焦模组106是否接收到由对象反射的光;在对焦模组106接收到由对象反射的光时,S14,使镜头104以小于第一设定步长的第二设定步长移动并利用成像装置102对对象进行图像采集,并判断成像装置102所采集到的图像的锐度值是否达到设定阈值;在图像的锐度值达到设定阈值时,S15,保存镜头104的当前位置作为保存位置。
利用上述对焦方法,能够快速准确地找到目标物体清晰成像的平面,即清晰平面/清晰面。该方法特别适用于不易找到清晰平面的包含精密光学系统的设备,例如带有高倍数镜头的光学检测设备。
具体地,在上述对焦步骤中,对象为所需获取焦面位置的对象,例如,若需要确定第一预定关系,可在第一预设轨道选择两个对象,并先后或同时对位于第一预设轨道43的两个对象进行对焦,获取两组焦面位置数据,其中一个作为第一对象42的焦面位置数据,另一个作为第二对象44的焦面位置数据;若需要确定第二预定关系,可在第二预设轨道选择两个对象,可先后或同时对位于第二预设轨道45的两个对象进行对焦,获取两组焦面位置数据,其中一个作为第四对象47的焦面位置数据,另一个作为第五对象48的焦面位置数据。
请参图10,在本发明实施例中,对象为在序列测定中所应用的样品300的多个位置(FOV),具体地,当确定第一预定关系时,进行对焦的对象可作为第一对象或第二对象,当确定第二预定关系时,进行对焦的对象可作为第四对象或第五对象。样品300包括承载装置200和位于承载装置的待测样品302,待测样品302为生物分子,如核酸等,镜头104位于承载装置200的上方。承载装置200具有前面板202和后面板(下面板),各面板均具有两个表面,待测样品302连接在下面板的上表面上,即待测样品302位于前面板202的下表面204下方。在本发明实施例中,由于成像装置102为采集待测样品302的图像,待测样品302也就是拍照时所对应的位置(FOV),而待测样品302位于承载装置200的前面板202下表面204下方,在对焦过程开始时,镜头104的移动是为了找到待测样品302所在的介质分界面204,以提高成像装置102的采集清晰图像的成功率。在本发明实施例中,待测样品302为溶液,承载装置200的前面板202为玻璃,承载装置200与待测样品302的介质分界面204为承载装置200的前面板202的下表面204,即玻璃与液体两种介质的分界面。成像装置102所需采集图像的待测样品302位于在前面板202的下表面204之下,此时再通过成像装置102所采集的图像来判别寻找待测样品302清晰成像的清晰面,此过程可称为对焦。在一个例子中,前面板202的厚度为0.175mm。
在本实施例中,承载装置200可为玻片,待测样品302置于玻片上,或者待测样品302夹设于两片玻片中。在某些实施方式中,承载装置200可为反应装置,例如,上下有承载面板的类似于三明治结构的芯片,待测样品302设置于芯片上。
在本实施例中,请参图11,成像装置102包括显微镜107和相机108,镜头104包括显微镜的物镜110和相机镜头112,对焦模组106可通过二向色分束器114(dichroic beam splitter)与相机镜头112固定在一起,二向色分束器114位于相机镜头112与物镜110之间。二向色分束器114包括双C型分束器(dual c-mount splitter)。二向色分束器114可反射对焦模组106发射的光至物镜110并能够让可见光穿透并经相机镜头112进入相机108内,如图11所示。
在本发明实施例中,镜头104的移动是沿光轴OP移动。镜头104的移动可指物镜110的移动,镜头104的位置可指物镜110的位置。在其它实施例中,可选择移动镜头104的其它透镜来实现对焦。另外,显微镜107还包括位于物镜110和相机108之间的镜筒透镜111(tube lens)。
在本实施例中,载台能够带动样品300在垂直于镜头104的光轴OP(如Z轴)的平面移动(如XY平面),和/或能够带动样品300沿镜头104的光轴OP(如Z轴)移动。
在其它实施例中,载台带动样品300移动的平面非垂直于光轴OP,即样品的运动平面与XY平面夹角非0,该成像方法仍旧适用。
另外,成像装置102也能够驱动物镜110沿镜头104的光轴OP移动以进行对焦。在一些例子中,成像装置102利用步进马达或音圈马达等驱动件来驱动物镜110移动。
在本实施例中,在建立坐标系时,如图10所示,可将物镜110、载台和样品300的位置设置在Z轴的负轴上,第一设定位置可为Z轴的负轴上的坐标位置。可以理解,在其它实施方式中,也可根据实际情况对坐标系与相机和物镜110的关系进行调整,在此不做具体限定。
在一个例子中,成像装置102包括全内反射荧光显微镜,物镜110为60倍放大,第一设定步长S1=0.01mm。如此,第一设定步长S1较合适,因S1太大会跨过可接受的对焦范围,S1太小会增加时间开销。
在对焦模组106没接收到由对象反射的光时,则使镜头104以第一设定步长沿光轴OP向样品300和对象继续移动。
在本实施例中,在图像的锐度值没达到设定阈值时,则使镜头104以第二设定步长沿光轴OP继续移动。
在本实施例中,成像系统可应用于序列测定系统,或者说,序列测定系统包括成像系统。
在本实施例中,在镜头104移动时,判断镜头104的当前位置是否超出第二设定位置;在镜头104的当前位置超出第二设定位置时,停止移动镜头104或者进行对焦步骤。如此,第一设定位置与第二设定位置可限定镜头104的移动范围,可使镜头104在无法对焦成功时停止移动,避免了资源的浪费或者设备的损坏,或可使镜头104在无法对焦成功时进行重新对焦,提高了成像方法的自动化。
例如在全内反射成像系统中,为能快速找到介质分界面,会调整设置使镜头104的移动范围在能满足实施该方案的情况下尽量小。例如,在物镜为60倍的全内反射成像装置上,按照光路特性以及经验总结,镜头104的移动范围可设置为200μm±10μm或者为[190μm,250μm]。
在本实施例中,依据已定的移动范围以及第二设定位置和第一设定位置中任一位置的设定,可确定另一设定位置。在一个例子中,设定第二设定位置为反应装置200前面板202的上表面205最低处再往下一个景深大小的位置,设定镜头104的移动范围为250μm,如此,第一设定位置即确定。在本发明示例中,下一个景深大小的位置所对应的坐标位置为沿Z轴负方向变小的位置。
具体地,在本发明实施例中,移动范围为Z轴的负轴上的一个区间。在一个例子中,第一设定位置为nearlimit,第二设定位置为farlimit,nearlimit和farlimit对应的坐标位置均位于Z轴的负轴上,nearlimit=-6000μm,farlimit=-6350μm。nearlimit和farlimit之间限定的移动范围的大小为350μm。因此,当镜头104的当前位置对应的坐标位置小于第二设定位置对应的坐标位置时,判断镜头104的当前位置超出第二设定位置。在图10中,farlimit的位置为反应装置200前面板202的上表面205最低处下一个景深L的位置。景深L为镜头104的景深大小。
需要指出的是,在其它实施方式中,第一设定位置和/或第二设定位置所对应的坐标位置可根据实际情况作具体设定,在此不作具体限定。
在本实施例中,对焦模组106包括光源116和光传感器118,光源116用于发射光到对象上,光传感器118用于接收由对象反射的光。如此,可实现对焦模组106的发光和接收光。
具体地,在本发明实施例中,光源116可为红外光源116,光传感器118可为光电二极管(photo diode),如此,成本低,检测的准确率高。光源116发射的红外光经二向色分束器的反射进入物镜110,并经物镜110投射到样品300和对象。对象可反射经物镜110投影的红外光。在本发明实施例中,当样品300包括承载装置200和待测样品302时,接收的对象反射的光是由承载装置200的前面板的下表面204反射的光。
对象反射的红外光能否进入物镜110并被光传感器118接收到,主要取决于物镜110与对象的距离。因此,在判断对焦模组106接收到对象反射的红外光时,可判断物镜110与对象的距离处于光学成像合适范围中,能够用于成像装置102的成像。在一个例子中,距离为20-40μm。
此时,使镜头104以小于第一设定步长的第二设定步长移动,使得成像系统能够在更小的范围内寻找镜头104的最佳成像位置。
在本实施例中,图像的锐度值可作为图像对焦的评价值(evaluation value)。在一个例子中,判断成像装置102采集的图像的锐度值是否达到设定阈值可通过图像处理的爬山算法。通过计算物镜110在每个位置时成像装置102所输出的图像的锐度值来判断锐度值是否达到锐度值波峰处的最大值,进而判断镜头104是否到达成像装置102成像时的清晰面所在的位置。可以理解,在其它实施方式中,也可利用其它图像处理的算法来判断锐度值是否达到波峰处的最大值。
在图像的锐度值达到设定阈值时,保存镜头104的当前位置作为保存位置,可使得在序列测定反应进行拍照时,成像装置102能够输出清晰的图像。
在本实施例中,请参图15,在对焦模组106接收到由对象反射的光时,对焦还包括步骤:S16,使镜头104以小于第一设定步长且大于第二设定步长的第三设定步长向对象移动,并根据对焦模组106接收到的光的光强计算出第一光强参数,判断第一光强参数是否大于第一设定光强阈值;在第一光强参数大于第一设定光强阈值时,进行步骤S14。如此,通过第一光强参数和第一设定光强阈值的比较,可排除与介质分界面反射光对比非常弱的光信号对调焦/对焦产生的干扰。
在第一光强参数不大于第一设定光强阈值时,则使镜头104以第三设定步长沿光轴OP向对象继续移动。
在本实施例中,对焦模组106包括两个光传感器118,两个光传感器118用于接收由对象反射的光,第一光强参数为两个光传感器118接收到的光的光强的平均值。如此,通过两个光传感器118接收到的光的光强的平均值来计算第一光强参数,使得排除弱的光信号更加准确。
具体地,第一光强参数可设置为SUM,即SUM=(PD1+PD2)/2,PD1和PD2分别表示两个光传感器118接收到的光的光强。在一个例子中,第一设定光强阈值nSum=40。
在一个例子中,第三设定步长S2=0.005mm。可以理解,在其它例子中,第三设定步长也可采用其它数值,在此不作具体限定。
在另一实施例中,请参图16,在对焦模组106接收到由对象反射的光时,方法还包括以下步骤:S16,使镜头104以小于第一设定步长且大于第二设定步长的第三设定步长向对象移动,并根据对焦模组106接收到的光的光强计算出第一光强参数,判断第一光强参数是否大于第一设定光强阈值;在第一光强参数大于第一设定光强阈值时,S17,使镜头104以小于第三设定步长且大于第二设定步长的第四设定步长向对象移动,并根据对焦模组106接收到的光的光强计算出第二光强参数,判断第二光强参数是否小于第二设定光强阈值;在第二光强参数小于第二设定光强阈值时,进行步骤S14。如此,通过第一光强参数和第一设定光强阈值的比较,可排除与介质分界面反射光对比非常弱的光信号对调焦/对焦产生的干扰;及通过第二光强参数和第二设定光强阈值的比较,可排除非介质分界面位置的强反射光信号,比如物镜110油面/空气反射的光信号对调焦/对焦产生的干扰。
在第一光强参数不大于第一设定光强阈值时,则使镜头104以第三设定步长沿光轴OP向对象继续移动。
在第二光强参数不小于第二设定光强阈值时,则使镜头104以第四设定步长沿光轴OP向对象继续移动。
在一个例子中,第三设定步长S2=0.005mm,第四设定步长S3=0.002mm。可以理解,在其它例子中,第三设定步长和第四设定步长也可采用其它数值,在此不作具体限定。
在本实施例中,对焦模组106包括两个光传感器118,两个光传感器118用于接收由对象反射的光,第一光强参数为两个光传感器118接收到的光的光强的平均值,两个光传感器118接收到的光的光强具有第一差值,第二光强参数为第一差值与设定补偿值的差值。如此,通过两个光传感器118接收到的光的光强来计算第二光强参数,使得排除强反射的光信号更加准确。
具体地,第一光强参数可设置为SUM,即SUM=(PD1+PD2)/2,PD1和PD2分别表示两个光传感器118接收到的光的光强。在一个例子中,第一设定光强阈值nSum=40。差值可设置为err,设定补偿值为offset,即err=(PD1-PD2)-offset。在理想状态下,第一差值可为零。在一个例子中,第二设定光强阈值nErr=10,offset=30。
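两个光强参数及其阈值判断可用如下片段示意(nSum=40、nErr=10、offset=30为文中示例取值,函数接口为说明性假设):

```python
def intensity_params(pd1, pd2, offset=30):
    """第一光强参数 SUM = (PD1 + PD2) / 2;
    第二光强参数 err = (PD1 - PD2) - offset,offset 为设定补偿值。"""
    return (pd1 + pd2) / 2, (pd1 - pd2) - offset

def near_interface(pd1, pd2, n_sum=40, n_err=10, offset=30):
    """SUM 大于 nSum 用于排除过弱的光信号;
    err 小于 nErr 用于排除非介质分界面位置的强反射光信号。"""
    s, e = intensity_params(pd1, pd2, offset)
    return s > n_sum and e < n_err
```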
在本实施例中,在使镜头104以第二设定步长移动时,判断镜头104的当前位置所对应的图像的第一锐度值是否大于镜头104的前一位置所对应的图像的第二锐度值;在第一锐度值大于第二锐度值且第一锐度值和第二锐度值之间的锐度差值大于设定差值时,使镜头104以第二设定步长继续向对象移动;在第一锐度值大于第二锐度值且第一锐度值和第二锐度值之间的锐度差值小于设定差值时,使镜头104以小于第二设定步长的第五设定步长继续向对象移动以使成像装置102所采集到的图像的锐度值达到设定阈值;在第二锐度值大于第一锐度值且第二锐度值和第一锐度值之间的锐度差值大于设定差值时,使镜头104以第二设定步长远离对象移动;在第二锐度值大于第一锐度值且第二锐度值和第一锐度值之间的锐度差值小于设定差值时,使镜头104以第五设定步长远离对象移动以使成像装置102所采集到的图像的锐度值达到设定阈值。如此,能够较准确地找到锐度值波峰处所对应的镜头104的位置,使成像装置所输出的图像清晰。
具体地,第二设定步长可作为粗调步长Z1,第五设定步长可作为细调步长Z2,并可设置粗调范围Z3。粗调范围Z3的设置可使图像的锐度值无法到达设定阈值时,能够停止镜头104的移动,节约了资源。
以镜头104的当前位置作为起点T,粗调范围Z3为调整范围,即在Z轴上的调整范围为(T,T+Z3)。先以步长Z1在(T,T+Z3)范围内使镜头104沿第一方向(如沿光轴OP向对象靠近的方向)移动,并比较在镜头104的当前位置时成像装置102所采集到的图像的第一锐度值R1与镜头104在前一位置时成像装置102所采集到的图像的第二锐度值R2。
当R1>R2且R1-R2>R0时,即说明图像的锐度值向设定阈值靠近且离设定阈值较远,使镜头104继续以步长Z1沿第一方向移动,以快速地向设定阈值靠近。
当R1>R2且R1-R2<R0时,即说明图像的锐度值向设定阈值靠近且离设定阈值较近,使镜头104以步长Z2沿第一方向移动,以较小的步长向设定阈值靠近。
当R2>R1且R2-R1>R0时,即说明图像的锐度值已跨过设定阈值且离设定阈值较远,使镜头104以步长Z1沿与第一方向相反的第二方向(如沿光轴OP远离对象的方向)移动,以快速地向设定阈值靠近。
当R2>R1且R2-R1<R0时,即说明图像的锐度值已跨过设定阈值且离设定阈值较近,使镜头104以步长Z2沿与第一方向相反的第二方向移动,以较小的步长向设定阈值靠近。
在本实施例中,在镜头104移动时,第五设定步长可进行调整以适应向设定阈值靠近时的步长不宜太大或太小。
在一个例子中,T=0,Z1=100,Z2=40,Z3=2100,调整范围为(0,2100)。需要说明的是,上述数值是在成像装置102进行图像采集过程中移动镜头104时所用的度量值,该度量值与光强相关。设定阈值可以理解为,对焦曲线的峰值或以峰值为中心的一个范围,或包含峰值的一个范围。
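上述依据锐度差值在粗调步长Z1与细调步长Z2之间切换并决定移动方向的爬山逻辑,可用如下片段示意(返回带符号步长的约定以及测试中的设定差值R0均为说明性假设):

```python
def next_move(r1, r2, coarse, fine, r0):
    """r1 为当前位置图像锐度,r2 为前一位置图像锐度,r0 为设定差值。
    返回带符号步长:正为继续向对象移动,负为远离对象移动。"""
    if r1 > r2:
        # 锐度仍在上升:差值大于 r0 用粗步长快速逼近,否则换细步长
        return coarse if r1 - r2 > r0 else fine
    # 锐度下降,说明已跨过峰值:反向移动,同样按差值选粗/细步长
    return -coarse if r2 - r1 > r0 else -fine
```

镜头按返回的步长反复移动并重新评估锐度,直至锐度值到达设定阈值(对焦曲线峰值附近)。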
请参图5,本发明实施方式的一种成像系统(图未显示),用于对对象进行成像,成像系统包括镜头104和控制装置,对象包括位于第一预设轨道43不同位置的第一对象42、第二对象44和第三对象46,控制装置包括计算机可执行程序,执行计算机可执行程序包括上述任一实施方式的成像方法的步骤。
上述成像系统中,通过第一对象42和第二对象44的对焦位置来确定第一预定关系,对该第一预设轨道上的其它对象成像时,可根据该第一预定关系直接进行焦面预测,无需对焦的获取第三对象的清晰图像,尤其适于对象的数量较多且希望快速连续获取这些对象的图像的情景,该成像系统的成像效率高,并且在成像系统自身追焦失败的情况下仍能准确确定后续对象的焦面位置,获取连续图像采集中的后续对象的图像信息,配合成像系统本身自带的追焦系统使用,能够挽救成像系统自带的追焦系统追焦失败后无法重新正常追焦的情况。
需要说明的是,上述任一实施方式和实施例中的对成像方法的技术特征和有益效果的解释和说明也适用于本实施方式的成像系统,为避免冗余,在此不再详细展开。
在某些实施方式中,第三对象46位于第一对象42和第二对象44之间。
在某些实施方式中,镜头104是固定的,镜头104包括光轴OP,第一预设轨道43能够沿垂直于或平行于光轴OP方向运动。
在某些实施方式中,第一预定关系的确定包括:
利用成像系统对第一对象42进行对焦,确定第一坐标;
利用成像系统对第二对象44进行对焦,确定第二坐标;
依据第一坐标和第二坐标建立第一预定关系,第一坐标反映第一对象42的焦面位置,第二坐标反映第二对象44的焦面位置。
在某些实施方式中,第一预设轨道43为线性或非线性轨道;和/或第一预定关系为线性关系。
在某些实施方式中,对象包括位于第二预设轨道45不同位置的第四对象47和第五对象48,控制装置用于:
使镜头104和第二预设轨道45依据第二预定关系相对运动,以利用成像系统、无需对焦地获得第五对象48的图像,第二预定关系通过第四对象47的焦面位置和第一预定关系确定,第二预设轨道45不同于第一预设轨道43。
在某些实施方式中,镜头104是固定的,镜头104包括光轴OP,第二预设轨道45能够沿垂直于或平行于光轴OP方向运动。
在某些实施方式中,第二预定关系的确定包括:
利用成像系统对第四对象47进行对焦,确定第四坐标;
依据第一预定关系和第四坐标建立第二预定关系,第四坐标反映第四对象47的焦面位置。
在某些实施方式中,控制装置用于:在获取第三对象46的图像后,使镜头104与第一预设轨道43和/或第二预设轨道45相对运动以利用成像系统、无需对焦地获取第五对象48的图像。
在某些实施方式中,成像系统包括成像装置102和载台,成像装置102包括镜头104和对焦模组106,镜头104包括光轴OP,镜头104能够沿光轴OP方向运动,第一预设轨道43和/或第二预设轨道45位于载台103上。
在某些实施方式中,控制装置用于执行以下步骤:
(a)利用对焦模组106发射光至对象上;
(b)使镜头104移动到第一设定位置;
(c)使镜头104从第一设定位置以第一设定步长向对象移动并判断对焦模组106是否接收到对象反射的光;
(d)在对焦模组106接收到对象反射的光时,将镜头104从当前位置移动到第二设定位置,第二设定位置位于第一范围内,第一范围是包括当前位置的、允许镜头104沿光轴OP方向移动的一个范围;
(e)使镜头104从第二设定位置以第二设定步长移动,在每步位置利用成像装置102获得对象的图像,第二设定步长小于第一设定步长;
(f)对对象的图像进行评估,依据获得的图像评估结果,实现对焦。
在某些实施方式中,以当前位置为基准,第一范围包括相对的第一区间和第二区间,定义第二区间更靠近对象,步骤(e)包括:
(i)当第二设定位置位于第二区间时,将镜头104从第二设定位置向远离对象的方向移动,在每步位置利用成像装置102获得对象的图像;或者
(ii)当第二设定位置位于第一区间时,将镜头104从第二设定位置向靠近对象的方向移动,在每步位置利用成像装置102获得对象的图像。
在某些实施方式中,步骤(f)包括:比较图像评估结果与预设条件,若图像评估结果满足预设条件,保存与图像对应的镜头104的位置;
若图像评估结果不满足预设条件,将镜头104移动至第三设定位置,第三设定位置位于第一范围中的不同于第二设定位置所在区间的另一区间。
在某些实施方式中,图像评估结果包括第一评估值和第二评估值,第二设定步长包括粗步长和细步长,步骤(f)包括:镜头104以粗步长移动直至相应位置的图像的第一评估值不大于第一阈值,镜头104换以细步长继续移动至相应位置的图像的第二评估值为最大,并保存与第二评估值为最大时的图像对应的镜头104的位置。
在某些实施方式中,图像评估结果包括第一评估值、第二评估值和第三评估值,图像包括多个像素;
预设条件为,图像上的亮斑的数量大于预设值,相应位置的图像的第一评估值不大于第一阈值,且相应位置的图像的第二评估值在相应位置的图像的前后各N个图像的第二评估值中是最大的;或
预设条件为,图像上的亮斑的数量小于预设值,相应位置的图像的第一评估值不大于第一阈值,且相应位置的图像的第三评估值在当前图像的前后各N个图像的第三评估值中是最大的。
在某些实施方式中,成像系统包括亮斑检测模块,亮斑检测模块用于:
利用k1*k2矩阵对图像进行亮斑检测,包括判定矩阵的中心像素值不小于矩阵非中心任一像素值的矩阵对应一个亮斑,k1和k2均为大于1的奇数,k1*k2矩阵包含k1*k2个像素点。
在某些实施方式中,对应一个亮斑的矩阵的中心像素值大于第一预设值,矩阵非中心任一像素值大于第二预设值,第一预设值和第二预设值与图像的平均像素值相关。
在某些实施方式中,第一评估值是通过统计图像的亮斑对应的连通域大小而确定的,一个图像的亮斑对应的连通域大小Area=A*B,A表示以亮斑对应的矩阵的中心为中心的所在行的连通域大小,B表示以亮斑对应的矩阵的中心为中心的所在列的连通域大小,定义大于图像的平均像素值的相连像素点为一个连通域。
在某些实施方式中,第二评估值和/或第三评估值通过统计图像的亮斑的分值而确定,一个图像的亮斑的分值Score=((k1*k2-1)CV-EV)/((CV+EV)/(k1*k2)),CV表示亮斑对应的矩阵的中心像素值,EV表示亮斑对应的矩阵的非中心像素值的总和。
在某些实施方式中,对焦模组106包括光源116和光传感器118,光源116用于发射光到对象上,光传感器118用于接收对象反射的光。
在某些实施方式中,在对焦模组106接收到对象反射的光时,控制装置还用于:
使镜头104以小于第一设定步长且大于第二设定步长的第三设定步长向对象移动,并根据对焦模组106接收到的光的光强计算出第一光强参数,判断第一光强参数是否大于第一设定光强阈值;
在第一光强参数大于第一设定光强阈值时,将镜头104从当前位置移动到第二设定位置。
在某些实施方式中,对焦模组106包括两个光传感器118,两个光传感器118用于接收对象反射的光,第一光强参数为两个光传感器118接收到的光的光强的平均值。
在某些实施方式中,在镜头104移动时,控制装置用于:判断镜头104的当前位置是否超出第四设定位置;
在镜头104的当前位置超出第四设定位置时,停止移动镜头104。
在某些实施方式中,控制装置用于:
利用对焦模组106发射光至对象上;
使镜头104移动到第一设定位置;
使镜头104从第一设定位置以第一设定步长向对象移动并判断对焦模组106是否接收到由对象反射的光;
在对焦模组106接收到由对象反射的光时,使镜头104以小于第一设定步长的第二设定步长移动并利用成像装置102对对象进行图像采集,并判断成像装置102所采集到的图像的锐度值是否达到设定阈值;
在图像的锐度值达到设定阈值时,保存镜头104的当前位置作为保存位置。
在某些实施方式中,对焦模组106包括光源116和光传感器118,光源116用于发射光到对象上,光传感器118用于接收由对象反射的光。
In some embodiments, when the focusing module 106 receives light reflected by the object, the control device is configured to:
move the lens 104 toward the object in a third set step smaller than the first set step and larger than the second set step, compute a first light intensity parameter from the intensity of the light received by the focusing module 106, and determine whether the first light intensity parameter is greater than a first set intensity threshold;
when the first light intensity parameter is greater than the first set intensity threshold, perform the step of moving the lens 104 in the second set step while capturing images of the object with the imaging device 102 and determining whether the sharpness value of an image captured by the imaging device 102 reaches the set threshold.
In some embodiments, the focusing module 106 comprises two light sensors 118 configured to receive light reflected by the object, and the first light intensity parameter is the average of the light intensities received by the two light sensors 118.
In some embodiments, when the focusing module 106 receives light reflected by the object, the control device is configured to:
move the lens 104 toward the object in a third set step smaller than the first set step and larger than the second set step, compute a first light intensity parameter from the intensity of the light received by the focusing module 106, and determine whether the first light intensity parameter is greater than a first set intensity threshold;
when the first light intensity parameter is greater than the first set intensity threshold, move the lens 104 toward the object in a fourth set step smaller than the third set step and larger than the second set step, compute a second light intensity parameter from the intensity of the light received by the focusing module 106, and determine whether the second light intensity parameter is less than a second set intensity threshold;
when the second light intensity parameter is less than the second set intensity threshold, perform the step of moving the lens 104 in the second set step while capturing images of the object with the imaging device 102 and determining whether the sharpness value of an image captured by the imaging device 102 reaches the set threshold.
In some embodiments, the focusing module 106 comprises two light sensors 118 configured to receive light reflected by the object; the first light intensity parameter is the average of the light intensities received by the two light sensors 118; the light intensities received by the two light sensors 118 have a first difference; and the second light intensity parameter is the difference between the first difference and a set compensation value.
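The two light-intensity parameters above reduce to simple arithmetic over the two sensor readings; a minimal sketch, with illustrative function names and sample readings:

```python
# Illustrative sketch: the first parameter is the mean of the two sensor
# intensities; the second is the sensors' difference minus a set
# compensation value.

def first_intensity_param(i1, i2):
    return (i1 + i2) / 2

def second_intensity_param(i1, i2, compensation):
    return (i1 - i2) - compensation

i1, i2 = 0.82, 0.78
print(first_intensity_param(i1, i2))         # ~0.8
print(second_intensity_param(i1, i2, 0.01))  # ~0.03
```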
In some embodiments, while moving the lens 104 in the second set step, the control device is configured to: determine whether a first sharpness value of the image corresponding to the current position of the lens 104 is greater than a second sharpness value of the image corresponding to the previous position of the lens 104;
when the first sharpness value is greater than the second sharpness value and the sharpness difference between them is greater than a set difference, continue moving the lens 104 toward the object in the second set step;
when the first sharpness value is greater than the second sharpness value and the sharpness difference between them is less than the set difference, continue moving the lens 104 toward the object in a fifth set step smaller than the second set step so that the sharpness value of an image captured by the imaging device 102 reaches the set threshold;
when the second sharpness value is greater than the first sharpness value and the sharpness difference between them is greater than the set difference, move the lens 104 away from the object in the second set step;
when the second sharpness value is greater than the first sharpness value and the sharpness difference between them is less than the set difference, move the lens 104 away from the object in the fifth set step so that the sharpness value of an image captured by the imaging device 102 reaches the set threshold.
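The four-branch stepping rule above can be condensed into one decision function; a sketch with illustrative names, where a positive return value means "move toward the object":

```python
# Illustrative sketch of the stepping decision: compare the sharpness at the
# current and previous lens positions, and pick the direction and step size
# (second or fifth set step) from the sign and magnitude of the difference.

def next_move(s_curr, s_prev, set_diff, step2, step5):
    """Return the signed step to apply next: positive toward the object,
    negative away from it."""
    delta = s_curr - s_prev
    if delta > 0:      # getting sharper: keep approaching
        return step2 if delta > set_diff else step5
    else:              # getting blurrier: back off
        return -step2 if -delta > set_diff else -step5

# Far from focus the sharpness still rises quickly: keep the coarser step.
assert next_move(0.9, 0.5, set_diff=0.1, step2=4, step5=1) == 4
# Near focus the rise flattens: switch to the finer fifth step.
assert next_move(0.56, 0.5, set_diff=0.1, step2=4, step5=1) == 1
# Sharpness dropped sharply: reverse with the second set step.
assert next_move(0.3, 0.5, set_diff=0.1, step2=4, step5=1) == -4
```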
In some embodiments, while the lens 104 is moving, the control device is configured to: determine whether the current position of the lens 104 exceeds the second set position;
when the current position of the lens 104 exceeds the second set position, stop moving the lens 104 or perform the focusing step.
A computer-readable storage medium according to an embodiment of the invention stores a program for execution by a computer, and executing the program comprises completing the steps of the imaging method of any of the above embodiments. The computer-readable storage medium may include a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like.
A computer program product according to an embodiment of the invention contains instructions which, when executed by a computer, cause the computer to perform the steps of the imaging method of any of the above embodiments.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an illustrative embodiment", "an example", "a specific example", or "some examples" means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the invention. In this specification, illustrative use of these terms does not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Furthermore, the functional units in the embodiments of the invention may be integrated in one processing module, may each exist physically as a separate unit, or two or more units may be integrated in one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as an independent product, the integrated module may also be stored in a computer-readable storage medium.
Although embodiments of the invention have been shown and described above, it will be understood that they are exemplary and are not to be construed as limiting the invention; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the invention.

Claims (66)

  1. An imaging method, characterized in that an object is imaged with an imaging system, the imaging system comprising a lens, the object comprising a first object, a second object and a third object located at different positions on a first preset track, the imaging method comprising:
    moving the lens and the first preset track relative to each other according to a first predetermined relationship so as to obtain, with the imaging system and without focusing, a clear image of the third object, the first predetermined relationship being determined from the focal plane position of the first object and the focal plane position of the second object.
  2. The method of claim 1, characterized in that the third object is located between the first object and the second object.
  3. The method of claim 1, characterized in that the lens is fixed, the lens comprises an optical axis, and the first preset track is movable in a direction perpendicular or parallel to the optical axis.
  4. The method of claim 1, characterized in that determining the first predetermined relationship comprises:
    focusing the imaging system on the first object to determine a first coordinate;
    focusing the imaging system on the second object to determine a second coordinate;
    establishing the first predetermined relationship from the first coordinate and the second coordinate, the first coordinate reflecting the focal plane position of the first object and the second coordinate reflecting the focal plane position of the second object.
  5. The method of claim 1, characterized in that the first preset track is a linear or nonlinear track; and/or the first predetermined relationship is a linear relationship.
  6. The method of claim 1, characterized in that the object comprises a fourth object and a fifth object located at different positions on a second preset track, and the imaging method comprises:
    moving the lens and the second preset track relative to each other according to a second predetermined relationship so as to obtain, with the imaging system and without focusing, a clear image of the fifth object, the second predetermined relationship being determined from the focal plane position of the fourth object and the first predetermined relationship, the second preset track being different from the first preset track.
  7. The method of claim 6, characterized in that the lens is fixed, the lens comprises an optical axis, and the second preset track is movable in a direction perpendicular or parallel to the optical axis.
  8. The method of claim 6, characterized in that determining the second predetermined relationship comprises:
    focusing the imaging system on the fourth object to determine a fourth coordinate;
    establishing the second predetermined relationship from the first predetermined relationship and the fourth coordinate, the fourth coordinate reflecting the focal plane position of the fourth object.
  9. The method of claim 6, characterized in that the imaging method comprises: after the clear image of the third object is obtained, moving the lens relative to the first preset track and/or the second preset track so as to obtain, with the imaging system and without focusing, a clear image of the fifth object.
  10. The method of claim 4 or 8, characterized in that the imaging system comprises an imaging device and a stage, the imaging device comprises the lens and a focusing module, the lens comprises an optical axis, the lens is movable along the optical axis to perform the focusing, and the first preset track and/or the second preset track is located on the stage.
  11. The method of claim 10, characterized in that the focusing comprises the following steps:
    (a) emitting light onto the object with the focusing module;
    (b) moving the lens to a first set position;
    (c) moving the lens from the first set position toward the object in a first set step and determining whether the focusing module receives light reflected by the object;
    (d) when the focusing module receives light reflected by the object, moving the lens from its current position to a second set position, the second set position lying within a first range, the first range being a range, containing the current position, over which the lens is allowed to move along the optical axis;
    (e) moving the lens from the second set position in a second set step, acquiring an image of the object with the imaging device at each step position, the second set step being smaller than the first set step;
    (f) evaluating the images of the object, and achieving focus according to the resulting image evaluation.
  12. The method of claim 10, characterized in that, taking the current position as reference, the first range comprises opposite first and second intervals, the second interval being defined as the one closer to the object, and step (e) comprises:
    (i) when the second set position lies in the second interval, moving the lens from the second set position away from the object, acquiring an image of the object with the imaging device at each step position; or
    (ii) when the second set position lies in the first interval, moving the lens from the second set position toward the object, acquiring an image of the object with the imaging device at each step position.
  13. The method of claim 12, characterized in that step (f) comprises: comparing the image evaluation result against a preset condition; if the image evaluation result satisfies the preset condition, saving the lens position corresponding to the image;
    if the image evaluation result does not satisfy the preset condition, moving the lens to a third set position, the third set position lying in the interval of the first range other than the interval containing the second set position.
  14. The method of claim 13, characterized in that the image evaluation result comprises a first evaluation value and a second evaluation value, the second set step comprises a coarse step and a fine step, and step (f) comprises: moving the lens in the coarse step until the first evaluation value of the image at the corresponding position is no greater than a first threshold, then switching the lens to the fine step and continuing to move until the second evaluation value of the image at the corresponding position is maximal, and saving the lens position corresponding to the image with the maximal second evaluation value.
  15. The method of claim 13, characterized in that the image evaluation result comprises a first evaluation value, a second evaluation value and a third evaluation value, and the image comprises a plurality of pixels;
    the preset condition being that the number of bright spots in the image is greater than a preset value, the first evaluation value of the image at the corresponding position is no greater than a first threshold, and the second evaluation value of the image at the corresponding position is the largest among the second evaluation values of the N images before and the N images after it; or
    the preset condition being that the number of bright spots in the image is less than the preset value, the first evaluation value of the image at the corresponding position is no greater than the first threshold, and the third evaluation value of the image at the corresponding position is the largest among the third evaluation values of the N images before and the N images after it.
  16. The method of claim 15, characterized in that the bright spots in the image are detected by:
    detecting bright spots in the image using a k1*k2 matrix, including determining that a matrix whose center pixel value is not less than any non-center pixel value of the matrix corresponds to one bright spot, k1 and k2 each being an odd number greater than 1, the k1*k2 matrix containing k1*k2 pixels.
  17. The method of claim 16, characterized in that the center pixel value of the matrix corresponding to a bright spot is greater than a first preset value and any non-center pixel value of the matrix is greater than a second preset value, the first preset value and the second preset value being related to the average pixel value of the image.
  18. The method of claim 17, characterized in that the first evaluation value is determined by measuring the sizes of the connected domains corresponding to the bright spots of the image, the connected-domain size of one bright spot of an image being Area = A*B, where A is the connected-domain size along the row centered on the center of the matrix corresponding to the bright spot and B is the connected-domain size along the column centered on that center, connected pixels whose values exceed the average pixel value of the image being defined as one connected domain.
  19. The method of any one of claims 16-18, characterized in that the second evaluation value and/or the third evaluation value is determined by computing scores of the bright spots of the image, the score of one bright spot of an image being Score = ((k1*k2-1)*CV - EV) / ((CV+EV)/(k1*k2)), where CV is the center pixel value of the matrix corresponding to the bright spot and EV is the sum of the non-center pixel values of that matrix.
  20. The method of claim 11, characterized in that the focusing module comprises a light source and a light sensor, the light source being configured to emit the light onto the object and the light sensor to receive the light reflected by the object.
  21. The method of claim 11, characterized in that, when the focusing module receives light reflected by the object, the focusing further comprises the steps of:
    moving the lens toward the object in a third set step that is smaller than the first set step and larger than the second set step, computing a first light intensity parameter from the intensity of the light received by the focusing module, and determining whether the first light intensity parameter is greater than a first set intensity threshold;
    when the first light intensity parameter is greater than the first set intensity threshold, moving the lens from its current position to the second set position.
  22. The method of claim 21, characterized in that the focusing module comprises two light sensors configured to receive the light reflected by the object, the first light intensity parameter being the average of the light intensities received by the two light sensors.
  23. The method of any one of claims 11-22, characterized in that, while the lens is moving, it is determined whether the current position of the lens exceeds a fourth set position;
    when the current position of the lens exceeds the fourth set position, the lens is stopped.
  24. The method of claim 10, characterized in that the focusing comprises the following steps:
    emitting light onto the object with the focusing module;
    moving the lens to a first set position;
    moving the lens from the first set position toward the object in a first set step and determining whether the focusing module receives light reflected by the object;
    when the focusing module receives light reflected by the object, moving the lens in a second set step smaller than the first set step while capturing images of the object with the imaging device, and determining whether the sharpness value of an image captured by the imaging device reaches a set threshold;
    when the sharpness value of the image reaches the set threshold, saving the current position of the lens as the saved position.
  25. The method of claim 24, characterized in that the focusing module comprises a light source and a light sensor, the light source being configured to emit light onto the object and the light sensor to receive light reflected by the object.
  26. The method of claim 24, characterized in that, when the focusing module receives light reflected by the object, the focusing further comprises the steps of:
    moving the lens toward the object in a third set step smaller than the first set step and larger than the second set step, computing a first light intensity parameter from the intensity of the light received by the focusing module, and determining whether the first light intensity parameter is greater than a first set intensity threshold;
    when the first light intensity parameter is greater than the first set intensity threshold, performing the step of moving the lens in the second set step while capturing images of the object with the imaging device and determining whether the sharpness value of an image captured by the imaging device reaches the set threshold.
  27. The method of claim 26, characterized in that the focusing module comprises two light sensors configured to receive light reflected by the object, the first light intensity parameter being the average of the light intensities received by the two light sensors.
  28. The method of claim 24, characterized in that, when the focusing module receives light reflected by the object, the focusing further comprises the steps of:
    moving the lens toward the object in a third set step smaller than the first set step and larger than the second set step, computing a first light intensity parameter from the intensity of the light received by the focusing module, and determining whether the first light intensity parameter is greater than a first set intensity threshold;
    when the first light intensity parameter is greater than the first set intensity threshold, moving the lens toward the object in a fourth set step smaller than the third set step and larger than the second set step, computing a second light intensity parameter from the intensity of the light received by the focusing module, and determining whether the second light intensity parameter is less than a second set intensity threshold;
    when the second light intensity parameter is less than the second set intensity threshold, performing the step of moving the lens in the second set step while capturing images of the object with the imaging device and determining whether the sharpness value of an image captured by the imaging device reaches the set threshold.
  29. The method of claim 28, characterized in that the focusing module comprises two light sensors configured to receive light reflected by the object, the first light intensity parameter being the average of the light intensities received by the two light sensors, the light intensities received by the two light sensors having a first difference, and the second light intensity parameter being the difference between the first difference and a set compensation value.
  30. The method of any one of claims 24-29, characterized in that, while the lens is being moved in the second set step, it is determined whether a first sharpness value of the image corresponding to the current position of the lens is greater than a second sharpness value of the image corresponding to the previous position of the lens;
    when the first sharpness value is greater than the second sharpness value and the sharpness difference between them is greater than a set difference, the lens continues to move toward the object in the second set step;
    when the first sharpness value is greater than the second sharpness value and the sharpness difference between them is less than the set difference, the lens continues to move toward the object in a fifth set step smaller than the second set step so that the sharpness value of an image captured by the imaging device reaches the set threshold;
    when the second sharpness value is greater than the first sharpness value and the sharpness difference between them is greater than the set difference, the lens moves away from the object in the second set step;
    when the second sharpness value is greater than the first sharpness value and the sharpness difference between them is less than the set difference, the lens moves away from the object in the fifth set step so that the sharpness value of an image captured by the imaging device reaches the set threshold.
  31. The method of any one of claims 24-30, characterized in that, while the lens is moving, it is determined whether the current position of the lens exceeds a second set position;
    when the current position of the lens exceeds the second set position, the lens is stopped or the focusing step is performed.
  32. An imaging system, characterized in that it images an object, the imaging system comprising a lens and a control device, the object comprising a first object, a second object and a third object located at different positions on a first preset track, the control device being configured to:
    move the lens and the first preset track relative to each other according to a first predetermined relationship so as to obtain, with the imaging system and without focusing, a clear image of the third object, the first predetermined relationship being determined from the focal plane position of the first object and the focal plane position of the second object.
  33. The system of claim 32, characterized in that the third object is located between the first object and the second object.
  34. The system of claim 32, characterized in that the lens is fixed, the lens comprises an optical axis, and the first preset track is movable in a direction perpendicular or parallel to the optical axis.
  35. The system of claim 32, characterized in that determining the first predetermined relationship comprises:
    focusing the imaging system on the first object to determine a first coordinate;
    focusing the imaging system on the second object to determine a second coordinate;
    establishing the first predetermined relationship from the first coordinate and the second coordinate, the first coordinate reflecting the focal plane position of the first object and the second coordinate reflecting the focal plane position of the second object.
  36. The system of claim 32, characterized in that the first preset track is a linear or nonlinear track; and/or the first predetermined relationship is a linear relationship.
  37. The system of claim 32, characterized in that the object comprises a fourth object and a fifth object located at different positions on a second preset track, the control device being configured to:
    move the lens and the second preset track relative to each other according to a second predetermined relationship so as to obtain, with the imaging system and without focusing, an image of the fifth object, the second predetermined relationship being determined from the focal plane position of the fourth object and the first predetermined relationship, the second preset track being different from the first preset track.
  38. The system of claim 37, characterized in that the lens is fixed, the lens comprises an optical axis, and the second preset track is movable in a direction perpendicular or parallel to the optical axis.
  39. The system of claim 37, characterized in that determining the second predetermined relationship comprises:
    focusing the imaging system on the fourth object to determine a fourth coordinate;
    establishing the second predetermined relationship from the first predetermined relationship and the fourth coordinate, the fourth coordinate reflecting the focal plane position of the fourth object.
  40. The system of claim 37, characterized in that the control device is configured to: after the clear image of the third object is obtained, move the lens relative to the first preset track and/or the second preset track so as to obtain, with the imaging system and without focusing, an image of the fifth object.
  41. The system of claim 35 or 39, characterized in that the imaging system comprises an imaging device and a stage, the imaging device comprises the lens and a focusing module, the lens comprises an optical axis, the lens is movable along the optical axis, and the first preset track and/or the second preset track is located on the stage.
  42. The system of claim 41, characterized in that the control device is configured to perform the following steps:
    (a) emitting light onto the object with the focusing module;
    (b) moving the lens to a first set position;
    (c) moving the lens from the first set position toward the object in a first set step and determining whether the focusing module receives light reflected by the object;
    (d) when the focusing module receives light reflected by the object, moving the lens from its current position to a second set position, the second set position lying within a first range, the first range being a range, containing the current position, over which the lens is allowed to move along the optical axis;
    (e) moving the lens from the second set position in a second set step, acquiring an image of the object with the imaging device at each step position, the second set step being smaller than the first set step;
    (f) evaluating the images of the object, and achieving focus according to the resulting image evaluation.
  43. The system of claim 41, characterized in that, taking the current position as reference, the first range comprises opposite first and second intervals, the second interval being defined as the one closer to the object, and step (e) comprises:
    (i) when the second set position lies in the second interval, moving the lens from the second set position away from the object, acquiring an image of the object with the imaging device at each step position; or
    (ii) when the second set position lies in the first interval, moving the lens from the second set position toward the object, acquiring an image of the object with the imaging device at each step position.
  44. The system of claim 43, characterized in that step (f) comprises: comparing the image evaluation result against a preset condition; if the image evaluation result satisfies the preset condition, saving the lens position corresponding to the image;
    if the image evaluation result does not satisfy the preset condition, moving the lens to a third set position, the third set position lying in the interval of the first range other than the interval containing the second set position.
  45. The system of claim 44, characterized in that the image evaluation result comprises a first evaluation value and a second evaluation value, the second set step comprises a coarse step and a fine step, and step (f) comprises: moving the lens in the coarse step until the first evaluation value of the image at the corresponding position is no greater than a first threshold, then switching the lens to the fine step and continuing to move until the second evaluation value of the image at the corresponding position is maximal, and saving the lens position corresponding to the image with the maximal second evaluation value.
  46. The system of claim 44, characterized in that the image evaluation result comprises a first evaluation value, a second evaluation value and a third evaluation value, and the image comprises a plurality of pixels;
    the preset condition being that the number of bright spots in the image is greater than a preset value, the first evaluation value of the image at the corresponding position is no greater than a first threshold, and the second evaluation value of the image at the corresponding position is the largest among the second evaluation values of the N images before and the N images after it; or
    the preset condition being that the number of bright spots in the image is less than the preset value, the first evaluation value of the image at the corresponding position is no greater than the first threshold, and the third evaluation value of the image at the corresponding position is the largest among the third evaluation values of the N images before and the N images after it.
  47. The system of claim 46, characterized in that the imaging system comprises a bright-spot detection module configured to:
    detect bright spots in the image using a k1*k2 matrix, including determining that a matrix whose center pixel value is not less than any non-center pixel value of the matrix corresponds to one bright spot, k1 and k2 each being an odd number greater than 1, the k1*k2 matrix containing k1*k2 pixels.
  48. The system of claim 47, characterized in that the center pixel value of the matrix corresponding to a bright spot is greater than a first preset value and any non-center pixel value of the matrix is greater than a second preset value, the first preset value and the second preset value being related to the average pixel value of the image.
  49. The system of claim 48, characterized in that the first evaluation value is determined by measuring the sizes of the connected domains corresponding to the bright spots of the image, the connected-domain size of one bright spot of an image being Area = A*B, where A is the connected-domain size along the row centered on the center of the matrix corresponding to the bright spot and B is the connected-domain size along the column centered on that center, connected pixels whose values exceed the average pixel value of the image being defined as one connected domain.
  50. The system of claim 47, characterized in that the second evaluation value and/or the third evaluation value is determined by computing scores of the bright spots of the image, the score of one bright spot of an image being Score = ((k1*k2-1)*CV - EV) / ((CV+EV)/(k1*k2)), where CV is the center pixel value of the matrix corresponding to the bright spot and EV is the sum of the non-center pixel values of that matrix.
  51. The system of claim 42, characterized in that the focusing module comprises a light source and a light sensor, the light source being configured to emit the light onto the object and the light sensor to receive the light reflected by the object.
  52. The system of claim 42, characterized in that, when the focusing module receives light reflected by the object, the control device is further configured to:
    move the lens toward the object in a third set step that is smaller than the first set step and larger than the second set step, compute a first light intensity parameter from the intensity of the light received by the focusing module, and determine whether the first light intensity parameter is greater than a first set intensity threshold;
    when the first light intensity parameter is greater than the first set intensity threshold, move the lens from its current position to the second set position.
  53. The system of claim 52, characterized in that the focusing module comprises two light sensors configured to receive the light reflected by the object, the first light intensity parameter being the average of the light intensities received by the two light sensors.
  54. The system of any one of claims 42-53, characterized in that, while the lens is moving, the control device is configured to: determine whether the current position of the lens exceeds a fourth set position;
    when the current position of the lens exceeds the fourth set position, stop moving the lens.
  55. The system of claim 41, characterized in that the control device is configured to:
    emit light onto the object with the focusing module;
    move the lens to a first set position;
    move the lens from the first set position toward the object in a first set step and determine whether the focusing module receives light reflected by the object;
    when the focusing module receives light reflected by the object, move the lens in a second set step smaller than the first set step while capturing images of the object with the imaging device, and determine whether the sharpness value of an image captured by the imaging device reaches a set threshold;
    when the sharpness value of the image reaches the set threshold, save the current position of the lens as the saved position.
  56. The system of claim 55, characterized in that the focusing module comprises a light source and a light sensor, the light source being configured to emit light onto the object and the light sensor to receive light reflected by the object.
  57. The system of claim 55, characterized in that, when the focusing module receives light reflected by the object, the control device is configured to:
    move the lens toward the object in a third set step smaller than the first set step and larger than the second set step, compute a first light intensity parameter from the intensity of the light received by the focusing module, and determine whether the first light intensity parameter is greater than a first set intensity threshold;
    when the first light intensity parameter is greater than the first set intensity threshold, perform the step of moving the lens in the second set step while capturing images of the object with the imaging device and determining whether the sharpness value of an image captured by the imaging device reaches the set threshold.
  58. The system of claim 57, characterized in that the focusing module comprises two light sensors configured to receive light reflected by the object, the first light intensity parameter being the average of the light intensities received by the two light sensors.
  59. The system of claim 55, characterized in that, when the focusing module receives light reflected by the object, the control device is configured to:
    move the lens toward the object in a third set step smaller than the first set step and larger than the second set step, compute a first light intensity parameter from the intensity of the light received by the focusing module, and determine whether the first light intensity parameter is greater than a first set intensity threshold;
    when the first light intensity parameter is greater than the first set intensity threshold, move the lens toward the object in a fourth set step smaller than the third set step and larger than the second set step, compute a second light intensity parameter from the intensity of the light received by the focusing module, and determine whether the second light intensity parameter is less than a second set intensity threshold;
    when the second light intensity parameter is less than the second set intensity threshold, perform the step of moving the lens in the second set step while capturing images of the object with the imaging device and determining whether the sharpness value of an image captured by the imaging device reaches the set threshold.
  60. The system of claim 59, characterized in that the focusing module comprises two light sensors configured to receive light reflected by the object, the first light intensity parameter being the average of the light intensities received by the two light sensors, the light intensities received by the two light sensors having a first difference, and the second light intensity parameter being the difference between the first difference and a set compensation value.
  61. The system of any one of claims 55-60, characterized in that, while the lens is being moved in the second set step, the control device is configured to: determine whether a first sharpness value of the image corresponding to the current position of the lens is greater than a second sharpness value of the image corresponding to the previous position of the lens;
    when the first sharpness value is greater than the second sharpness value and the sharpness difference between them is greater than a set difference, continue moving the lens toward the object in the second set step;
    when the first sharpness value is greater than the second sharpness value and the sharpness difference between them is less than the set difference, continue moving the lens toward the object in a fifth set step smaller than the second set step so that the sharpness value of an image captured by the imaging device reaches the set threshold;
    when the second sharpness value is greater than the first sharpness value and the sharpness difference between them is greater than the set difference, move the lens away from the object in the second set step;
    when the second sharpness value is greater than the first sharpness value and the sharpness difference between them is less than the set difference, move the lens away from the object in the fifth set step so that the sharpness value of an image captured by the imaging device reaches the set threshold.
  62. The system of any one of claims 55-61, characterized in that, while the lens is moving, the control device is configured to: determine whether the current position of the lens exceeds a second set position;
    when the current position of the lens exceeds the second set position, stop moving the lens or perform the focusing step.
  63. A sequencing device, characterized by comprising the imaging system of any one of claims 32-62.
  64. A computer-readable storage medium storing a program for execution by a computer, characterized in that executing the program comprises completing the steps of the method of any one of claims 1-31.
  65. An imaging system for imaging an object, the imaging system comprising a lens and a control device, the object comprising a first object, a second object and a third object located at different positions on a first preset track, characterized in that the control device comprises a computer-executable program, and executing the computer-executable program comprises completing the steps of the method of any one of claims 1-31.
  66. A computer program product containing instructions, characterized in that, when the instructions are executed by a computer, they cause the computer to perform the steps of the method of any one of claims 1-31.
PCT/CN2019/097272 2018-07-23 2019-07-23 Imaging method, device and system WO2020020148A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP19841635.6A EP3829158A4 (en) 2018-07-23 2019-07-23 IMAGING PROCESS, DEVICE, AND SYSTEM
US17/262,663 US11368614B2 (en) 2018-07-23 2019-07-23 Imaging method, device and system
US17/746,838 US11575823B2 (en) 2018-07-23 2022-05-17 Imaging method, device and system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201810813660.X 2018-07-23
CN201810814359.0 2018-07-23
CN201810813660.XA CN112291469A (zh) Imaging method, device and system
CN201810814359.0A CN112333378A (zh) Imaging method, device and system

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US17/262,663 A-371-Of-International US11368614B2 (en) 2018-07-23 2019-07-23 Imaging method, device and system
US17/746,838 Continuation US11575823B2 (en) 2018-07-23 2022-05-17 Imaging method, device and system

Publications (1)

Publication Number Publication Date
WO2020020148A1 true WO2020020148A1 (zh) 2020-01-30

Family

ID=69181310

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/097272 WO2020020148A1 (zh) Imaging method, device and system

Country Status (3)

Country Link
US (2) US11368614B2 (zh)
EP (1) EP3829158A4 (zh)
WO (1) WO2020020148A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114845041B (zh) * 2021-12-30 2024-03-15 齐之明光电智能科技(苏州)有限公司 Focusing method and device for nanoparticle imaging, and storage medium
CN115308876A (zh) * 2022-08-04 2022-11-08 苏州深捷信息科技有限公司 Rapid microscope focusing method based on a reference focal plane, and device, medium and product

Citations (5)

Publication number Priority date Publication date Assignee Title
US7768571B2 (en) * 2004-03-22 2010-08-03 Angstrom, Inc. Optical tracking system using variable focal length lens
CN103323939A * 2012-03-20 2013-09-25 麦克奥迪实业集团有限公司 Digital slide real-time scanning autofocus system and method
US20170064184A1 (en) * 2015-08-24 2017-03-02 Lustrous Electro-Optic Co.,Ltd. Focusing system and method
CN106610553A * 2015-10-22 2017-05-03 深圳超多维光电子有限公司 Autofocus method and device
CN107250873A * 2014-10-06 2017-10-13 徕卡显微系统(瑞士)股份公司 Microscope

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
US7518652B2 (en) 2000-05-03 2009-04-14 Aperio Technologies, Inc. Method and apparatus for pre-focus in a linear array based slide scanner
US7668362B2 (en) 2000-05-03 2010-02-23 Aperio Technologies, Inc. System and method for assessing virtual slide image quality
US7030351B2 (en) * 2003-11-24 2006-04-18 Mitutoyo Corporation Systems and methods for rapidly automatically focusing a machine vision inspection system
GB0503032D0 (en) 2005-02-14 2005-03-23 Fujifilm Electronic Imaging Blip focus
US8422127B2 (en) * 2005-03-17 2013-04-16 Hamamatsu Photonics K.K. Microscopic image capturing device
US8179432B2 (en) * 2007-04-30 2012-05-15 General Electric Company Predictive autofocusing
US20100157086A1 (en) * 2008-12-15 2010-06-24 Illumina, Inc Dynamic autofocus method and system for assay imager
AU2014292179B2 (en) * 2013-07-18 2017-12-07 Ventana Medical Systems, Inc. Auto-focus methods and systems for multi-spectral imaging
CN106375647B 2015-07-23 2020-05-29 杭州海康威视数字技术股份有限公司 Method, device and system for adjusting the back focus of a camera
CN105827944B 2015-11-25 2019-05-17 维沃移动通信有限公司 Focusing method and mobile terminal
US10852290B2 (en) * 2016-05-11 2020-12-01 Bonraybio Co., Ltd. Analysis accuracy improvement in automated testing apparatus
JP2019520574A (ja) * 2016-06-21 2019-07-18 エスアールアイ インターナショナルSRI International ハイパースペクトルイメージング方法および装置
CN207215686U 2017-09-20 2018-04-10 深圳市瀚海基因生物科技有限公司 Optical detection system and sequencing system

Non-Patent Citations (1)

Title
See also references of EP3829158A4

Also Published As

Publication number Publication date
US11575823B2 (en) 2023-02-07
EP3829158A4 (en) 2021-09-01
US20220279130A1 (en) 2022-09-01
US20210266469A1 (en) 2021-08-26
US11368614B2 (en) 2022-06-21
EP3829158A1 (en) 2021-06-02

Similar Documents

Publication Publication Date Title
US11575823B2 (en) Imaging method, device and system
US11156823B2 (en) Digital microscope apparatus, method of searching for in-focus position thereof, and program
CN108693625B (zh) Imaging method, device and system
EP3035104B1 (en) Microscope system and setting value calculation method
US20150358533A1 (en) Control method for imaging apparatus and imaging system
CN105026977A (zh) Information processing device, information processing method, and information processing program
CN102122055A (zh) Laser autofocus device and focusing method thereof
US9851549B2 (en) Rapid autofocus method for stereo microscope
WO2019114760A1 (zh) Imaging method, device and system
WO2018188440A1 (zh) Imaging method, device and system
CN108693624B (zh) Imaging method, device and system
CN112333378A (zh) Imaging method, device and system
CN113366364A (zh) Real-time focusing in a slide scanning system
CN112291469A (zh) Imaging method, device and system
JP2011209573A (ja) Focusing device, focusing method, focusing program, and microscope
CN111647506B (zh) Positioning method, positioning device and sequencing system
CN108693113B (zh) Imaging method, device and system
JP2013174709A (ja) Microscope device and virtual microscope device
JP2005284118A (ja) Automatic focus control device and automatic focus control method
CN111363673B (zh) Positioning method, positioning device and sequencing system
KR101873318B1 (ko) Cell imaging device and method thereof
US20220317429A1 (en) Microscope and method for imaging an object using a microscope
CN117891042A (zh) Focusing method, optical imaging system, sequencing system and medium
JP2013167816A (ja) Imaging device, imaging control program, and imaging method
JP2014235408A (ja) Microscope device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19841635

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019841635

Country of ref document: EP

Effective date: 20210223