CN112291469A - Imaging method, device and system - Google Patents

Imaging method, device and system

Info

Publication number
CN112291469A
Authority
CN
China
Prior art keywords
lens, value, image, light, light intensity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810813660.XA
Other languages
Chinese (zh)
Inventor
李林森
孙瑞涛
徐家宏
周志良
姜泽飞
颜钦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Genemind Biosciences Co Ltd
Original Assignee
Genemind Biosciences Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Genemind Biosciences Co Ltd filed Critical Genemind Biosciences Co Ltd
Priority to CN201810813660.XA priority Critical patent/CN112291469A/en
Priority to PCT/CN2019/097272 priority patent/WO2020020148A1/en
Priority to EP19841635.6A priority patent/EP3829158A4/en
Priority to US17/262,663 priority patent/US11368614B2/en
Publication of CN112291469A publication Critical patent/CN112291469A/en
Priority to US17/746,838 priority patent/US11575823B2/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals

Abstract

The invention discloses an imaging method and an imaging system in which an object is imaged by the imaging system. The imaging system comprises a lens, and the object comprises a plurality of first objects and a second object. The imaging method comprises the following steps: moving the lens and/or the plurality of first objects, and sequentially focusing on the plurality of first objects with the imaging system to obtain a plurality of coordinates, the plurality of coordinates respectively corresponding to the focal plane positions of the plurality of first objects; establishing a relationship from the plurality of coordinates; and moving the lens and/or the second object based on the relationship, with the imaging system, to obtain a sharp image of the second object without focusing. The imaging method offers high imaging efficiency, and even when focus tracking fails it can still refocus quickly according to the established relationship, avoiding blurred images caused by defocusing.

Description

Imaging method, device and system
Technical Field
The present invention relates to the field of optical imaging technologies, and in particular, to an imaging method, an imaging device, and an imaging system.
Background
In the related art, when an imaging device such as a camera takes multiple shots of multiple objects or of a moving object, the focal length is adjusted rapidly to find the clearest focal plane and thereby obtain a sharp picture; this is called focus tracking.
However, in practical applications, shooting with a camera is prone to external interference. For example, stray light, or dust or scratches on the surface of the object, can cause focus tracking to fail, and if the camera then cannot refocus, the images are blurred. For instance, when a camera is used for sequencing and the target is a nucleic acid molecule located in a chip, bubbles in the liquid inside the chip, large clumps of fluorescent impurities, and dust or scratches on the chip surface are all likely to make focus tracking fail. For another instance, when a camera is applied in the security field, it needs to image a changing environment rapidly and continuously to facilitate monitoring.
Disclosure of Invention
A sequencing platform that obtains nucleic acid information based on imaging, such as the second- or third-generation sequencing platforms currently on the market that read nucleic acid information by photographing, includes a process of photographing nucleic acids placed in a reactor with an imaging system.
Commonly, the reactor is also referred to as a chip (flowcell), which may contain one or more parallel channels through which reagents flow in and out to form the environment required for the sequencing reaction. The chip can be formed by bonding two pieces of glass. The sequencing process comprises multiple rounds of photographing of fixed areas of the chip by a camera; the area photographed each time is called an FOV (field of view), each round of photographing is called a cycle, and between two cycles reagents are introduced again to carry out a chemical reaction.
During normal photographing, the camera can usually focus automatically and successfully, i.e., find the clearest focal plane position. When interference occurs, focus tracking may fail.
Figs. 1-3 illustrate data for successful focus tracking and for abnormal or failed focus tracking, as observed in the inventors' experiments.
Taking continuous shooting of two rows of FOVs in the same cycle as an example, the height (Z-value) coordinate of the objective lens was recorded during shooting. As shown in fig. 1, the abscissa is the serial number of the FOV; the first half of the FOVs were shot sequentially from the left side to the right side of the flowcell, and the second half from right to left after a line change. The ordinate is the height of the microscope objective relative to the camera, i.e. the Z value, in µm; a negative value indicates that the objective is below the camera, and the larger the absolute value of Z, the farther the objective is from the camera.
Fig. 1 shows the Z-value curve for 300 FOV images all captured with successful focus tracking; fig. 2 shows the Z-value curve for 200 captured FOV images that include partial focus-tracking errors (reflected as partial Z-value errors). In the latter case, the abnormal portion of the curve, i.e. the images corresponding to the convex portion, are unclear/blurred.
Because the camera's focus tracking has a limited capture range, it easily loses focus after encountering interference; once defocused, the objective moves far from the focal plane, i.e. in subsequent tracking shots the distance between the objective and the focal plane of the current FOV becomes too large, so the objective cannot return to the focal plane even after the interference is removed, as shown in fig. 3. The first 1-200 FOVs in fig. 3 belong to one cycle and the remaining FOVs to another cycle. Fig. 3 shows that after the 268th FOV (in the first row of the other cycle), focus tracking failed and, after the disturbance disappeared, refocusing did not succeed before the cycle ended.
A focus-tracking failure means blurred images, which can lead to loss of information; this is therefore a problem that must be solved. In reality, interference cannot be completely eliminated, but it is generally desirable that a clear image can be acquired at least after the interference disappears.
The inventors analyzed a large amount of data from successful and abnormal focus tracking and found that, with the objective lens fixed, the Z-value curves of the same normally focus-tracked FOVs in different cycles (i.e. different time periods) follow a certain regularity. Fig. 4 shows the Z-value curves of 300 FOVs for which clear pictures were obtained with normal focus tracking in 4 different cycles.
The inventors found two rules:
1) The same location (FOV) may have a different focal plane in different cycles, but its focal plane position relative to the other FOVs of the same cycle remains substantially unchanged. That is, in terms of physical location, the focal planes of different FOVs in the same cycle are correlated.
2) Half of the 300 FOVs of each curve in the figure were shot from the left side to the right side of one row of the flowcell, and the other half from right to left after a line change. Because of deformation of the flowcell and/or a height difference between its left and right sides, the focal planes of consecutive FOVs shot in the same direction along the same row (e.g. left to right, or right to left) follow a clear pattern and can be well fitted by a straight line.
As to why these rules arise, the inventors conjecture the possible reasons to include: since the same FOV must be photographed repeatedly in different cycles, the pressure inside the chip changes after heating and reagent circulation, so the focal plane shifts as a whole; meanwhile, each FOV is small relative to the entire chip, so the surface flatness within each FOV can be regarded as constant, which is reflected in the relative focal plane position between adjacent FOVs remaining constant.
Based on the rules discovered above, the inventors developed a set of algorithms that, assisted by software and without replacing hardware, give the camera a focal-plane prediction capability. Specifically, for example, in cycle 1, for a plurality of FOVs located on the same preset track (the first preset track, e.g. the same row), the focal planes of two of these FOVs may be acquired by focusing, the difference between the two focal planes calculated, and a relationship obtained by linear fitting; this relationship is used to predict the focal plane positions of the other FOVs in the row. For cycle 2 and subsequent cycles, the focal plane of an FOV of the current cycle is determined by recalling the focal plane of any normally focused FOV of cycle 1 or of any previous cycle and then focusing, after which the focal plane of any other FOV of the current cycle can be predicted through the relationship established by linear regression.
The relationship is established using linear regression, expressed as formula (a): y = kx + b, where the slope k (which may also be called the trend k) and the intercept b (which may also be called the base offset b) need to be determined. Based on rule 1) above, k = 1, so formula (a) reduces to formula (b): y = x + b, and b can be determined from the relative positions (Z values) of the focal planes of any two FOVs on the same track of the same cycle.
For example, for cycle 1, the base offset b may be calculated from the overall focal-plane difference (e.g. from one end of the track to the other). Specifically, focusing obtains the focal plane Z values cyc1FovZ(R) and cyc1FovZ(L) of two objects (which may be called two positions, or two FOVs) at the two ends of one track in cycle 1, and the intercept is calculated as b = (cyc1FovZ(R) - cyc1FovZ(L)) / FovNum, where FovNum is the number of FOVs between the two positions. Formula (b) can then be used to predict focal planes within cycle 1: with cyc1FovZ(n) and cyc1FovZ(n+1) denoting the focal plane positions of two adjacent FOVs, cyc1FovZ(n+1) = cyc1FovZ(n) + b, where cyc1FovZ(n) is obtained by focusing or from a previous prediction.
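As a purely illustrative sketch of this intra-cycle prediction (the Python function and variable names below are ours, not the patent's, and the numbers are examples only):

def fit_base_offset(z_left, z_right, fov_num):
    # Base offset b = (cyc1FovZ(R) - cyc1FovZ(L)) / FovNum
    return (z_right - z_left) / fov_num

def predict_next_z(z_n, b):
    # Formula (b) with slope k = 1: cyc1FovZ(n+1) = cyc1FovZ(n) + b
    return z_n + b

# Example: the two end FOVs of a row focused at Z = -1200.0 and -1195.0 (µm),
# with 100 FOVs between them.
b = fit_base_offset(-1200.0, -1195.0, 100)   # b = 0.05 µm per FOV
z_next = predict_next_z(-1199.5, b)          # predicted Z of the adjacent FOV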
Note that b can be determined from the focal plane information of any two FOVs on the same track; in the example above it is determined from cyc1FovZ(R) and cyc1FovZ(L). cyc1FovZ(n+1) may likewise be determined using the determined formula (b) and the focal plane coordinate information of any focused FOV.
After the linear relationship of a certain cycle is determined, for any subsequent cycle shooting the same track/same FOVs, the focal plane position of any FOV of the current cycle can be predicted from the determined linear relationship and the focal plane position of any other FOV of the current cycle. For example, to use the focal plane position of Fov(N) (the FOV at the Nth position) in the current cycle to predict the focal plane position of Fov(N+1) (the FOV at the (N+1)th position) in the same cycle, we substitute the Z value currFovZ(N) of the Nth FOV as the independent variable x into formula (b); the resulting y is currFovZ(N+1).
Alternatively, after the linear relationship of a certain cycle is determined, for any subsequent cycle shooting the same track/same FOVs, the focal plane positions of two FOVs in that earlier cycle, together with the focal plane position of one of the same FOVs in the current cycle, may be used to predict the focal plane position of the other FOV in the current cycle. For example, with formula (b) determined in a previous cycle, to predict the focal plane position of Fov(N+1) in the current cycle from that of Fov(N) in the current cycle, we take the focal plane positions of Fov(N) and Fov(N+1) in the previous cycle, denoted preFovZ(N) and preFovZ(N+1) respectively, and use formula (c): currFovZ(N+1) = currFovZ(N) + (preFovZ(N+1) - preFovZ(N)) to determine currFovZ(N+1).
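Again only as an illustration (the names are ours), formula (c) amounts to carrying over the adjacent-FOV offset recorded in the earlier cycle:

def predict_cross_cycle(cur_z_n, pre_z_n, pre_z_n1):
    # Formula (c): currFovZ(N+1) = currFovZ(N) + (preFovZ(N+1) - preFovZ(N));
    # the relative offset between adjacent FOVs is assumed preserved across
    # cycles (rule 1, slope k = 1).
    return cur_z_n + (pre_z_n1 - pre_z_n)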
It should be noted that the rule discovery, its explanation, and the illustrative established relationship above are linear only for convenience of description and understanding. Those skilled in the art will understand that the trajectory of the first objects may be a straight line or a curve, and any curve may be regarded as a fit of a plurality of line segments. Accordingly, given the above exemplary illustration of rule discovery and relationship establishment in the scenario that motivated the present invention, for the case where the trajectory of the first objects is a curve, one skilled in the art can, following the concept of the present invention, treat the curved trajectory as a set of line segments and correspondingly establish a set of linear relationships to achieve focusing-free prediction of the focal plane position of an object on the trajectory.
In addition, the inventors consider that application scenarios similar (or potentially similar) to the above, such as panoramic scanning or security monitoring systems, need to image objects within a certain range continuously, now or in the future. Based on the above findings, the target positions/objects in the range can first be focused and their focal planes recorded, so that an image of an object in the range at the next moment can be obtained without focusing.
Without replacing hardware, the imaging method of this embodiment can bring the camera back to the vicinity of the focal plane so that photographing can resume. Based on the above findings and explanatory illustrations, the present invention provides an imaging method, an imaging apparatus, an imaging system, and a sequencing system.
An imaging method according to an embodiment of the present invention images an object with an imaging system, the imaging system including a lens, the object including a plurality of first objects and a second object. The imaging method includes: moving the lens and/or the plurality of first objects, and sequentially focusing on the plurality of first objects with the imaging system to obtain a plurality of coordinates, the plurality of coordinates respectively corresponding to the focal plane positions of the plurality of first objects; establishing a relationship according to the plurality of coordinates; and, with the imaging system, moving the lens and/or the second object based on the relationship to obtain a sharp image of the second object without focusing.
An imaging system of an embodiment of the present invention images an object; the imaging system includes a lens and a control device, the object includes a plurality of first objects and a second object, and the control device is configured to:
move the lens and/or the plurality of first objects, and sequentially focus on the plurality of first objects with the imaging system to obtain a plurality of coordinates, the plurality of coordinates respectively corresponding to the focal plane positions of the plurality of first objects;
establish a relationship according to the plurality of coordinates;
move the lens and/or the second object based on the relationship, with the imaging system, to obtain a sharp image of the second object without focusing.
In the imaging method and the imaging system above, the relationship is determined by focusing on the plurality of first objects. When a second object is imaged, the focal plane can be predicted directly from the relationship and the image of the second object acquired without focusing. The method is particularly suitable when the objects are numerous and their images are expected to be acquired quickly and continuously.
A sequencing device of an embodiment of the present invention includes the above imaging system.
A computer-readable storage medium of an embodiment of the present invention stores a program for execution by a computer, where executing the program includes performing the steps of the method of the above embodiment. The computer-readable storage medium may include: read-only memory, random access memory, magnetic or optical disk, and the like.
An imaging system of an embodiment of the present invention images an object; the imaging system includes a lens and a control device, the object includes a plurality of first objects and a second object, and the control device includes a computer-executable program whose execution includes performing the steps of the method of the above embodiment.
A computer program product of an embodiment of the present invention contains instructions which, when executed by a computer, cause the computer to perform the steps of the method of the above embodiment.
Additional aspects and advantages of embodiments of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the invention.
Drawings
The above and/or additional aspects and advantages of embodiments of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a Z-value graph showing successful focus tracking in sequencing.
Fig. 2 is a Z-value graph in which the abnormal convex portions correspond to FOVs where focus tracking failed during sequencing.
Fig. 3 is a Z-value graph showing a focus-tracking failure during sequencing in which focusing could not be recovered even by the end of the cycle after the interference disappeared.
Fig. 4 is a schematic diagram showing the different focal plane positions formed by the focusing data of the object in sequencing.
Fig. 5 is a flowchart of an imaging method of an embodiment of the invention.
Fig. 6 is a schematic view of an application scenario of the imaging method according to the embodiment of the present invention.
Fig. 7 is a schematic view of another application scenario of the imaging method according to the embodiment of the present invention.
Fig. 8 is a schematic view of another application scenario of the imaging method according to the embodiment of the present invention.
Fig. 9 is a schematic diagram of the focus positions formed by the focusing data of the object without interference during sequencing.
Fig. 10 is a schematic diagram of the focus positions formed by the focusing data of the object when refocusing succeeded after interference during sequencing.
Fig. 11 is a schematic diagram of the focus positions formed by the focusing data of the object when refocusing was not possible after interference during sequencing.
Fig. 12 is a flowchart illustrating a focusing method according to an embodiment of the invention.
Fig. 13 is a schematic diagram of a positional relationship between a lens and an object according to an embodiment of the present invention.
Fig. 14 is a partial structural schematic diagram of an imaging system of an embodiment of the present invention.
FIG. 15 is a schematic diagram of connected components of an image according to an embodiment of the invention.
Fig. 16 is another flowchart illustrating a focusing method according to an embodiment of the invention.
Fig. 17 is a flowchart illustrating a focusing method according to an embodiment of the invention.
FIG. 18 is a further flowchart illustrating a focusing method according to an embodiment of the present invention.
FIG. 19 is a further flowchart illustrating a focusing method according to an embodiment of the present invention.
FIG. 20 is a block schematic diagram of an imaging system of an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that "first", "second", "third", "fourth" and "fifth" are merely for convenience of description and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, unless otherwise explicitly specified or limited, "connected" is to be understood in a broad sense, e.g. fixedly connected, detachably connected, or integrally connected; mechanically connected, electrically connected, or in communication with each other; directly connected, or indirectly connected through an intermediate medium; or an internal communication between two elements or an interaction between two elements. Those skilled in the art can understand the specific meanings of the above terms in the present invention according to the specific situation.
The terms "center," "thickness," "upper," "lower," "front," "rear," and the like, as used herein, refer to an orientation or positional relationship that is based on the orientation or positional relationship shown in the detailed description or the drawings, which is for convenience and simplicity of description, and does not indicate or imply that the referenced device or element must have a particular orientation, be constructed and operated in a particular orientation.
The term "constant", as it relates to distance, object distance and/or relative position, may refer to a change in value, range or amount, either absolutely constant or relatively constant, which is said to remain within a certain deviation or predetermined acceptable range. "invariant" with respect to distance, object distance, and/or relative position is relatively invariant, unless otherwise specified.
The following disclosure provides many implementations or examples for implementing the inventive concepts. The present invention may repeat reference numerals and/or letters in the various examples, such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
The "sequencing" referred to in the embodiments of the present invention is similar to nucleic acid sequencing, including DNA sequencing and/or RNA sequencing, including long fragment sequencing and/or short fragment sequencing. The so-called "sequencing reaction" is the same as the sequencing reaction.
An embodiment of the invention provides an imaging method that images an object with an imaging system. Referring to figs. 5-8, 14 and 15, the imaging system includes a lens 104, and the object includes a plurality of first objects and a second object. The imaging method comprises the following steps: step S101, moving the lens and/or the plurality of first objects, and sequentially focusing on the plurality of first objects with the imaging system to obtain a plurality of coordinates, the plurality of coordinates respectively corresponding to the focal plane positions of the plurality of first objects; step S102, establishing a relationship according to the plurality of coordinates; step S103, with the imaging system, moving the lens and/or the second object based on the relationship to obtain a sharp image of the second object without focusing.
In this imaging method, the relationship is determined by focusing on the plurality of first objects. When a second object is imaged, the focal plane can be predicted directly from the relationship and the image of the second object acquired without focusing; the method is particularly suitable when the objects are numerous and their images are to be acquired quickly and continuously.
In particular, the spatial position of a first object determines the coordinates of the first object when it is in focus, i.e. the first object carries a position attribute, and the above relationship may be defined by the plurality of first objects. The distribution of the plurality of first objects in space also defines a spatial trajectory; the second object may be located on the trajectory of the plurality of first objects, and that trajectory may be linear or non-linear. Non-linear trajectories include arcs, circles, waves, parabolas, ellipses, and the like.
In one example, referring to fig. 6, when a mobile device 400 with a lens performs a panoramic scan, the mobile device 400 is moved (e.g. substantially horizontally). As the mobile device 400 moves, the scenes it captures can be taken as a plurality of first objects whose trajectory is a straight line or a curve; the mobile device 400 focuses on the plurality of first objects to obtain their coordinates, and the above relationship is established using these coordinates (focal plane data). When the mobile device 400 is to capture a second object (another object) during movement, a clear image of the second object can be obtained without focusing, based on the above relationship. In this example, the mobile device 400 can be considered to move the lens while the first and second objects are stationary.
In another example, referring to fig. 7, in the security field the imaging system includes a lens holder 502 on which the lens 104 is rotatably mounted; the lens 104 has a rotation axis R (shown perpendicular to the paper) about which it can rotate. The scenes scanned by the lens 104 can be taken as a plurality of first objects whose trajectory is substantially arc-shaped (non-linear). The lens 104 focuses on the plurality of first objects to acquire their coordinates, and the above relationship is established using these coordinates (focal plane data). When the lens 104 is to photograph a second object (another object) during rotation, a clear image of the second object can be obtained without focusing, based on the above relationship. In this example, the lens 104 can be considered to move while the first and second objects are stationary.
In another example, referring to figs. 8, 14 and 15, in a sequencing platform that acquires nucleic acid information based on imaging, a camera 108 with a lens 104 is disposed above a chip 500; the lens 104 has an optical axis OP, and the camera 108 and/or the chip 500 can move relative to each other along directions perpendicular or parallel to the optical axis OP of the lens. The camera 108 photographs one or more parallel channels 52 of the chip 500, and the sequencing process includes multiple rounds of photographing of fixed areas of the chip 500 by the camera 108; each photographed area is called an FOV (field of view), each round of photographing is called a cycle, and between two cycles reagents are re-introduced to carry out a chemical reaction. For one cycle, two or more FOVs may serve as the first objects and the other FOVs as second objects. The camera 108 focuses on the plurality of first objects to acquire their coordinates, and the above relationship is established using these coordinates (focal plane data). When a second object (another FOV) is to be photographed, a clear image of it can be obtained without focusing, based on the above relationship. In this example, the camera 108 may be fixed while the chip 500 moves perpendicular to the lens optical axis OP to bring different FOVs into the range captured by the lens 104.
In certain embodiments, the method comprises: obtaining a sharp image of at least one of the plurality of first objects without focusing. Thus, when a clear image of a first object needs to be acquired again, the first object need not be refocused, which saves time and improves imaging efficiency.
In particular, since the lens captures an external scene, the scene at the same position may differ from moment to moment. With relative movement between the lens and the objects, once the relationship is established, when the lens again shoots the first object at the same position, a clear image of the first object can be obtained without focusing, based on the relationship; the image content of the object can thus be updated in real time, which is particularly important in the security field.
In some embodiments, the trajectory on which the plurality of first objects are located includes a first track and a second track. This makes the imaging method applicable to a wider range of situations.
Specifically, the first track and the second track may be parallel to each other. In the panoramic-scan example above, referring to fig. 6, the first track X1 and the second track X2 may be the two movement tracks of the mobile device 400 before and after a line (or column) change. In the sequencing example above, the first track 43 and the second track 45 may be located in two parallel channels 52, respectively.
Next, the present invention is explained in detail as practiced in the sequencing field.
Referring to fig. 8, the number of first objects 42 is illustrated as 2. The first track 43 is a linear track, and the 2 first objects 42 and the second object 44 are located at different positions on it; for example, the 2 first objects lie at the two ends of the linear track, and the second object 44 can be located at any position between them. It will be understood that there may be multiple second objects 44 (e.g. 2 or more), arranged in sequence on the first track 43 between the 2 first objects 42. In the example of fig. 8, the first direction A runs from the left to the right of the chip 500, and one first object 42, the second object(s) 44 and the other first object 42 are arranged in sequence along the first track 43 in this direction. In other examples, the second object 44 may be located at positions other than those of the first objects 42. In still other examples, the first track 43 may be a non-linear track, such as a curved track, which may be regarded as a fit of a plurality of line segments, with the first objects 42 and the second object 44 located on the same line segment of the curve.
In some embodiments, the relationship established by the plurality of first objects 42 may be a linear relationship. In the present embodiment, referring to fig. 8, the first track 43 is one or more channels 52 of the chip 500 used in the sequencing process, and the object to be imaged is located at one or more positions (FOVs) of the channels 52. The lens and the first track 43 can move relative to each other along the first direction A during photographing; for example, the lens 104 is fixed, the lens 104 includes an optical axis OP, and the first track 43 moves along a direction perpendicular to the optical axis OP. It will be appreciated that in other examples the first track 43 can move in a direction parallel to the optical axis OP; the first track 43 may be moved according to actual adjustment requirements.
The imaging system includes a camera 108, the lens 104 is mounted on the camera 108, and the camera 108 captures light passing through the lens 104 for imaging.
In some embodiments, moving the lens 104 and/or the plurality of first objects 42 includes at least one of: fixing the lens 104 and moving the first objects 42; fixing the first objects 42 and moving the lens 104; and moving the lens 104 and the first objects 42 simultaneously. Moving the lens 104 and/or the plurality of second objects 44 may be understood similarly. Note that in the present embodiment, since the first objects 42 and the second objects 44 are located on the first track, moving the first objects 42 and/or the second objects 44 can be realized by moving the first track, i.e. the chip.
The lens 104 and the objects can thus be moved in various ways, giving strong adaptability and widening the range of application of the imaging method.
Specifically, when the first track 43 is moved, the first track 43 may be placed on a stage, and the stage may drive the first track 43 and the object to translate back and forth along a direction perpendicular to the optical axis OP of the lens 104, so as to place one of the objects below the lens 104, so that the imaging system images the object.
When the lens 104 is moved, the lens 104 may be mounted on a driving mechanism, and the driving mechanism may drive the lens 104 to translate back and forth along a direction perpendicular to an optical axis OP of the lens 104 in an electric or manual manner, so that the lens 104 is moved above one of the objects, and the imaging system images the object.
Moving the lens 104 and the first track 43 at the same time can be understood as follows: the lens 104 may be moved first and then the first track 43, so that one of the objects is located below the lens 104; alternatively, the first track 43 may be moved first and then the lens 104, or the first track 43 may be moved while the lens 104 is moved, so that the lens ends up above one of the objects.
It will be appreciated that, in establishing the above relationship, two first objects 42 may be selected on the first track 43 and focused to obtain their focal plane positions. Specifically, it has been observed that during sequencing the relative focal plane position between two FOVs on the first track 43, particularly adjacent FOVs, remains unchanged. Thus the so-called relationship can be established by focusing the two first objects 42 and obtaining their focal plane coordinate data. With this relationship, an image of any second object 44 on the first track 43 can be obtained without focusing.
Thus, as an example, the two first objects 42 may be the start-point and end-point FOVs of the first track 43 in one cycle (i.e. the same time period), such as the two end FOVs of the same row of the same channel, as shown in fig. 8. The second object 44 may be any one or more FOVs between the two first objects 42. Fundamentally, the two first objects 42 may also be FOVs at other positions, and the second object 44 need not lie between them: following the rule that two points determine a straight line (relationship), any two positions (objects) on the first track 43 may be selected, the focal plane position corresponding to each position obtained, and the relationship corresponding to the first track 43 derived from those focal plane positions; the imaging system can then acquire an image of the second object 44 without focusing through the established relationship. In practical application scenarios, a coordinate system may be established to digitize/quantify the relative positional relationships, including the so-called focal plane positions. For example, when a sequencing platform is used for image signal acquisition, a three-dimensional coordinate system may be established with xy representing the plane of the first/second track and Z representing the optical axis direction of the lens (e.g. objective); the focal plane position of each position then includes its focal plane Z value.
It should be noted that the cycle reflects the influence of the time factor/image acquisition period. Generally, in a high-precision imaging system, such as a microscope system with a 60x objective and a depth of field of 200 nm, the fluctuation caused by one or more back-and-forth mechanical movements of the first/second track, or of the platform carrying it, easily exceeds the depth of field. Therefore, when using the imaging method of any of the above or following embodiments for repeated, higher-precision continuous imaging of multiple objects, it is better, for multiple objects located on the same track but not in the same image-acquisition time period (e.g. in different mechanical movement directions), to re-fit the relationship from fresh focusing data, which makes the established relationship more accurate. Those skilled in the art will understand that for multi-object continuous imaging with relatively low precision, the large depth of field means the focal position deviation caused by the mechanical reciprocation may be ignored; that is, for a plurality of objects on the same track, in different image acquisition cycles, the relationship determined in any previous image acquisition cycle may be used for imaging.
Figs. 9-11 show the effect of Z-value prediction using the above prediction strategy.
Note that curve C5 in figs. 9-11 is the Z-value curve actually obtained by the camera (the focal line formed by the actual focus positions), with shooting performed by camera focusing alone. Curve C6 is the predicted Z-value curve (the focal line formed by the predicted focus positions).
Fig. 9 shows the Z-value predictions for multiple FOVs of one cycle without interference; figs. 10 and 11 show the Z-value predictions, without intervention, for cases with interference and defocusing: in fig. 10 refocusing succeeded after the defocus, while in fig. 11 it did not.
For objects located on the second track, the positions selected for the first and second objects may be the same as or different from those selected on the first track. In one example, since the objects on the first track are imaged sequentially in the first direction A, the objects on the second track may be imaged sequentially in the second direction B (from the right to the left of the chip 500).
Further, after clear images of one or more second objects 44 on the first track 43 are obtained, the lens 104 and the chip 500 are moved relative to each other along the third direction C, i.e. the direction perpendicular to the extension of the channel 52, so that the lens 104 is located above the second track 45; clear images of the second objects 44 there are then obtained without focusing, using the imaging system and the relationship established from the plurality of first objects on the second track. In the illustrated embodiment, the third direction C is perpendicular to the first direction A and the second direction B.
Further, in the example shown in fig. 8, first tracks 43 and second tracks 45 alternate from top to bottom. After image acquisition for the one or more second objects 44 of the first first-track 43 is completed, the lens 104 and the chip 500 are moved relative to each other so that the lens 104 is above the second objects 44 of the first second-track 45, and clear images of those one or more second objects 44 are acquired. The lens 104 and the chip 500 are then moved again so that the lens 104 is above the second objects 44 of the second first-track 43, and their sharp images are acquired; this continues until sharp images of all objects on the first tracks 43 and second tracks 45 have been acquired.
In summary, since the focal plane position (e.g. Z value) of each object to be imaged is predicted from the relationship, the other FOVs are imaged without focusing, which improves imaging efficiency and accuracy. Applied to photographing the same or similar areas repeatedly, this enables rapid, continuous image acquisition of multiple objects, and omitting the focusing step during continuous shooting enables fast scanning photography. Furthermore, combined with the camera's automatic focus-tracking system, focal plane prediction yields better picture quality and solves the problem of the camera being unable to re-track focus after interference and defocusing. More broadly, focal plane prediction gives the camera a degree of intelligence: with prior knowledge, the focusing process can be assisted and accelerated, or even omitted, which has important extended applications, especially in continuous shooting.
In some embodiments, the imaging system includes an imaging device 102 and a stage, the imaging device 102 includes a lens 104 and a focusing module 106, the lens 104 includes an optical axis OP, the lens 104 is capable of moving along the optical axis OP for focusing, and the first object 42 and the second object 44 are located on the stage.
Hereinafter, the focusing process used when establishing the relationship from the coordinates of the plurality of first objects 42 according to the present invention is described through specific embodiments. Note that identically named elements in different embodiments are limited to the explanation given within their own embodiment unless otherwise specified, and identically named elements in different embodiments should not be read across or conflated.
The first embodiment is as follows:
Referring to figs. 12-14, focusing includes the following steps: (a) emitting light onto the first object with the focusing module; (b) moving the lens to a first set position; (c) moving the lens from the first set position toward the first object by a first set step and judging whether the focusing module receives light reflected by the first object; (d) when the focusing module receives light reflected by the first object, moving the lens from the current position to a second set position, the second set position lying within a first range, the first range containing the current position and allowing the lens to move along the optical axis direction; (e) moving the lens from the second set position by a second set step, the second set step being smaller than the first set step, and acquiring an image of the first object with the imaging device at each step position; (f) evaluating the images of the first object and achieving focus according to the obtained image evaluation results.
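The flow of steps (a)-(f) can be rendered schematically as follows; this Python sketch is an illustration only, and the hardware interfaces (focus_module, lens, camera) and the evaluate/ok callbacks are hypothetical placeholders, not an actual driver API:

def coarse_to_fine_focus(lens, focus_module, camera, evaluate, ok,
                         first_pos, s1, s2, second_pos):
    focus_module.emit_light()                      # (a) light onto the first object
    lens.move_to(first_pos)                        # (b) go to the first set position
    while not focus_module.reflection_received():  # (c) coarse approach, step S1
        lens.step_toward_object(s1)
    lens.move_to(second_pos)                       # (d) jump into the first range
    while lens.within_first_range():               # (e) fine scan, step S2 < S1
        image = camera.capture()
        result = evaluate(image)                   # (f) evaluate each image
        if ok(result):                             #     preset condition met
            return lens.position()                 #     save this focus position
        lens.step(s2)
    return None                                    # condition not met in this pass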
With this imaging method, the plane of sharp imaging of the target object, i.e. the clear plane/focal plane, can be found quickly and accurately. The method is particularly suitable for devices containing precision optical systems, such as optical detection devices with high-magnification lenses, in which the clear plane is not easily found; costs can thereby be reduced.
Specifically, in the focusing step the first object is an object whose focal plane position needs to be obtained. For example, in the sequencing field, if a relationship between objects on the first track 43 is to be established, two first objects may be selected on the first track 43 and focused sequentially or simultaneously to obtain two sets of focal plane position data, from which the relationship is established; if a relationship between objects on the second track is to be established, two first objects can be selected on the second track and focused to obtain their focal plane position data, from which the relationship is established.
Referring to figs. 13 and 14, in the embodiment of the present invention the objects are a plurality of positions (FOVs) of a sample 300 used in sequencing; in particular, when the relationship is established, the object being focused may be the first object. The sample 300 includes a carrier 200 and a sample 302 to be tested located on the carrier; the sample 302 to be tested comprises biomolecules, such as nucleic acids, and the lens 104 is located above the carrier 200. The carrier 200 has a front panel 202 and a back panel (lower panel), each with two surfaces, and the sample 302 to be tested is attached to the upper surface of the lower panel, i.e. it lies below the lower surface 204 of the front panel 202. Since the imaging device 102 is used to acquire an image of the sample 302 to be tested, the sample 302 corresponds to the position (FOV) being photographed, and because it lies below the lower surface 204 of the front panel 202 of the carrier 200, when the focusing process starts the lens 104 moves to find the medium interface 204 where the sample 302 is located, improving the success rate of acquiring a clear image with the imaging device 102. In the embodiment of the present invention, the sample 302 to be tested is a solution and the front panel 202 of the carrier 200 is glass, so the medium interface 204 between the carrier 200 and the sample 302 is the lower surface 204 of the front panel 202, i.e. the glass-liquid interface. The sample 302 whose image is to be acquired by the imaging device 102 lies below the lower surface 204 of the front panel 202, and determining and finding the plane at which the sample 302 is imaged sharply, according to the images acquired by the imaging device 102, may be called focusing. In one example, the front panel 202 has a thickness of 0.175 mm.
In other embodiments, the carrier 200 can be a slide, with the sample 302 to be tested placed on the slide or clamped between two slides. In another embodiment, the carrier 200 may be a reaction device, such as a chip with a sandwich structure having carrier panels above and below, with the sample 302 to be tested disposed in the chip.
In the present embodiment, referring to fig. 14, the imaging device 102 includes a microscope 107 and a camera 108; the lens 104 includes the objective lens 110 of the microscope and a camera lens 112. The focusing module 106 can be fixed to the camera lens 112 via a dichroic beam splitter 114 located between the camera lens 112 and the objective lens 110. The dichroic beam splitter 114 includes a dual C-mount splitter. The dichroic beam splitter 114 reflects light emitted from the focusing module 106 to the objective lens 110 and lets visible light pass through the camera lens 112 into the camera 108, as shown in fig. 14.
In the embodiment of the present invention, the lens 104 moves along the optical axis OP. Movement of the lens 104 may refer to movement of the objective lens 110, and the position of the lens 104 may refer to the position of the objective lens 110. In other embodiments, other lens elements of the lens 104 may be moved to achieve focusing. In addition, the microscope 107 further includes a tube lens 111 between the objective lens 110 and the camera 108.
In this embodiment, the stage can drive the sample 300 to move in a plane (e.g. the XY plane) perpendicular to the optical axis OP (e.g. the Z axis) of the lens 104, and/or can drive the sample 300 to move along the optical axis OP (e.g. the Z axis) of the lens 104.
In other embodiments, the plane in which the stage drives the sample 300 to move is not perpendicular to the optical axis OP, i.e. the included angle between the motion plane of the sample and the XY plane is not 0, and the imaging method is still applicable.
In addition, the imaging device 102 can also drive the objective lens 110 to move along the optical axis OP of the lens 104 for focusing. In some examples, the imaging device 102 drives the objective lens 110 to move using an actuator such as a stepper motor or a voice coil motor.
In the present embodiment, when establishing the coordinate system, as shown in fig. 13, the positions of the objective lens 110, the stage, and the sample 300 may be set on the negative axis of the Z-axis, and the first set position may be a coordinate position on the negative axis of the Z-axis. It is understood that, in other embodiments, the relationship between the coordinate system and the camera and the objective lens 110 may be adjusted according to actual situations, and is not limited in particular.
In one example, the imaging device 102 comprises a total internal reflection fluorescence microscope, the objective lens 110 has 60x magnification, and the first set step S1 is 0.01 mm. This value of S1 is suitable: if S1 is too large, the lens may step across the acceptable focusing range, while if S1 is too small, the time overhead increases.
When the focusing module 106 does not receive the light reflected from the first object, the lens 104 continues to move toward the sample 300 and the first object by the first set step.
In this embodiment, the imaging system is applicable to, or comprises, a sequencing system.
In this embodiment, relative to the current position, the first range includes a first section and a second section on opposite sides, the second section being defined as the one closer to the first object. Step (e) includes: (i) when the second set position lies in the second section, moving the lens from the second set position in the direction away from the first object and acquiring an image of the first object with the imaging device at each step position; or (ii) when the second set position lies in the first section, moving the lens from the second set position in the direction approaching the first object and acquiring an image of the first object with the imaging device at each step position. The movement of the lens can thus be controlled according to the specific location of the second set position, and the required images acquired rapidly.
Specifically, in one example, the current position may be taken as the origin oPos and a coordinate axis Z1 established along the optical axis of the lens, the first section being the positive interval and the second section the negative interval, each extending over a range rLen; that is, the first range is [oPos - rLen, oPos + rLen]. The second set position lies in the negative interval, at (oPos - 3 × r0), where r0 denotes the second set step. The imaging device begins image acquisition at (oPos - 3 × r0) and moves away from the first object.
Note that the coordinate axis Z1 established in the above example coincides with the Z axis of fig. 13, the first range lying on the negative half of the Z axis. This simplifies the control of the imaging method: for example, knowing only the positional relationship between the origin of the Z axis and the origin oPos is enough to map the lens position on axis Z1 to its position on the Z axis.
In this embodiment, step (f) includes: comparing the image evaluation result with a preset condition and, if the image evaluation result satisfies the preset condition, saving the position of the lens 104 corresponding to that image; if the image evaluation result does not satisfy the preset condition, moving the lens 104 to a third set position located in the other interval of the first range, different from the interval containing the second set position, i.e. starting reverse photographing and focusing. For example, if during part (i) of step (e) none of the image evaluation results meets the preset condition, moving the lens 104 to the third set position corresponds to moving it to the starting position of part (ii) of step (e); reverse photographing and focusing, i.e. part (ii) of step (e), is then performed. The focus position is thus searched within the first range, effectively improving the efficiency of the imaging method.
Specifically, following the example of the above embodiment: the second set position lies at (oPos - 3 × r0) in the negative interval, the lens moves upward from the second set position, and the imaging device 102 captures an image at each step position. If no image evaluation result satisfies the preset condition, the lens 104 is moved to a third set position in the positive interval, e.g. (oPos + 3 × r0); the imaging device 102 then captures images starting from (oPos + 3 × r0) while moving toward the first object, and focusing is achieved according to the obtained image evaluation results. When an image evaluation result satisfies the preset condition, the current position of the lens 104 corresponding to that image is saved as the saved position, so that the imaging device 102 can output clear images when photographing during the sequencing reaction.
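The reverse-photographing fallback just described can be sketched as follows (hypothetical names; scan_from stands for the per-step capture-and-evaluate loop of step (e)):

def bidirectional_focus(scan_from, o_pos, r0):
    # Scan from (oPos - 3 × r0) moving away from the object; if no image meets
    # the preset condition, scan from (oPos + 3 × r0) moving toward it.
    pos = scan_from(start=o_pos - 3 * r0, toward_object=False)
    if pos is None:
        pos = scan_from(start=o_pos + 3 * r0, toward_object=True)
    return pos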
In the present embodiment, the image evaluation result includes a first evaluation value and a second evaluation value, the second set step includes a coarse step and a fine step, and step (f) includes: moving the lens 104 in the coarse step until the first evaluation value of the image at the corresponding position is not more than the first threshold; the lens 104 then continues moving in the fine step until the second evaluation value of the image at the corresponding position reaches its maximum, and the position of the lens 104 corresponding to that image is saved. Thus, the coarse step allows the lens 104 to quickly approach the in-focus position, and the fine step ensures that the lens 104 can reach it.
Specifically, the position of the lens 104 corresponding to the image of the largest second evaluation value may be saved as the in-focus position. At each step position of image acquisition by the imaging device 102, a first evaluation value and a second evaluation value are calculated for the acquired image.
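For illustration, a minimal Python sketch of this coarse-to-fine rule follows; acquire, eval1 and eval2 are assumed stand-ins for capturing an image at a position and computing the first and second evaluation values, and the max_steps guard is an addition for safety:

    def coarse_fine_focus(start, coarse, fine, acquire, eval1, eval2,
                          first_threshold, max_steps=1000):
        # Coarse phase: approach until the first evaluation value (spot size)
        # falls to the first threshold.
        pos = start
        for _ in range(max_steps):
            if eval1(acquire(pos)) <= first_threshold:
                break
            pos += coarse
        # Fine phase: continue until the second evaluation value stops rising.
        best_pos, best_val = pos, eval2(acquire(pos))
        for _ in range(max_steps):
            pos += fine
            val = eval2(acquire(pos))
            if val <= best_val:
                break                  # the previous position held the maximum
            best_pos, best_val = pos, val
        return best_pos                # saved as the in-focus position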
In one example, during sequencing, the object carries an optically detectable label, such as a fluorescent label; the fluorescent molecules can be excited to fluoresce under irradiation by laser light of a specific wavelength, and the image collected by the imaging device 102 includes bright spots that may correspond to positions of fluorescent molecules. It can be understood that when the lens 104 is at the in-focus position, the bright spot corresponding to a fluorescent molecule in the collected image is small and bright; when the lens 104 is at an out-of-focus position, the bright spot is large and dim.
In this embodiment, the size of the bright spot on the image and the intensity of the bright spot are used to evaluate the image.
For example, the size of the bright spots of an image is reflected by the first evaluation value. In one example, the first evaluation value is determined by counting the sizes of the connected components of the bright spots on the image, where connected pixels whose values are larger than the average pixel value of the image are defined as one connected component. The first evaluation value may be determined, for example, by calculating the size of the connected component of each bright spot and taking the average of these sizes, characteristic of the image, as the first evaluation value; alternatively, the connected component sizes may be sorted in ascending order and the size at the 50th, 60th, 70th, 80th, or 90th percentile taken as the first evaluation value of the image.
In one example, the size of the connected component Area corresponding to a bright spot of an image is A × B, where A is the size of the connected component along the row through the center of the matrix corresponding to the bright spot, and B is the size along the column through that center. The matrix corresponding to a bright spot is defined as a k1 × k2 matrix with odd numbers of rows and columns, comprising k1 × k2 pixel points.
In one example, the image is first binarized into a digital matrix, after which the connected component size is calculated. For example, taking the average pixel value of the image as the reference, a pixel not smaller than the average is denoted by 1 and a pixel smaller than the average by 0, as shown in fig. 15. In fig. 15, the bold enlarged digit indicates the center of the matrix corresponding to the bright spot, and the bold frame indicates a 3 × 3 matrix. The connected pixels marked 1 form the connected component, and the size of the connected component corresponding to the bright spot is A × B with A = 3 and B = 6.
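For illustration, the binarization and the row/column measurement can be sketched as below; the spot centre coordinates are assumed to be known already, and spot_area() is an assumed helper rather than part of this embodiment:

    import numpy as np

    def spot_area(img, row, col):
        # Binarize against the mean pixel value, then measure the connected run
        # of 1s through the spot centre along its row (A) and its column (B).
        binary = (img >= img.mean()).astype(int)

        def run_length(line, c):
            lo = c
            while lo > 0 and line[lo - 1]:
                lo -= 1
            hi = c
            while hi < len(line) - 1 and line[hi + 1]:
                hi += 1
            return hi - lo + 1

        A = run_length(binary[row, :], col)
        B = run_length(binary[:, col], row)
        return A * B                        # Area = A x B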
The first threshold may be set empirically or from prior data. In one example, the first evaluation value reflects the size of the bright spots on an image; the inventors observed that the connected component Area first becomes smaller and then larger as the lens approaches and then moves past the clear plane, and determined the first threshold from the Area values and their pattern of change over many focusing runs that located the clear plane. In one example, the first threshold is set to 260. Note that the first threshold may be related to the coarse and fine step settings: it can be sized so that, when imaging the object in the coarse step, the lens does not cross the focal plane of the imaging device.
In some embodiments, the second evaluation value or the third evaluation value is determined from the Scores of the bright spots of the image, where the Score of a bright spot is ((k1 × k2 - 1) × CV - EV)/((CV + EV)/(k1 × k2)), CV denotes the central pixel value of the matrix corresponding to the bright spot, and EV denotes the sum of the non-central pixel values of that matrix. The second evaluation value or the third evaluation value can thus be determined.
Specifically, after the bright spots of an image are determined, the Score values of all the bright spots may be sorted in ascending order. When the number of bright spots is larger than a preset number, for example a preset number of 30 with 50 bright spots present, the second evaluation value may take the Score value at the 50th, 60th, 70th, 80th, or 90th percentile, which excludes the interference of the 50%, 60%, 70%, 80%, or 90% of bright spots of relatively poor quality. Generally, a bright spot whose central and edge intensities/pixel values differ greatly, i.e. a converged bright spot, is considered to correspond to a molecule to be detected. In nucleic acid detection, the molecule to be detected may be a nucleic acid molecule corresponding to the detection target.
When the number of bright spots is smaller than the preset number, for example 10 bright spots, too few to be statistically meaningful, the bright spot with the largest Score value is taken to represent the image; that is, the Score value at the 100th percentile is taken as the third evaluation value.
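A sketch of the Score statistic and the percentile rule follows, assuming each bright spot is available as a k1 × k2 numpy patch centred on the spot; the helper names and the default quantile are illustrative assumptions:

    import numpy as np

    def spot_score(patch):
        # Score = ((k1*k2 - 1)*CV - EV) / ((CV + EV) / (k1*k2))
        k1, k2 = patch.shape
        cv = float(patch[k1 // 2, k2 // 2])     # central pixel value CV
        ev = float(patch.sum()) - cv            # sum of non-central values EV
        return ((k1 * k2 - 1) * cv - ev) / ((cv + ev) / (k1 * k2))

    def evaluation_value(scores, preset_number=30, quantile=0.9):
        scores = sorted(scores)
        if len(scores) > preset_number:         # second evaluation value
            return scores[int(quantile * (len(scores) - 1))]
        return scores[-1]                       # third evaluation value (100th pct)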
In the present embodiment, the image evaluation result includes a first evaluation value, a second evaluation value, and a third evaluation value, and the image includes a plurality of pixels. The preset condition is either that the number of bright spots on the image is larger than a preset value, the first evaluation value of the image at the corresponding position is not larger than the first threshold, and the second evaluation value of that image is the largest among the second evaluation values of the N images before and after it; or that the number of bright spots on the image is smaller than the preset value, the first evaluation value of the image at the corresponding position is not larger than the first threshold, and the third evaluation value of that image is the largest among the third evaluation values of the N images before and after it. Evaluating with different evaluation values according to the number of bright spots thus makes the focusing of the imaging method more accurate.
Specifically, in one example, the first evaluation value may be the connected component size corresponding to a bright spot of the image, as in the above embodiment. Where the second and third evaluation values differ, the Score percentile used differs according to whether the number of bright spots is statistically meaningful: for example, a non-100th-percentile Score value and the 100th-percentile Score value, respectively.
In one example, where single molecule sequencing is performed, the bright spots on the collected image may come from one or more optically detectable labeled molecules carried by the sample to be tested, or from other interference.
In this embodiment, bright spots are detected, and the bright spots corresponding to/coming from labeled molecules are identified, for example using a k1 × k2 matrix. Specifically, the bright spots on the image are detected using the following method:
bright spot detection is performed on the image using a k1 × k2 matrix: a matrix whose central pixel value is not less than any non-central pixel value of the matrix is judged to correspond to a bright spot, where k1 and k2 are both odd numbers greater than 1 and the k1 × k2 matrix comprises k1 × k2 pixel points.
The method exploits the difference between the brightness/intensity of the fluorescence signal and that of the background, and can simply and quickly detect information coming from labeled-molecule signals. In some embodiments, it is additionally required that the central pixel value of the matrix be greater than a first preset value and that every non-central pixel value be greater than a second preset value.
The first and second preset values can be set from experience or from the pixel/intensity data of normal bright spots in a certain number of normal images; here "normal images" and "normal bright spots" refer to images obtained by the imaging system at the clear-plane position that look normal to the naked eye, for example clear images with a clean background and bright spots of uniform size and brightness. In one embodiment, the first and second preset values are related to the average pixel value of the image. For example, setting the first preset value to 1.4 times the average pixel value of the image and the second preset value to 1.1 times the average pixel value makes it possible to exclude interference and obtain bright spot detections coming from the label.
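For illustration, the sliding-window detection with these thresholds can be sketched as follows; detect_spots() and its factor parameters are assumed names, and the brute-force scan is a sketch rather than an optimized implementation:

    import numpy as np

    def detect_spots(gray, k1=3, k2=3, f1=1.4, f2=1.1):
        # A k1 x k2 window corresponds to a bright spot if its centre pixel is
        # >= every other pixel in the window, the centre exceeds f1 * mean, and
        # every non-centre pixel exceeds f2 * mean.
        mean = gray.mean()
        t1, t2 = f1 * mean, f2 * mean
        spots = []
        for r in range(k1 // 2, gray.shape[0] - k1 // 2):
            for c in range(k2 // 2, gray.shape[1] - k2 // 2):
                win = gray[r - k1 // 2:r + k1 // 2 + 1,
                           c - k2 // 2:c + k2 // 2 + 1]
                cv = win[k1 // 2, k2 // 2]
                others = np.delete(win.ravel(), (k1 * k2) // 2)  # drop the centre
                if cv >= others.max() and cv > t1 and others.min() > t2:
                    spots.append((r, c))
        return spots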
Specifically, in one example the image is a color image in which each pixel has three pixel values; the color image can be converted into a grayscale image before detection, reducing the computation and complexity of the detection process. The non-grayscale image may be converted to grayscale using, but not limited to, a floating-point algorithm, an integer method, a shift method, or an averaging method. Of course, color images can also be detected directly: the pixel value comparisons above can then be read as comparisons of three-dimensional values, i.e. arrays with three elements, and the relative order of such multi-dimensional values can be defined according to experience and need; for example, the three-dimensional value a may be considered larger than the three-dimensional value b when any two of a's components exceed the corresponding components of b.
In another example, the image is a grayscale image, and the pixel values of the grayscale image are the same as the grayscale values. Therefore, the average pixel value of the image is the average gray value of the image.
In one example, the first threshold is 260, the preset number is 30, and N is 2. That is, when the first evaluation value of the image at a position is not more than 260 and the number of bright spots exceeds 30, the second evaluation values are computed and the position whose image has the largest second evaluation value is taken as the clear-plane position, provided the 2 positions on each side of it satisfy the condition that the second evaluation value of the corresponding image is greater than zero. When the first evaluation value of the image at a position is not more than 260 and the number of bright spots is less than 30, the third evaluation values are computed and the position whose image has the largest third evaluation value is taken as the clear-plane position, provided the 2 positions on each side of it satisfy the condition that the third evaluation value of the corresponding image is greater than zero.
And if the image meeting the condition is not found, judging that the image evaluation result does not meet the preset condition.
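For illustration, this clear-plane decision can be sketched as below; evals holds the chosen evaluation value (second or third) for the image at each lens position, and is_clear_position() is an assumed helper:

    def is_clear_position(evals, i, n=2):
        # Position i is the clear-plane position if its evaluation value is the
        # largest in the window of N=2 neighbours on each side and every value
        # in that window is greater than zero.
        if i < n or i + n >= len(evals):
            return False
        window = evals[i - n:i + n + 1]
        return evals[i] == max(window) and min(window) > 0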
In one example, k1 = k2 = 3; the 3 × 3 matrix then contains 9 pixels and EV is the sum of the 8 non-central pixel values.
In the present embodiment, if focusing cannot be completed according to the image evaluation results, the lens is moved, in the direction perpendicular to the optical axis, to the next image capture area (FOV) of the object for focusing. Refocusing can thus start from another first object, avoiding dwelling on a first object that cannot be brought into focus, and saving time.
In this embodiment, the imaging method further includes: prompting a focusing failure when the number of first objects on which focusing has been unsuccessful exceeds a preset number. The cause of the failure, for example a misplaced object or a malfunction of the imaging apparatus, can then be removed manually instead of focusing indefinitely, which saves time. In one example the preset number is 3; that is, a focusing failure is prompted when focusing has been unsuccessful on more than 3 first objects. The failure may be prompted by displaying an image or text, playing a sound, and so on.
In this embodiment, the imaging method further includes: judging whether the position of the lens exceeds the first range, and exiting focusing when it does. Exiting focusing once the lens leaves the first range avoids excessive focusing time and increased power consumption.
Specifically, in the example of the above embodiment, the first range is [oPos - rLen, oPos + rLen].
In this embodiment, when the lens 104 moves, it is judged whether the current position of the lens 104 exceeds a fourth set position; when it does, the movement of the lens 104 is stopped. The first and fourth set positions thus bound the movement range (the first range) of the lens 104, so that the lens 104 can stop moving when focusing fails, avoiding waste of resources or damage to equipment, or can be refocused when focusing fails, improving the automation of the imaging method.
In a total internal reflection imaging system, for example, to find the medium interface quickly, the settings are chosen so that the movement range of the lens 104 is as small as the solution allows. For example, in a total internal reflection imaging device with a 60× objective lens, the movement range of the lens 104 can be set to 200 μm ± 10 μm or to [190 μm, 250 μm], based on the optical path characteristics and empirical summary.
In this embodiment, once the movement range is determined, setting either one of the fourth set position and the first set position determines the other. In one example, the fourth set position is set to the position one depth of field below the lowest position of the upper surface 205 of the front panel 202 of the reaction apparatus 200, and the movement range of the lens 104 is set to 250 μm, which determines the first set position. In this example, the coordinate of the position one depth of field below decreases in the negative Z-axis direction.
Specifically, in the present embodiment, the movement range is a segment on the negative Z axis. In one example, the first set position is nearlimit and the fourth set position is farlimit, with both coordinates on the negative Z axis: nearlimit = -6000 um and farlimit = -6350 um, so the movement range defined between them is 350 um. Accordingly, when the coordinate of the current position of the lens 104 is smaller than the coordinate of the fourth set position, the current position of the lens 104 is judged to exceed the fourth set position. In fig. 13, farlimit is the position one depth of field L below the lowest position of the upper surface 205 of the front panel 202 of the reaction apparatus 200, where L is the depth of field of the lens 104.
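For illustration, the range check can be sketched with the example coordinates; exceeds_farlimit() is an assumed helper name:

    NEARLIMIT = -6000   # um, first set position (nearlimit)
    FARLIMIT = -6350    # um, fourth set position (farlimit), 350 um below

    def exceeds_farlimit(z):
        # The lens may only move within [FARLIMIT, NEARLIMIT] on the negative
        # Z axis; a coordinate below FARLIMIT stops the movement.
        return z < FARLIMIT

    assert not exceeds_farlimit(-6200) and exceeds_farlimit(-6400)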
It should be noted that, in other embodiments, the coordinate position corresponding to the first setting position and/or the fourth setting position may be specifically set according to the actual situation, and is not specifically limited herein.
In the present embodiment, the focusing module 106 includes a light source 116 and a light sensor 118; the light source 116 emits light onto the first object 42, and the light sensor 118 receives the light reflected by the first object 42. The focusing module 106 can thus both emit and receive light.
Specifically, in the embodiment of the present invention, the light source 116 may be an infrared light source and the light sensor 118 a photodiode, giving low cost and high detection accuracy. Infrared light emitted by the light source 116 enters the objective lens 110 after reflection by the dichroic beam splitter and is projected through the objective lens 110 onto the sample 300 and the object, which may reflect it back. In the embodiment of the present invention, when the sample 300 comprises the carrier 200 and the sample 302 to be measured, the received light reflected by the object is the light reflected by the lower surface 204 of the front panel of the carrier 200.
Whether infrared light reflected by the object can enter the objective lens 110 and be received by the light sensor 118 depends primarily on the distance between the objective lens 110 and the object. Therefore, when the focusing module 106 determines that it has received the infrared light reflected by the object, it can be concluded that the distance between the objective lens 110 and the object is within the range suitable for optical imaging and usable for imaging by the imaging device 102. In one example, the distance is 20-40 um.
At this time, the lens 104 is moved by a second set step smaller than the first set step, so that the imaging system can find the optimal imaging position of the lens 104 in a smaller range.
In this embodiment, referring to fig. 16, when the focusing module 106 receives the light reflected by the first object, the imaging method further includes step (g): moving the lens 104 toward the first object by a third set step that is smaller than the first set step and larger than the second set step, calculating a first light intensity parameter from the light intensity received by the focusing module 106, and judging whether the first light intensity parameter is larger than a first set light intensity threshold; step (d) is performed when the first light intensity parameter is larger than the first set light intensity threshold. Comparing the first light intensity parameter with the first set light intensity threshold in this way excludes from focusing the interference of light signals whose contrast with the light reflected at the medium interface is very weak.
When the first light intensity parameter is not greater than the first set light intensity threshold, the lens 104 is caused to continue moving toward the first object at a third set step length.
In the present embodiment, the focusing module 106 includes two light sensors 118, the two light sensors 118 are used for receiving the light reflected by the first object, and the first light intensity parameter is an average value of the light intensities of the light received by the two light sensors 118. In this way, the first light intensity parameter is calculated by the average of the light intensities of the light received by the two light sensors 118, so that it is more accurate to exclude weak light signals.
Specifically, the first light intensity parameter may be denoted SUM, i.e. SUM = (PD1 + PD2)/2, where PD1 and PD2 denote the light intensities received by the two light sensors 118. In one example, the first set light intensity threshold nSum is 40.
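For illustration, this check can be written out directly; the function name is an assumption and the readings are example values:

    N_SUM = 40   # first set light intensity threshold nSum from the example

    def first_intensity_parameter(pd1, pd2):
        # SUM = (PD1 + PD2) / 2, the mean of the two photodiode readings.
        return (pd1 + pd2) / 2

    # The lens keeps stepping toward the object by the third set step until SUM
    # exceeds the threshold; e.g. readings of 35 and 55 pass the check.
    assert first_intensity_parameter(35, 55) > N_SUM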
In one example, the third set step size S2 is 0.005 mm. It is understood that, in other examples, the third setting step may also take other values, and is not limited in particular.
Example two:
it should be noted that the imaging system in this embodiment can adopt the structural diagram of the imaging system in the first embodiment; the focusing method, i.e. the focusing logic, of the second embodiment differs from that of the first embodiment, but the imaging system structure used is basically the same.
Referring to fig. 13, 14 and 17, focusing includes the following steps: S11, emitting light onto the first object with the focusing module 106; S12, moving the lens 104 to a first set position; S13, moving the lens 104 from the first set position toward the first object by a first set step and judging whether the focusing module 106 receives the light reflected by the first object; S14, when the focusing module 106 receives the light reflected by the first object, moving the lens 104 by a second set step smaller than the first set step, capturing an image of the first object with the imaging device 102, and judging whether the sharpness value of the captured image reaches a set threshold; S15, when the sharpness value of the image reaches the set threshold, saving the current position of the lens 104 as the save position.
With this focusing method, the plane in which the target object is clearly imaged, i.e. the clear plane, can be found quickly and accurately. The method is particularly suitable for devices containing precise optical systems in which the clear plane is not easily found, such as optical detection devices with high-magnification lenses.
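For illustration only, the S11-S15 flow can be sketched in Python; module, lens and camera are assumed stub objects standing in for the focusing module 106, the lens 104 and the imaging device 102, and their method names are not interfaces defined by this embodiment:

    def focus(module, lens, camera, s1, s2, first_position, threshold):
        module.emit_light()                        # S11: illuminate the first object
        lens.move_to(first_position)               # S12: go to the first set position
        while not module.reflection_received():    # S13: coarse search for the
            lens.step_toward_object(s1)            #      reflected-light signal
        while camera.sharpness() < threshold:      # S14: fine search on sharpness
            lens.step_toward_object(s2)            #      with the smaller step s2
        return lens.position                       # S15: save as the save position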
Specifically, in the focusing step, the first object 42 is an object whose focal plane position needs to be obtained. In the sequencing field, for example, if a relationship among objects on the first track 43 is to be established, two first objects 42 may be selected on the first track 43 and focused sequentially or simultaneously to obtain two sets of focal plane position data, from which the relationship is established; if a relationship among objects on the second track is to be established, two first objects 42 can likewise be selected and focused on the second track, and the relationship established from the two sets of focal plane position data so obtained.
Referring to fig. 13, in the embodiment of the present invention, the objects are a plurality of positions (FOVs) of the sample 300 used in sequencing; in particular, when the relationship is being established, the object being focused may be the first object. The sample 300 includes a carrier 200 and a sample 302 to be tested located on the carrier; the sample 302 to be tested is a biomolecule, such as a nucleic acid, and the lens 104 is located above the carrier 200. The carrier 200 has a front panel 202 and a back panel (lower panel), each with two surfaces; the sample 302 to be tested is attached to the upper surface of the lower panel, i.e. it lies below the lower surface 204 of the front panel 202. Since the imaging device 102 is used to acquire images of the sample 302 to be tested, the sample 302 is the position (FOV) photographed, and because it lies below the lower surface 204 of the front panel 202 of the carrier 200, the lens 104 moves at the start of the focusing process to find the medium interface 204 where the sample 302 is located, improving the success rate of acquiring a clear image. In the embodiment of the present invention, the sample 302 to be tested is a solution and the front panel 202 of the carrier 200 is glass, so the medium interface 204 between the carrier 200 and the sample 302 is the lower surface 204 of the front panel 202, i.e. the glass-liquid interface. Determining and finding, from the images acquired by the imaging device 102, the clear plane in which the sample 302 is imaged sharply may be called focusing. In one example, the front panel 202 has a thickness of 0.175 mm.
In this embodiment, the carrier 200 can be a slide, with the sample 302 to be tested placed on the slide or clamped between two slides. In some embodiments, the carrier 200 may be a reaction device, such as a chip with a sandwich structure having upper and lower panels, with the sample 302 to be tested placed in the chip.
In the present embodiment, referring to fig. 14, the imaging device 102 includes a microscope 107 and a camera 108; the lens 104 includes the objective lens 110 of the microscope and a camera lens 112. The focusing module 106 can be fixed to the camera lens 112 through a dichroic beam splitter 114, which is located between the camera lens 112 and the objective lens 110 and may include a dual C-mount splitter. The dichroic beam splitter 114 reflects the light emitted by the focusing module 106 toward the objective lens 110 while allowing visible light to pass through the camera lens 112 into the camera 108, as shown in fig. 14.
In the embodiment of the present invention, the movement of the lens 104 is along the optical axis OP. The movement of the lens 104 may refer to the movement of the objective lens 110, and the position of the lens 104 may refer to the position of the objective lens 110. In other embodiments, other lenses of the lens 104 may be moved to achieve focusing. In addition, the microscope 107 further includes a tube lens 111 between the objective lens 110 and the camera 108.
In this embodiment, the stage can drive the sample 300 to move in a plane (e.g. the XY plane) perpendicular to the optical axis OP (e.g. the Z axis) of the lens 104, and/or can drive the sample 300 to move along the optical axis OP of the lens 104.
In other embodiments, the plane in which the stage drives the sample 300 to move is not perpendicular to the optical axis OP, i.e. the included angle between the motion plane of the sample and the XY plane is not 0, and the imaging method is still applicable.
In addition, the imaging device 102 can also drive the objective lens 110 to move along the optical axis OP of the lens 104 for focusing. In some examples, the imaging device 102 drives the objective lens 110 to move using an actuator such as a stepper motor or a voice coil motor.
In the present embodiment, when establishing the coordinate system, as shown in fig. 13, the positions of the objective lens 110, the stage, and the sample 300 may be set on the negative axis of the Z-axis, and the first set position may be a coordinate position on the negative axis of the Z-axis. It is understood that, in other embodiments, the relationship between the coordinate system and the camera and the objective lens 110 may be adjusted according to actual situations, and is not limited in particular.
In one example, the imaging device 102 comprises a total internal reflection fluorescence microscope, the objective lens 110 has 60× magnification, and the first set step S1 is 0.01 mm. This value of S1 is suitable: a larger S1 could cross the acceptable focusing range, while a smaller S1 would increase the time overhead.
When the focusing module 106 does not receive the light reflected by the first object, the lens 104 is moved further along the optical axis OP toward the sample 300 and the first object by a first set step.
In the present embodiment, when the sharpness value of the image does not reach the set threshold, the lens 104 is moved continuously along the optical axis OP by a second set step.
In this embodiment, the imaging system is applicable to, or comprises, a sequencing system.
In this embodiment, when the lens 104 moves, it is judged whether the current position of the lens 104 exceeds a second set position; when it does, the movement of the lens 104 is stopped or a focusing step is performed. The first and second set positions thus bound the movement range of the lens 104, so that the lens 104 can stop moving when focusing is unsuccessful, avoiding waste of resources or damage to equipment, or can be refocused when focusing is unsuccessful, improving the automation of the imaging method.
In a total internal reflection imaging system, for example, to find the medium interface quickly, the settings are chosen so that the movement range of the lens 104 is as small as the solution allows. For example, in a total internal reflection imaging device with a 60× objective lens, the movement range of the lens 104 can be set to 200 μm ± 10 μm or to [190 μm, 250 μm], based on the optical path characteristics and empirical summary.
In this embodiment, once the movement range is determined, setting either one of the second set position and the first set position determines the other. In one example, the second set position is set to the position one depth of field below the lowest position of the upper surface 205 of the front panel 202 of the reaction apparatus 200, and the movement range of the lens 104 is set to 250 μm, which determines the first set position. In this example, the coordinate of the position one depth of field below decreases in the negative Z-axis direction.
Specifically, in the embodiment of the present invention, the movement range is a segment on the negative Z axis. In one example, the first set position is nearlimit and the second set position is farlimit, with both coordinates on the negative Z axis: nearlimit = -6000 um and farlimit = -6350 um, so the movement range defined between them is 350 um. Accordingly, when the coordinate of the current position of the lens 104 is smaller than the coordinate of the second set position, the current position of the lens 104 is judged to exceed the second set position. In fig. 13, farlimit is the position one depth of field L below the lowest position of the upper surface 205 of the front panel 202 of the reaction apparatus 200, where L is the depth of field of the lens 104.
It should be noted that, in other embodiments, the coordinate position corresponding to the first setting position and/or the second setting position may be specifically set according to the actual situation, and is not specifically limited herein.
In the present embodiment, the focusing module 106 includes a light source 116 and a light sensor 118; the light source 116 emits light onto the first object, and the light sensor 118 receives the light reflected by the first object. The focusing module 106 can thus both emit and receive light.
Specifically, in the embodiment of the present invention, the light source 116 may be an infrared light source and the light sensor 118 a photodiode, giving low cost and high detection accuracy. Infrared light emitted by the light source 116 enters the objective lens 110 after reflection by the dichroic beam splitter and is projected through the objective lens 110 onto the sample 300 and the object; the first object may reflect it back. In the embodiment of the present invention, when the sample 300 comprises the carrier 200 and the sample 302 to be measured, the received light reflected by the first object is the light reflected by the lower surface 204 of the front panel of the carrier 200.
Whether infrared light reflected by the first object can enter the objective lens 110 and be received by the light sensor 118 depends primarily on the distance between the objective lens 110 and the object. Therefore, when the focusing module 106 determines that it has received the infrared light reflected by the first object, it can be concluded that the distance between the objective lens 110 and the first object is within the range suitable for optical imaging and usable for imaging by the imaging device 102. In one example, the distance is 20-40 um.
At this time, the lens 104 is moved by a second set step smaller than the first set step, so that the imaging system can find the optimal imaging position of the lens 104 in a smaller range.
In the present embodiment, the sharpness value of the image may serve as the evaluation value for focusing. In one example, whether the sharpness value of the image captured by the imaging device 102 reaches the set threshold may be judged with a hill-climbing algorithm from image processing: by calculating the sharpness value of the image output by the imaging device 102 at each position of the objective lens 110, it is judged whether the sharpness value has reached its maximum at the peak, and hence whether the lens 104 has reached the clear-plane position for imaging. It will be appreciated that in other embodiments other image processing algorithms may be used to judge whether the sharpness value has peaked.
When the sharpness value of the image reaches the set threshold, the current position of the lens 104 is saved as the save position, so that the imaging device 102 can output a clear image when photographing during the sequencing reaction.
In this embodiment, referring to fig. 18, when the focusing module 106 receives the light reflected by the object, focusing further includes step S16: moving the lens 104 toward the first object by a third set step that is smaller than the first set step and larger than the second set step, calculating a first light intensity parameter from the light intensity received by the focusing module 106, and judging whether the first light intensity parameter is larger than a first set light intensity threshold; step S14 is performed when the first light intensity parameter is larger than the first set light intensity threshold. Comparing the first light intensity parameter with the first set light intensity threshold in this way excludes from focusing the interference of light signals whose contrast with the light reflected at the medium interface is very weak.
When the first light intensity parameter is not greater than the first set light intensity threshold, the lens 104 is moved to the first object along the optical axis OP by a third set step length.
In the present embodiment, the focusing module 106 includes two light sensors 118, the two light sensors 118 are used for receiving the light reflected by the first object, and the first light intensity parameter is an average value of the light intensities of the light received by the two light sensors 118. In this way, the first light intensity parameter is calculated by the average of the light intensities of the light received by the two light sensors 118, so that it is more accurate to exclude weak light signals.
Specifically, the first light intensity parameter may be denoted SUM, i.e. SUM = (PD1 + PD2)/2, where PD1 and PD2 denote the light intensities received by the two light sensors 118. In one example, the first set light intensity threshold nSum is 40.
In one example, the third set step size S2 is 0.005 mm. It is understood that, in other examples, the third setting step may also take other values, and is not limited in particular.
In another embodiment, referring to fig. 19, when the focusing module 106 receives the light reflected by the first object, the method further includes: S16, moving the lens 104 toward the first object by a third set step that is smaller than the first set step and larger than the second set step, calculating a first light intensity parameter from the light intensity received by the focusing module 106, and judging whether the first light intensity parameter is larger than a first set light intensity threshold; S17, when the first light intensity parameter is larger than the first set light intensity threshold, moving the lens 104 toward the object by a fourth set step that is smaller than the third set step and larger than the second set step, calculating a second light intensity parameter from the light intensity received by the focusing module 106, and judging whether the second light intensity parameter is smaller than a second set light intensity threshold; step S14 is performed when the second light intensity parameter is smaller than the second set light intensity threshold. Comparing the first light intensity parameter with the first set light intensity threshold thus excludes the interference of light signals whose contrast with the light reflected at the medium interface is very weak, while comparing the second light intensity parameter with the second set light intensity threshold excludes strong reflected signals from non-interface positions, such as light reflected by the oil surface/air of the objective lens 110.
When the first light intensity parameter is not greater than the first set light intensity threshold, the lens 104 is moved to the first object along the optical axis OP by a third set step length.
When the second light intensity parameter is not less than the second set light intensity threshold, the lens 104 is moved to the first object along the optical axis OP by a fourth set step.
In one example, the third setting step S2 is 0.005mm, and the fourth setting step S3 is 0.002 mm. It is understood that, in other examples, the third setting step and the fourth setting step may also adopt other values, and are not limited in particular.
In the present embodiment, the focusing module 106 includes two light sensors 118 for receiving the light reflected by the first object; the first light intensity parameter is the average of the light intensities received by the two light sensors 118, the light intensities received by the two light sensors 118 have a first difference, and the second light intensity parameter is the difference between the first difference and a set compensation value. Calculating the second light intensity parameter from the intensities received by the two light sensors 118 in this way makes the exclusion of strongly reflected light signals more accurate.
Specifically, the first light intensity parameter may be denoted SUM, i.e. SUM = (PD1 + PD2)/2, where PD1 and PD2 denote the light intensities received by the two light sensors 118; in one example, the first set light intensity threshold nSum is 40. The second light intensity parameter may be denoted err, with err = (PD1 - PD2) - offset, where (PD1 - PD2) is the first difference and offset is the set compensation value; in the ideal case the first difference is zero. In one example, the second set light intensity threshold nErr is 10 and the offset is 30.
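For illustration, the two checks can be combined as below; at_medium_interface() is an assumed helper name and the constants follow the example values:

    N_SUM, N_ERR, OFFSET = 40, 10, 30   # nSum, nErr and offset from the example

    def at_medium_interface(pd1, pd2):
        # Accept the reflection only if SUM = (PD1 + PD2)/2 exceeds nSum
        # (rejects weak stray light) and err = (PD1 - PD2) - offset stays
        # below nErr (rejects strong reflections from non-interface positions).
        light_sum = (pd1 + pd2) / 2
        err = (pd1 - pd2) - OFFSET
        return light_sum > N_SUM and err < N_ERR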
In this embodiment, when the lens 104 is moved by the second set step, it is judged whether the first sharpness value of the image corresponding to the current position of the lens 104 is greater than the second sharpness value of the image corresponding to the previous position. When the first sharpness value is greater and the difference between them exceeds the set difference, the lens 104 continues moving toward the first object by the second set step; when the first sharpness value is greater and the difference is less than the set difference, the lens 104 continues moving toward the first object by a fifth set step smaller than the second set step, so that the sharpness value of the image acquired by the imaging device 102 reaches the set threshold; when the second sharpness value is greater and the difference exceeds the set difference, the lens 104 moves away from the first object by the second set step; when the second sharpness value is greater and the difference is less than the set difference, the lens 104 moves away from the first object by the fifth set step, so that the sharpness value of the image acquired by the imaging device 102 reaches the set threshold. The position of the lens 104 corresponding to the peak of the sharpness value can thus be found accurately, so that the image output by the imaging device is clear.
Specifically, the second set step may be taken as a coarse step Z1, the fifth set step as a fine step Z2, and a coarse adjustment range Z3 may be set. The coarse adjustment range Z3 allows the movement of the lens 104 to be stopped when the sharpness value of the image fails to reach the set threshold, saving resources.
The coarse adjustment range Z3 is the adjustment range starting from the current position T of the lens 104, i.e. the range (T, T + Z3) on the Z axis. The lens 104 is first moved within (T, T + Z3) in a first direction (e.g. toward the first object along the optical axis OP) by the step Z1, and the first sharpness value R1 of the image captured by the imaging device 102 at the current position of the lens 104 is compared with the second sharpness value R2 of the image captured at the previous position.
When R1 > R2 and R1 - R2 > R0, the sharpness value is approaching the set threshold but is still far from it, so the lens 104 continues moving in the first direction by the step Z1 to approach the set threshold quickly.
When R1 > R2 and R1 - R2 < R0, the sharpness value is approaching the set threshold and is already close to it, so the lens 104 moves in the first direction by the step Z2, approaching the set threshold in smaller steps.
When R2 > R1 and R2 - R1 > R0, the sharpness value has crossed the set threshold and is moving away from it, so the lens 104 moves by the step Z1 in a second direction opposite to the first (e.g. away from the first object along the optical axis OP) to approach the set threshold quickly.
When R2 > R1 and R2 - R1 < R0, the sharpness value has crossed the set threshold but remains close to it, so the lens 104 moves in the second direction by the step Z2, approaching the set threshold in smaller steps.
In this embodiment, the fifth set step can be adjusted so that the step used when approaching the set threshold is neither too large nor too small as the lens 104 moves.
In one example, T is 0, Z1 is 100, Z2 is 40, Z3 is 2100, and the adjustment range is (0, 2100). It should be noted that these values serve as the measure by which the lens 104 is moved during image acquisition by the imaging device 102 and are related to the light intensity. The set threshold may be understood as the peak of the focus curve, or as a range centered on or containing the peak.
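For illustration only, a hedged sketch of the four movement rules with the example values; sharpness(pos) stands in for capturing and evaluating an image at a lens position, r0 (the set difference R0) and the iteration guard are assumed additions not specified by this embodiment:

    def climb(sharpness, threshold, t=0, z1=100, z2=40, z3=2100, r0=5.0):
        # Coarse/fine climb within (t, t + z3); r0 is the set difference R0.
        prev_pos, pos = t, t + z1
        r2 = sharpness(prev_pos)
        for _ in range(10000):                     # guard against non-convergence
            if not (t <= pos <= t + z3):
                return None                        # left the coarse range Z3
            r1 = sharpness(pos)
            if r1 >= threshold:
                return pos                         # sharpness reached the threshold
            if r1 > r2:                            # still rising toward the peak
                step = z1 if r1 - r2 > r0 else z2
            else:                                  # crossed the peak: reverse
                step = -z1 if r2 - r1 > r0 else -z2
            prev_pos, r2, pos = pos, r1, pos + step
        return None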
Referring to fig. 20, an imaging system 100 according to an embodiment of the present invention images an object; the imaging system includes a lens 104 and a control device 101, the object includes a plurality of first objects and a second object, and the control device 101 is configured to:
moving the lens 104 and/or the plurality of first objects, and sequentially focusing the plurality of first objects by using an imaging system to obtain a plurality of coordinates, wherein the plurality of coordinates respectively correspond to focal plane positions of the plurality of first objects;
establishing a relationship according to the plurality of coordinates;
with the imaging system, the lens 104 and/or the second object is moved based on the relationship to obtain a sharp image of the second object without focusing.
In the imaging system 100, the relationship is determined by focusing of the plurality of first objects, when a second object is imaged, the focal plane can be directly predicted according to the relationship, the image of the second object is acquired without focusing, and the method is particularly suitable for the situation that the number of the objects is large and the images of the objects are expected to be acquired quickly and continuously.
It should be noted that the explanation and description of the technical features and advantages of the imaging method in any of the above embodiments and examples are also applicable to the imaging system 100 of the present embodiment, and are not detailed here to avoid redundancy.
In some embodiments, the control device 101 is configured to:
obtain a sharp image of at least one of the plurality of first objects without focusing.
In some embodiments, the relationship is defined by a plurality of first objects, and the second object is located on a trajectory on which the plurality of first objects are located.
In some embodiments, the trajectory in which the plurality of first objects are located includes a first trajectory and a second trajectory.
In some embodiments, the trajectory along which the plurality of first objects are located is linear or non-linear.
In some embodiments, the imaging system 100 includes a lens mount, the lens 104 is rotatably mounted to the lens mount, and the lens 104 includes a rotation axis about which it rotates.
In some embodiments, the lens 104 is fixed, the lens 104 includes an optical axis, and the first object and/or the second object are moved in a direction perpendicular or parallel to the optical axis.
In some embodiments, the imaging system 100 includes an imaging device 102 and a stage 103, the imaging device 102 includes a lens 104 and a focusing module 106, the lens 104 includes an optical axis, the lens 104 can move along the optical axis to perform focusing, and the first object and the second object are located on the stage 103.
In some embodiments, the control device 101 is configured to perform the following focusing steps:
(a) emitting light onto the first object by using the focusing module 106;
(b) moving the lens 104 to a first set position;
(c) moving the lens 104 from the first setting position to the first object by the first setting step length and determining whether the focusing module 106 receives the light reflected by the first object;
(d) when the focusing module 106 receives the light reflected by the first object, the lens 104 is moved from the current position to a second set position, the second set position is located in a first range, and the first range includes the current position and allows the lens 104 to move in the optical axis direction;
(e) moving the lens 104 from a second set position by a second set step size, at each step position obtaining an image of the first object with the imaging device 102, the second set step size being smaller than the first set step size;
(f) and evaluating the image of the first object, and realizing focusing according to the obtained image evaluation result.
In some embodiments, the first range includes a first interval and a second interval which are opposite to each other with respect to the current position, the second interval is defined to be closer to the first object, and the step (e) includes:
(i) moving the lens 104 from the second set position to a direction away from the first subject when the second set position is in the second section, obtaining an image of the first subject with the imaging device 102 at each step position; or
(ii) When the second set position is in the first zone, the lens 104 is moved from the second set position to a direction close to the first subject, and an image of the first subject is obtained with the imaging device 102 at each step position.
In certain embodiments, step (f) comprises: comparing the image evaluation result with a preset condition, and if the image evaluation result meets the preset condition, saving the position of the lens 104 corresponding to the image;
if the image evaluation result does not satisfy the preset condition, the lens 104 is moved to a third setting position, and the third setting position is located in another section of the first range, which is different from the section where the second setting position is located.
In some embodiments, the image evaluation result includes a first evaluation value and a second evaluation value, the second set step includes a coarse step and a fine step, and step (f) includes: moving the lens 104 in the coarse step until the first evaluation value of the image at the corresponding position is not more than the first threshold, then continuing in the fine step until the second evaluation value of the image at the corresponding position reaches its maximum, and saving the position of the lens 104 corresponding to the image with the maximal second evaluation value.
In some embodiments, the image evaluation result includes a first evaluation value, a second evaluation value, and a third evaluation value, and the image includes a plurality of pixels;
the preset condition is that the number of the bright spots on the image is larger than a preset value, the first evaluation value of the image at the corresponding position is not larger than a first threshold value, and the second evaluation value of the image at the corresponding position is the largest among the second evaluation values of the N images before and after the image at the corresponding position; or
The preset condition is that the number of the bright spots on the image is smaller than a preset value, the first evaluation value of the image at the corresponding position is not larger than a first threshold, and the third evaluation value of the image at the corresponding position is the largest among the third evaluation values of the N images before and after the current image.
In some embodiments, the imaging system includes a bright spot detection module to:
bright spot detection is performed on the image using a k1 × k2 matrix: a matrix whose central pixel value is not less than any non-central pixel value of the matrix is judged to correspond to a bright spot, where k1 and k2 are both odd numbers greater than 1 and the k1 × k2 matrix comprises k1 × k2 pixel points.
In some embodiments, the central pixel value of the matrix corresponding to a bright spot is greater than a first preset value, any pixel value in the non-central area of the matrix is greater than a second preset value, and the first preset value and the second preset value are related to the average pixel value of the image.
In some embodiments, the first evaluation value is determined by counting the sizes of the connected components corresponding to the bright spots of the image, where the size of the connected component Area corresponding to a bright spot of an image is A × B, A is the size of the connected component along the row through the center of the matrix corresponding to the bright spot, B is the size along the column through that center, and connected pixel points larger than the average pixel value of the image are defined as one connected component.
In some embodiments, the second evaluation value and/or the third evaluation value is determined from the Scores of the bright spots of the image, where the Score of a bright spot is ((k1 × k2 - 1) × CV - EV)/((CV + EV)/(k1 × k2)), CV denotes the central pixel value of the matrix corresponding to the bright spot, and EV denotes the sum of the non-central pixel values of that matrix.
In some embodiments, the focusing module 106 includes a light source 116 and a light sensor 118, the light source 116 is configured to emit light onto the first object, and the light sensor 118 is configured to receive light reflected by the first object.
In some embodiments, when the focusing module 106 receives the light reflected by the first object, the control device 101 is further configured to:
the lens 104 is moved to the first object by a third set step which is smaller than the first set step and larger than the second set step, and a first light intensity parameter is calculated according to the light intensity of the light received by the focusing module 106, and whether the first light intensity parameter is larger than a first set light intensity threshold is judged;
when the first light intensity parameter is greater than the first set light intensity threshold, the lens 104 is moved from the current position to the second set position.
In some embodiments, the focusing module 106 includes two light sensors 118, the two light sensors 118 are configured to receive light reflected by the first object, and the first light intensity parameter is an average value of light intensities of the light received by the two light sensors 118.
In some embodiments, when the lens 104 is moved, the control device 101 is configured to: judge whether the current position of the lens 104 exceeds a fourth set position;
when the current position of the lens 104 exceeds the fourth set position, the movement of the lens 104 is stopped.
In some embodiments, the control device 101 is configured to perform the following focusing steps:
emitting light onto the first object by using the focusing module 106;
moving the lens 104 to a first set position;
moving the lens 104 from the first set position toward the first object in the first set step and judging whether the focusing module 106 receives light reflected by the first object;
when the focusing module 106 receives the light reflected by the first object, the lens 104 is moved in a second set step smaller than the first set step, the imaging device 102 captures images of the first object, and it is judged whether the sharpness value of the image captured by the imaging device 102 reaches a set threshold;
when the sharpness value of the image reaches the set threshold, the current position of the lens 104 is saved as the saved position.
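The two-phase routine above might look as follows in outline; all callback names (move_to, move_by, light_detected, capture, sharpness) are assumptions standing in for the focusing module, the motion stage, and the imaging device:

```python
def focus_routine(move_to, move_by, light_detected, capture, sharpness,
                  first_set_position, first_step, second_step,
                  sharpness_threshold, max_steps=2000):
    """Coarse approach in the first set step until reflected light is seen,
    then fine steps in the smaller second set step until the captured image
    reaches the set sharpness threshold; returns the position to save."""
    move_to(first_set_position)
    position = first_set_position
    for _ in range(max_steps):           # phase 1: coarse search for light
        if light_detected():
            break
        move_by(first_step)
        position += first_step
    else:
        return None                      # no reflected light found
    for _ in range(max_steps):           # phase 2: fine sharpness search
        if sharpness(capture()) >= sharpness_threshold:
            return position              # position to be saved
        move_by(second_step)
        position += second_step
    return None
```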
In some embodiments, the focusing module 106 includes a light source 116 and a light sensor 118, the light source 116 is configured to emit light onto the first object, and the light sensor 118 is configured to receive light reflected by the first object.
In some embodiments, when the focusing module 106 receives the light reflected by the first object, the control device 101 is further configured to:
the lens 104 is moved toward the first object in a third set step that is smaller than the first set step and larger than the second set step, a first light intensity parameter is calculated from the intensity of the light received by the focusing module 106, and it is judged whether the first light intensity parameter is greater than a first set light intensity threshold;
when the first light intensity parameter is greater than the first set light intensity threshold, the lens 104 is moved in the second set step, the imaging device 102 captures an image of the first object, and it is judged whether the sharpness value of the image captured by the imaging device 102 reaches the set threshold.
In some embodiments, the focusing module 106 includes two light sensors 118, the two light sensors 118 are configured to receive light reflected by the first object, and the first light intensity parameter is an average of light intensities of the light received by the two light sensors 118.
In some embodiments, when the focusing module 106 receives the light reflected by the first object, the control device 101 is further configured to:
the lens 104 is moved toward the first object in a third set step that is smaller than the first set step and larger than the second set step, a first light intensity parameter is calculated from the intensity of the light received by the focusing module 106, and it is judged whether the first light intensity parameter is greater than a first set light intensity threshold;
when the first light intensity parameter is greater than the first set light intensity threshold, the lens 104 is moved toward the first object in a fourth set step that is smaller than the third set step and larger than the second set step, a second light intensity parameter is calculated from the intensity of the light received by the focusing module 106, and it is judged whether the second light intensity parameter is smaller than a second set light intensity threshold;
when the second light intensity parameter is smaller than the second set light intensity threshold, the lens 104 is moved in the second set step, the imaging device 102 captures an image of the first object, and it is judged whether the sharpness value of the image captured by the imaging device 102 reaches the set threshold.
In some embodiments, the focusing module 106 includes two light sensors 118, the two light sensors 118 are configured to receive light reflected by the first object, the first light intensity parameter is an average value of light intensities of the light received by the two light sensors 118, the light intensities of the light received by the two light sensors 118 have a first difference, and the second light intensity parameter is a difference between the first difference and the set compensation value.
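The two intensity parameters reduce to a few lines; the sign convention for the first difference is an assumption, since the text does not say whether it is signed or absolute:

```python
def first_light_intensity_parameter(s1, s2):
    """Average of the intensities received by the two light sensors."""
    return (s1 + s2) / 2.0

def second_light_intensity_parameter(s1, s2, compensation):
    """Difference between the first difference of the two sensor readings
    and the set compensation value, per the description above."""
    first_difference = s1 - s2           # assumed to be a signed difference
    return first_difference - compensation
```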
In some embodiments, the control device 101 is configured to: when the lens 104 is moved in the second set step, judge whether a first sharpness value of the image corresponding to the current position of the lens 104 is greater than a second sharpness value of the image corresponding to the previous position of the lens 104;
when the first sharpness value is greater than the second sharpness value and the sharpness difference between them is greater than a set difference, the lens 104 continues to move toward the first object in the second set step;
when the first sharpness value is greater than the second sharpness value and the sharpness difference between them is smaller than the set difference, the lens 104 continues to move toward the first object in a fifth set step smaller than the second set step, so that the sharpness value of the image captured by the imaging device 102 reaches the set threshold;
when the second sharpness value is greater than the first sharpness value and the sharpness difference between them is greater than the set difference, the lens 104 is moved away from the first object in the second set step;
when the second sharpness value is greater than the first sharpness value and the sharpness difference between them is smaller than the set difference, the lens 104 is moved away from the first object in the fifth set step, so that the sharpness value of the image captured by the imaging device 102 reaches the set threshold.
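The four cases reduce to a small decision rule; below is a sketch (not the patent's reference code) that returns a signed step, where positive means toward the first object and the fifth set step is smaller than the second:

```python
def next_fine_step(prev_sharpness, curr_sharpness, set_difference,
                   second_step, fifth_step):
    """Pick the next lens move from two consecutive sharpness values,
    following the four cases described above."""
    diff = abs(curr_sharpness - prev_sharpness)
    if curr_sharpness > prev_sharpness:
        # Image is getting sharper: keep approaching, slow down near peak.
        return second_step if diff > set_difference else fifth_step
    # Image is getting blurrier: back away, slow down when close.
    return -second_step if diff > set_difference else -fifth_step
```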
In some embodiments, the control device 101 is configured to: when the lens 104 moves, judging whether the current position of the lens 104 exceeds a second set position;
when the current position of the lens 104 exceeds the second set position, movement of the lens 104 is stopped or the focusing step is performed.
A sequencing device according to an embodiment of the present invention includes the imaging system 100 according to any of the above embodiments.
A computer-readable storage medium of an embodiment of the present invention stores a program for execution by a computer, where executing the program includes performing the steps of the method of any of the above embodiments.
An imaging system 100 of an embodiment of the invention is used to image an object; the imaging system comprises a lens 104 and a control device 101, the object comprises a plurality of first objects and second objects, the control device 101 comprises a computer-executable program, and execution of the computer-executable program comprises performing the steps of the method of any of the above embodiments.
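For concreteness: if the relationship built from the focal-plane coordinates is taken to be a linear fit of focus position against stage position (the patent leaves the model open, so this is an assumption), prediction without refocusing could look like:

```python
import numpy as np

def fit_focus_relationship(coords):
    """Least-squares fit of z = a*x + b*y + c to the focal-plane
    coordinates (x, y, z) obtained by focusing on the first objects;
    needs at least three non-collinear points."""
    pts = np.asarray(coords, dtype=float)
    design = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(design, pts[:, 2], rcond=None)
    return a, b, c

def predict_focus(model, x, y):
    """Predict the lens position for a second object at stage position
    (x, y) so a sharp image can be taken without focusing."""
    a, b, c = model
    return a * x + b * y + c
```

For example, fitting three measured focal-plane coordinates and then calling predict_focus(model, x, y) yields the lens position for any second object lying on the same trajectory.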
A computer program product of the embodiments of the present invention contains instructions, which when executed by a computer, cause the computer to perform the steps of the method of any of the above embodiments.
In the description herein, references to the description of the terms "one embodiment," "certain embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In addition, each functional unit in the embodiments of the present invention may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. The integrated module, if implemented as a software functional module and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and not to be construed as limiting the present invention, and those skilled in the art can make changes, modifications, substitutions and alterations to the above embodiments within the scope of the present invention.

Claims (10)

1. An imaging method for imaging an object with an imaging system, the imaging system including a lens, the object including a plurality of first objects and second objects, the imaging method comprising:
moving the lens and/or the plurality of first objects, and sequentially focusing the plurality of first objects by using the imaging system to obtain a plurality of coordinates, wherein the plurality of coordinates respectively correspond to focal plane positions of the plurality of first objects;
establishing a relationship according to the plurality of coordinates;
moving, with the imaging system, the lens and/or the second object based on the relationship to obtain a sharp image of the second object without focusing.
2. The method of claim 1, wherein the method comprises:
obtaining a sharp image of at least one of the plurality of first objects without focusing;
optionally, the relationship is defined by the plurality of first objects, the second object being located on a trajectory on which the plurality of first objects are located;
optionally, the tracks in which the plurality of first objects are located include a first track and a second track;
optionally, the trajectory of the first objects is linear or non-linear;
optionally, the imaging system comprises a lens mount, the lens being rotatably mounted on the lens mount, the lens comprising an axis of rotation along which the lens rotates;
optionally, the lens is fixed, the lens comprising an optical axis, the first object and/or the second object being moved in a direction perpendicular or parallel to the optical axis.
3. The method of claim 1, wherein the imaging system comprises an imaging device and a stage, the imaging device comprising the lens and a focusing module, the lens comprising an optical axis, the lens being movable in a direction of the optical axis for the focusing, the first object and the second object being located on the stage;
optionally, the focusing comprises the steps of:
(a) emitting light onto the first object by using the focusing module;
(b) moving the lens to a first set position;
(c) enabling the lens to move to the first object from the first set position by a first set step length and judging whether the focusing module receives light reflected by the first object;
(d) moving the lens from a current position to a second set position when the focusing module receives light reflected by the first object, the second set position being within a first range, the first range being a range including the current position that allows the lens to move in the optical axis direction;
(e) moving the lens from the second setting position by a second setting step length, and obtaining an image of the first object by the imaging device at each step position, wherein the second setting step length is smaller than the first setting step length;
(f) evaluating the image of the first object, and realizing focusing according to the obtained image evaluation result;
optionally, with reference to the current position, the first range includes a first interval and a second interval which are opposite to each other, and the second interval is defined to be closer to the first object, and the step (e) includes:
(i) when the second set position is in the second section, moving the lens from the second set position to a direction away from the first object, and acquiring an image of the first object by the imaging device at each step position; or
(ii) Moving the lens from the second setting position to a direction approaching the first object when the second setting position is in the first section, and obtaining an image of the first object with the imaging device at each step position;
optionally, step (f) comprises: comparing the image evaluation result with a preset condition, and if the image evaluation result meets the preset condition, saving the position of a lens corresponding to the image;
if the image evaluation result does not meet the preset condition, moving the lens to a third set position, wherein the third set position is located in another interval, different from the interval where the second set position is located, in the first range;
optionally, the image evaluation result includes a first evaluation value and a second evaluation value, the second set step includes a coarse step and a fine step, and step (f) comprises: moving the lens in the coarse step until the first evaluation value of the image at the corresponding position is not larger than a first threshold, then continuing to move the lens in the fine step until the second evaluation value of the image at the corresponding position reaches its maximum, and saving the lens position corresponding to the image whose second evaluation value is the maximum;
optionally, the image evaluation result includes a first evaluation value, a second evaluation value, and a third evaluation value, and the image includes a plurality of pixels;
the preset condition is that the number of bright spots on the image is greater than a preset value, the first evaluation value of the image at the corresponding position is not greater than a first threshold value, and the second evaluation value of the image at the corresponding position is the largest among the second evaluation values of the N images before and after the image at the corresponding position; or
The preset condition is that the number of bright spots on the image is smaller than the preset value, the first evaluation value of the image at the corresponding position is not larger than the first threshold, and the third evaluation value of the image at the corresponding position is the largest among the third evaluation values of the N images before and after the current image;
optionally, detecting bright spots on the image using:
performing bright spot detection on the image by using a k1 x k2 matrix, wherein a matrix whose central pixel value is not less than any non-central pixel value of the matrix corresponds to a bright spot, both k1 and k2 are odd numbers greater than 1, and the k1 x k2 matrix comprises k1 x k2 pixel points;
optionally, a central pixel value of the matrix corresponding to one bright spot is greater than a first preset value, any pixel value of a non-central pixel of the matrix is greater than a second preset value, and the first preset value and the second preset value are related to an average pixel value of the image;
optionally, the first evaluation value is determined by counting the sizes of the connected components corresponding to the bright spots of the images, where the size of the connected component corresponding to one bright spot of the image is Area = A × B, A represents the size of the connected component along the row through the center of the matrix corresponding to the bright spot, B represents the size of the connected component along the column through that center, and connected pixel points larger than the average pixel value of the image are defined as one connected component;
optionally, the second evaluation value and/or the third evaluation value is determined by counting the scores of the bright spots of the images, where the score of one bright spot of the image is Score = ((k1 × k2 - 1) × CV - EV) / ((CV + EV) / (k1 × k2)), CV represents the central pixel value of the matrix corresponding to the bright spot, and EV represents the sum of the non-central pixel values of that matrix;
optionally, the focusing module comprises a light source and a light sensor, the light source is used for emitting the light to the first object, and the light sensor is used for receiving the light reflected by the first object;
optionally, when the focusing module receives the light reflected by the first object, the focusing further comprises:
enabling the lens to move towards the first object by a third set step length which is smaller than the first set step length and larger than the second set step length, calculating a first light intensity parameter according to the light intensity of the light received by the focusing module, and judging whether the first light intensity parameter is larger than a first set light intensity threshold value or not;
when the first light intensity parameter is larger than the first set light intensity threshold value, moving the lens from the current position to the second set position;
optionally, the focusing module comprises two light sensors, the two light sensors are used for receiving the light reflected by the first object, and the first light intensity parameter is an average value of light intensities of the light received by the two light sensors;
optionally, when the lens moves, judging whether the current position of the lens exceeds a fourth set position;
and when the current position of the lens exceeds the fourth set position, stopping moving the lens.
4. The method of claim 3, wherein the focusing comprises the steps of:
emitting light onto the first object by using the focusing module;
moving the lens to a first set position;
enabling the lens to move from the first set position to the first object in a first set step length and judging whether the focusing module receives light reflected by the first object;
when the focusing module receives light reflected by the first object, the lens is moved by a second set step length smaller than the first set step length, the imaging device is used for carrying out image acquisition on the first object, and whether the sharpness value of the image acquired by the imaging device reaches a set threshold value or not is judged;
when the sharpness value of the image reaches the set threshold value, saving the current position of the lens as a saving position;
optionally, the focusing module comprises a light source for emitting light onto the first object and a light sensor for receiving light reflected by the first object;
optionally, when the focusing module receives the light reflected by the first object, the focusing further comprises:
enabling the lens to move by a third set step length which is smaller than the first set step length and larger than the second set step length, calculating a first light intensity parameter according to the light intensity of the light received by the focusing module, and judging whether the first light intensity parameter is larger than a first set light intensity threshold value;
when the first light intensity parameter is larger than the first set light intensity threshold, moving the lens by the second set step length, acquiring an image of the first object by using the imaging device, and judging whether the sharpness value of the image acquired by the imaging device reaches the set threshold;
optionally, the focusing module comprises two light sensors for receiving the light reflected by the first object, and the first light intensity parameter is an average value of light intensities of the light received by the two light sensors;
optionally, when the focusing module receives the light reflected by the first object, the focusing further comprises the following steps:
enabling the lens to move by a third set step length which is smaller than the first set step length and larger than the second set step length, calculating a first light intensity parameter according to the light intensity of the light received by the focusing module, and judging whether the first light intensity parameter is larger than a first set light intensity threshold value;
when the first light intensity parameter is larger than the first set light intensity threshold, moving the lens toward the first object by a fourth set step length which is smaller than the third set step length and larger than the second set step length, calculating a second light intensity parameter according to the light intensity of the light received by the focusing module, and judging whether the second light intensity parameter is smaller than a second set light intensity threshold;
when the second light intensity parameter is smaller than the second set light intensity threshold, moving the lens by the second set step length, acquiring an image of the first object by using the imaging device, and judging whether the sharpness value of the image acquired by the imaging device reaches the set threshold;
optionally, the focusing module comprises two light sensors, the two light sensors are used for receiving the light reflected by the first object, the first light intensity parameter is an average value of light intensities of the light received by the two light sensors, the light intensities of the light received by the two light sensors have a first difference, and the second light intensity parameter is a difference between the first difference and a set compensation value;
optionally, when the lens is moved by the second set step length, determining whether a first sharpness value of the image corresponding to a current position of the lens is greater than a second sharpness value of the image corresponding to a previous position of the lens;
when the first sharpness value is greater than the second sharpness value and the sharpness difference between the first sharpness value and the second sharpness value is greater than a set difference, continuing to move the lens toward the first object at the second set step size;
when the first sharpness value is larger than the second sharpness value and the sharpness difference between the first sharpness value and the second sharpness value is smaller than the set difference, continuously moving the lens toward the first object in a fifth set step smaller than the second set step to make the sharpness value of the image acquired by the imaging device reach the set threshold;
moving the lens away from the first object by the second set step size when the second sharpness value is greater than the first sharpness value and a sharpness difference between the second sharpness value and the first sharpness value is greater than the set difference;
moving the lens away from the first object in the fifth set step to bring the sharpness value of the image acquired by the imaging device to the set threshold when the second sharpness value is greater than the first sharpness value and the sharpness difference between the second sharpness value and the first sharpness value is less than the set difference;
optionally, when the lens moves, judging whether the current position of the lens exceeds a second set position;
and when the current position of the lens exceeds the second set position, stopping moving the lens or carrying out the focusing step.
5. An imaging system for imaging an object, the imaging system comprising a lens and control means, the object comprising a plurality of first objects and second objects, the control means being arranged to:
moving the lens and/or the plurality of first objects, and sequentially focusing the plurality of first objects by using the imaging system to obtain a plurality of coordinates, wherein the plurality of coordinates respectively correspond to focal plane positions of the plurality of first objects;
establishing a relationship according to the plurality of coordinates;
moving, with the imaging system, the lens and/or the second object based on the relationship to obtain a sharp image of the second object without focusing;
optionally, the control device is configured to:
obtain a sharp image of at least one of the plurality of first objects without focusing;
optionally, the relationship is defined by the plurality of first objects, the second object being located on a trajectory on which the plurality of first objects are located;
optionally, the tracks in which the plurality of first objects are located include a first track and a second track;
optionally, the trajectory of the first objects is linear or non-linear;
optionally, the imaging system comprises a lens mount, the lens being rotatably mounted on the lens mount, the lens comprising an axis of rotation along which the lens rotates;
optionally, the lens is fixed, the lens comprising an optical axis, the first object and/or the second object being moved in a direction perpendicular or parallel to the optical axis.
6. The system of claim 5, wherein the imaging system comprises an imaging device and a stage, the imaging device comprising the lens and a focusing module, the lens comprising an optical axis, the lens being movable in a direction of the optical axis for the focusing, the first object and the second object being located on the stage;
optionally, the control device is configured to perform the following focusing steps:
(a) emitting light onto the first object by using the focusing module;
(b) moving the lens to a first set position;
(c) enabling the lens to move to the first object from the first set position by a first set step length and judging whether the focusing module receives light reflected by the first object;
(d) moving the lens from a current position to a second set position when the focusing module receives light reflected by the first object, the second set position being within a first range, the first range being a range including the current position that allows the lens to move in the optical axis direction;
(e) moving the lens from the second setting position by a second setting step length, and obtaining an image of the first object by the imaging device at each step position, wherein the second setting step length is smaller than the first setting step length;
(f) evaluating the image of the first object, and realizing focusing according to the obtained image evaluation result;
optionally, with reference to the current position, the first range includes a first interval and a second interval which are opposite to each other, and the second interval is defined to be closer to the first object, and the step (e) includes:
(i) when the second set position is in the second section, moving the lens from the second set position to a direction away from the first object, and acquiring an image of the first object by the imaging device at each step position; or
(ii) Moving the lens from the second setting position to a direction approaching the first object when the second setting position is in the first section, and obtaining an image of the first object with the imaging device at each step position;
optionally, step (f) comprises: comparing the image evaluation result with a preset condition, and if the image evaluation result meets the preset condition, saving the position of a lens corresponding to the image;
if the image evaluation result does not meet the preset condition, moving the lens to a third set position, wherein the third set position is located in another interval, different from the interval where the second set position is located, in the first range;
optionally, the image evaluation result includes a first evaluation value and a second evaluation value, the second set step includes a coarse step and a fine step, and step (f) comprises: moving the lens in the coarse step until the first evaluation value of the image at the corresponding position is not larger than a first threshold, then continuing to move the lens in the fine step until the second evaluation value of the image at the corresponding position reaches its maximum, and saving the lens position corresponding to the image whose second evaluation value is the maximum;
optionally, the image evaluation result includes a first evaluation value, a second evaluation value, and a third evaluation value, and the image includes a plurality of pixels;
the preset condition is that the number of bright spots on the image is greater than a preset value, the first evaluation value of the image at the corresponding position is not greater than a first threshold value, and the second evaluation value of the image at the corresponding position is the largest among the second evaluation values of the N images before and after the image at the corresponding position; or
The preset condition is that the number of bright spots on the image is smaller than the preset value, the first evaluation value of the image at the corresponding position is not larger than the first threshold, and the third evaluation value of the image at the corresponding position is the largest among the third evaluation values of the N images before and after the current image;
optionally, the imaging system comprises a bright spot detection module, the bright spot detection module is configured to:
performing bright spot detection on the image by using a k1 x k2 matrix, wherein a matrix whose central pixel value is not less than any non-central pixel value of the matrix corresponds to a bright spot, both k1 and k2 are odd numbers greater than 1, and the k1 x k2 matrix comprises k1 x k2 pixel points;
optionally, a central pixel value of the matrix corresponding to one bright spot is greater than a first preset value, any pixel value of a non-central pixel of the matrix is greater than a second preset value, and the first preset value and the second preset value are related to an average pixel value of the image;
optionally, the first evaluation value is determined by counting the sizes of the connected components corresponding to the bright spots of the images, where the size of the connected component corresponding to one bright spot of the image is Area = A × B, A represents the size of the connected component along the row through the center of the matrix corresponding to the bright spot, B represents the size of the connected component along the column through that center, and connected pixel points larger than the average pixel value of the image are defined as one connected component;
optionally, the second evaluation value and/or the third evaluation value is determined by counting the scores of the bright spots of the images, where the score of one bright spot of the image is Score = ((k1 × k2 - 1) × CV - EV) / ((CV + EV) / (k1 × k2)), CV represents the central pixel value of the matrix corresponding to the bright spot, and EV represents the sum of the non-central pixel values of that matrix;
optionally, the focusing module comprises a light source and a light sensor, the light source is used for emitting the light to the first object, and the light sensor is used for receiving the light reflected by the first object;
optionally, when the focusing module receives the light reflected by the first object, the control device is further configured to:
enabling the lens to move towards the first object by a third set step length which is smaller than the first set step length and larger than the second set step length, calculating a first light intensity parameter according to the light intensity of the light received by the focusing module, and judging whether the first light intensity parameter is larger than a first set light intensity threshold value or not;
when the first light intensity parameter is larger than the first set light intensity threshold value, moving the lens from the current position to the second set position;
optionally, the focusing module comprises two light sensors, the two light sensors are used for receiving the light reflected by the first object, and the first light intensity parameter is an average value of light intensities of the light received by the two light sensors;
optionally, when the lens moves, the control device is configured to: judging whether the current position of the lens exceeds a fourth set position;
stopping moving the lens when the current position of the lens exceeds the fourth set position;
optionally, the control device is configured to perform the following focusing steps:
emitting light onto the first object by using the focusing module;
moving the lens to a first set position;
enabling the lens to move from the first set position to the first object in a first set step length and judging whether the focusing module receives light reflected by the first object;
when the focusing module receives light reflected by the first object, the lens is moved by a second set step length smaller than the first set step length, the imaging device is used for carrying out image acquisition on the first object, and whether the sharpness value of the image acquired by the imaging device reaches a set threshold value or not is judged;
when the sharpness value of the image reaches the set threshold value, saving the current position of the lens as a saving position;
optionally, the focusing module comprises a light source for emitting light onto the first object and a light sensor for receiving light reflected by the first object;
optionally, when the focusing module receives light reflected by the first object, the control device is further configured to:
enabling the lens to move by a third set step length which is smaller than the first set step length and larger than the second set step length, calculating a first light intensity parameter according to the light intensity of the light received by the focusing module, and judging whether the first light intensity parameter is larger than a first set light intensity threshold value;
when the first light intensity parameter is larger than the first set light intensity threshold, moving the lens by the second set step length, acquiring an image of the first object by using the imaging device, and judging whether the sharpness value of the image acquired by the imaging device reaches the set threshold;
optionally, the focusing module comprises two light sensors for receiving the light reflected by the first object, and the first light intensity parameter is an average value of light intensities of the light received by the two light sensors;
optionally, when the focusing module receives light reflected by the first object, the control device is further configured to:
enabling the lens to move by a third set step length which is smaller than the first set step length and larger than the second set step length, calculating a first light intensity parameter according to the light intensity of the light received by the focusing module, and judging whether the first light intensity parameter is larger than a first set light intensity threshold value;
when the first light intensity parameter is larger than the first set light intensity threshold, moving the lens toward the first object by a fourth set step length which is smaller than the third set step length and larger than the second set step length, calculating a second light intensity parameter according to the light intensity of the light received by the focusing module, and judging whether the second light intensity parameter is smaller than a second set light intensity threshold;
when the second light intensity parameter is smaller than the second set light intensity threshold, moving the lens by the second set step length, acquiring an image of the first object by using the imaging device, and judging whether the sharpness value of the image acquired by the imaging device reaches the set threshold;
optionally, the focusing module comprises two light sensors, the two light sensors are used for receiving the light reflected by the first object, the first light intensity parameter is an average value of light intensities of the light received by the two light sensors, the light intensities of the light received by the two light sensors have a first difference, and the second light intensity parameter is a difference between the first difference and a set compensation value;
optionally, the control device is configured to: when the lens is moved by the second set step length, judging whether a first sharpness value of the image corresponding to the current position of the lens is larger than a second sharpness value of the image corresponding to the previous position of the lens;
when the first sharpness value is greater than the second sharpness value and the sharpness difference between the first sharpness value and the second sharpness value is greater than a set difference, continuing to move the lens toward the first object at the second set step size;
when the first sharpness value is larger than the second sharpness value and the sharpness difference between the first sharpness value and the second sharpness value is smaller than the set difference, continuously moving the lens toward the first object in a fifth set step smaller than the second set step to make the sharpness value of the image acquired by the imaging device reach the set threshold;
moving the lens away from the first object by the second set step size when the second sharpness value is greater than the first sharpness value and a sharpness difference between the second sharpness value and the first sharpness value is greater than the set difference;
moving the lens away from the first object in the fifth set step to bring the sharpness value of the image acquired by the imaging device to the set threshold when the second sharpness value is greater than the first sharpness value and the sharpness difference between the second sharpness value and the first sharpness value is less than the set difference;
optionally, the control device is configured to: when the lens moves, judging whether the current position of the lens exceeds a second set position;
and when the current position of the lens exceeds the second set position, stopping moving the lens or carrying out the focusing step.
7. A sequencing apparatus comprising the imaging system of any one of claims 5 to 6.
8. A computer-readable storage medium storing a program for execution by a computer, wherein executing the program comprises performing the steps of the method of any one of claims 1-4.
9. An imaging system for imaging an object, the imaging system comprising a lens and a control device, the object comprising a plurality of first objects and second objects, characterized in that the control device comprises a computer-executable program, execution of which comprises performing the steps of the method of any one of claims 1-4.
10. A computer program product comprising instructions, characterized in that, when said instructions are executed by a computer, said instructions cause said computer to carry out the steps of the method according to any one of claims 1 to 4.
CN201810813660.XA 2018-07-23 2018-07-23 Imaging method, device and system Pending CN112291469A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201810813660.XA CN112291469A (en) 2018-07-23 2018-07-23 Imaging method, device and system
PCT/CN2019/097272 WO2020020148A1 (en) 2018-07-23 2019-07-23 Imaging method, device and system
EP19841635.6A EP3829158A4 (en) 2018-07-23 2019-07-23 Imaging method, device and system
US17/262,663 US11368614B2 (en) 2018-07-23 2019-07-23 Imaging method, device and system
US17/746,838 US11575823B2 (en) 2018-07-23 2022-05-17 Imaging method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810813660.XA CN112291469A (en) 2018-07-23 2018-07-23 Imaging method, device and system

Publications (1)

Publication Number Publication Date
CN112291469A (en) 2021-01-29

Family

ID=74418966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810813660.XA Pending CN112291469A (en) 2018-07-23 2018-07-23 Imaging method, device and system

Country Status (1)

Country Link
CN (1) CN112291469A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115047610A (en) * 2022-08-17 2022-09-13 杭州德适生物科技有限公司 Chromosome karyotype analysis device and method for automatically fitting microscopic focusing plane

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6674574B1 (en) * 1999-09-24 2004-01-06 Olympus Corporation Focusing system for a microscope and a reflected illumination fluorescence microscope using the focusing system
US20040217257A1 (en) * 2003-05-01 2004-11-04 Eastman Kodak Company Scene-based method for determining focus
TW200804962A (en) * 2006-07-06 2008-01-16 Asia Optical Co Inc Semi-automatic zooming method for a manually-zooming optical component and a videotaping device using such method,
CN101303269A (en) * 2007-05-09 2008-11-12 奥林巴斯株式会社 Optical system evaluation apparatus, optical system evaluation method and program thereof
TW200934226A (en) * 2008-01-25 2009-08-01 Hon Hai Prec Ind Co Ltd Image capturing device and auto-focus method
US20120281132A1 (en) * 2010-11-08 2012-11-08 Yasunobu Ogura Image capturing device, image capturing method, program, and integrated circuit
CN102967983A (en) * 2012-11-07 2013-03-13 苏州科达科技股份有限公司 Automatic focusing method of camera
CN103513395A (en) * 2012-06-15 2014-01-15 中兴通讯股份有限公司 Passive auto-focusing method and device
US20140300724A1 (en) * 2000-05-03 2014-10-09 Leica Biosystems Imaging, Inc. Optimizing virtual slide image quality
CN104102069A (en) * 2013-04-11 2014-10-15 展讯通信(上海)有限公司 Focusing method and device of imaging system, and imaging system
CN104243815A (en) * 2014-08-25 2014-12-24 联想(北京)有限公司 Focusing method and electronic equipment
CN104503189A (en) * 2014-12-31 2015-04-08 信利光电股份有限公司 Automatic focusing method
CN104503188A (en) * 2014-12-31 2015-04-08 信利光电股份有限公司 Automatic focusing module and mobile equipment
CN105827944A (en) * 2015-11-25 2016-08-03 维沃移动通信有限公司 Focusing method and mobile terminal
TW201638622A (en) * 2015-04-28 2016-11-01 信泰光學(深圳)有限公司 Automatic focusing method and image capturing device utilizing the same
CN106257914A (en) * 2015-06-19 2016-12-28 奥林巴斯株式会社 Focus detection device and focus detecting method
CN106375647A (en) * 2015-07-23 2017-02-01 杭州海康威视数字技术股份有限公司 Method, device and system for adjusting camera back focus
CN106791387A * 2016-12-12 2017-05-31 中国航空工业集团公司洛阳电光设备研究所 A kind of high-definition camera Automatic focusing method that gondola is patrolled and examined for power network
CN207215686U (en) * 2017-09-20 2018-04-10 深圳市瀚海基因生物科技有限公司 Systems for optical inspection and Sequence Detection System
CN108024053A (en) * 2016-10-28 2018-05-11 奥林巴斯株式会社 Camera device, focus adjusting method and recording medium
CN108076268A (en) * 2016-11-15 2018-05-25 谷歌有限责任公司 The equipment, system and method for auto-focusing ability are provided based on object distance information

Similar Documents

Publication Publication Date Title
US11156823B2 (en) Digital microscope apparatus, method of searching for in-focus position thereof, and program
US11086118B2 (en) Self-calibrating and directional focusing systems and methods for infinity corrected microscopes
US11575823B2 (en) Imaging method, device and system
CN108693625B (en) Imaging method, device and system
CN106019550B (en) Dynamic focusing device and focusing tracking for the micro- scanning of high speed
CN113467067B (en) Automatic focusing method and device of microscopic imaging system based on multi-image area relation
CN102122055A (en) Laser-type automatic focusing device and focusing method thereof
US9851549B2 (en) Rapid autofocus method for stereo microscope
CN113219622A (en) Objective lens focusing method, device and system for panel defect detection
CN113115027A (en) Method and system for calibrating camera
CN108693624B (en) Imaging method, device and system
CN112322713B (en) Imaging method, device and system and storage medium
WO2018188440A1 (en) Imaging method, device and system
CN112291469A (en) Imaging method, device and system
CN112333378A (en) Imaging method, device and system
JP5471715B2 (en) Focusing device, focusing method, focusing program, and microscope
JP2009109682A (en) Automatic focus adjusting device and automatic focus adjusting method
KR102010818B1 (en) Apparatus for capturing images of blood cell
JP6312410B2 (en) Alignment apparatus, microscope system, alignment method, and alignment program
CN108693113B (en) Imaging method, device and system
JP5960006B2 (en) Sample analyzer, sample analysis method, sample analysis program, and particle track analyzer
JP2013174709A (en) Microscope device and virtual microscope device
US20230258919A1 (en) A method for analysing scanning efficacy
EP4025949A1 (en) Microscope and method for imaging an object using a microscope
CN114040194A (en) Method and device for testing dirt of camera module and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40035902)

RJ01 Rejection of invention patent application after publication

Application publication date: 20210129