WO2021130773A1 - Method and system for enabling whole slide imaging - Google Patents


Info

Publication number
WO2021130773A1
Authority
WO
WIPO (PCT)
Prior art keywords
objective
slide
images
moving
image
Application number
PCT/IN2020/051046
Other languages
French (fr)
Inventor
Adarsh Natarajan
Harinarayanan KURUTHIKADAVATH KURUSSITHODI
Abhay Kumar
Mukesh MALVIA
Shivananda G MUDIYAPPANAVARA
Original Assignee
Adarsh Natarajan
Application filed by Adarsh Natarajan filed Critical Adarsh Natarajan
Publication of WO2021130773A1 publication Critical patent/WO2021130773A1/en

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 21/00 Microscopes
    • G02B 21/24 Base structure
    • G02B 21/241 Devices for focusing
    • G02B 21/244 Devices for focusing using image analysis techniques
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 21/00 Microscopes
    • G02B 21/36 Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B 21/365 Control or image processing arrangements for digital or video microscopes
    • G02B 21/367 Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison

Definitions

  • Figure 1 illustrates an exemplary architecture of a system for enabling whole slide imaging of a slide in accordance with embodiments of the present disclosure
  • Figure 2 illustrates a block diagram of the whole slide imaging system of Figure 1 in accordance with embodiments of the present disclosure
  • Figure 3 illustrates exemplary steps of a method performed by the whole slide imaging system in accordance with embodiments of the present disclosure
  • Figure 4 illustrates a perspective view of an objective of the whole slide imaging system moving in a range of motion initially along the z-axis and along x-axis in accordance with embodiments of the present disclosure
  • Figure 5 illustrates a perspective view of the objective moving in the range of motion along the z-axis and x-axis simultaneously in accordance with embodiments of the present disclosure.
  • Figure 1 illustrates an exemplary architecture of a system for enabling whole slide imaging of a slide in accordance with embodiments of the present disclosure.
  • the exemplary architecture of the system 100 comprises a plurality of components such as a whole slide imaging system 101, a user device 102, and data repository 103.
  • the whole slide imaging system 101, the user device 102, and the data repository 103 are communicatively coupled via network 104.
  • the network 104 can be a LAN (local area network), WAN (wide area network), wireless network, point-to-point network, or another configuration.
  • communication over the network 104 may use TCP/IP (Transmission Control Protocol/Internet Protocol). Other common Internet protocols used for such communication include HTTPS, FTP, AFS, and WAP, as well as secure communication protocols.
  • the whole slide imaging system 101 comprises a camera sensor 106, an objective 108, a slide stage 110, a processor 114, and an image analysis module 116.
  • the objective 108 is initially placed at a first distance from a sample located in a slide placed on the slide stage 110 and is moved from the first distance to a second distance from the sample.
  • the camera sensor 106 is configured to capture a plurality of images of a portion of the slide having the sample placed in the slide stage 110 when the objective 108 is moving.
  • the processor 114 stores the plurality of captured images in at least one of the data repository 103 and internal memory (not shown) of the whole slide imaging system 101.
  • the image analysis module 116 is configured to determine a focused image from the captured plurality of images and the processor 114 is configured to determine an optimum position of the objective 108 corresponding to the position at which the focused image is captured.
  • the processor 114 is also configured to dynamically compute a next position for moving the objective 108 based on the optimum position and a displacement.
  • the processor 114 enables the movement of the objective 108 to the next position and controls the camera sensor 106 to capture another plurality of images, repeating the process of determining the focused image and moving to yet another next position till the end of the slide.
  • the data repository 103 stores the plurality of images captured for each portion of the slide, the one or more focused images, and the optimum position data corresponding to each of the one or more focused images determined during every repetition.
  • the whole slide imaging system 101 may be operated based on instructions received from the user device 102 via the network 104.
  • the user device 102 may be a mobile device or a computing device including the functionality for communicating over the network 104.
  • the user device 102 can be a conventional web-enabled personal computer, a mobile computer (laptop, notebook, or subnotebook), a smartphone (iOS, Android), a personal digital assistant, a wireless electronic mail device, a tablet computer, or another device capable of communicating both ways over the Internet or other appropriate communications network.
  • the user device 102 may comprise an integrated software application with a user interface that enables interaction with the whole slide imaging system 101.
  • Figure 2 illustrates a block diagram of the whole slide imaging system of Figure 1 in accordance with embodiments of the present disclosure.
  • the whole slide imaging system (hereinafter referred to as system) 101 comprises the camera sensor 106, the objective 108, the slide stage 110, the processor 114, the image analysis module 116, a control unit 202, an X-axis motor 204, a Y-axis motor 206, a Z-axis motor 208, a movement detection and computation module 118, a user interface 220, and a memory 250.
  • the processor 114 is coupled to the control unit 202 to control the camera sensor 106 and the objective 108.
  • the control unit 202 comprises a camera control module 222 and a motor control module 224.
  • the slide stage 110 may be configured to hold the slide having the sample and the objective 108 is placed at a first distance from the sample located in the slide.
  • the sample on the slide may be of non-uniform thickness.
  • the objective 108 moves in a range of motion along Z-axis to a second distance such that focus lies within the range of motion.
  • the motor control module 224 is configured to enable at least one of X-axis motor 204 and Z-axis motor 208 to move the objective 108 to the second distance along z-axis in a constant range of motion.
  • the motor control module 224 initially enables the Z-axis motor 208 for moving the objective 108 along z- axis and further enables the X-axis motor 204 for moving the objective 108 along the x-axis.
  • the camera control module 222 enables the camera sensor 106 coupled to the objective 108 to capture a plurality of images of at least a portion of the slide when moving the objective 108 in constant range along the Z-axis such that the focus lies in the range of motion. This is because, to image a field of view (FOV), the sample of the FOV must lie at the focal point of the objective 108 within a tolerance named Depth of Field (DOF) of objective 108. Therefore, the distance between the objective 108 and a focal plane of the sample should be constant throughout the slide.
  • the objective 108 needs to be moved along the Z-axis such that the sample is always in focus.
  • the slide stage 110 moves in at least one of x-axis, y-axis, and z-axis when the objective 108 is moving in at least one of y-axis, z-axis, and x-axis, thereby creating relative motion between the slide stage 110 and the objective 108.
  • the motor control module 224 enables both Z-axis motor 208 and X-axis motor 204 for moving the objective 108 simultaneously in both x and z directions and the camera control module 222 enables the camera sensor 106 coupled to the objective 108 to capture a plurality of images of at least a portion of the slide when moving the objective 108 simultaneously in both x and z directions.
  • the processor 114 stores the plurality of captured images of at least a portion of the slide in the memory 250.
  • the processor 114 is configured to determine a speed of the z-axis motor (Sz) based on at least one of maximum frame rate of the camera sensor and a depth of field of the objective.
  • the processor 114 is also configured to determine a speed of the x-axis motor (Sx) based on the speed of the objective along the z-axis (Sz), and a field of view of the camera sensor 106, and an overlapping portion between two successive images, and enables the simultaneous movement of the objective 108 along both x and z directions.
  • the speed of the Z-axis motor 208 is given by the following eq. (1): Sz ≤ FPS × DOF (1) wherein, FPS is the maximum frame rate in terms of frames per second of the camera; and DOF is the depth of field of the system.
  • the speed of the X-axis motor 204 is given by the following eq. (2): Sx ≤ (Sz × Sn/M) × (2 − P)/(2 × DZmax) (2) wherein, Sn is the sensor size of the camera; M is the optical magnification; P is an overlapping portion of the two images; and DZmax is a maximum Z travel in the system.
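As a sketch, the two speed limits above can be evaluated in Python; the function names and all numeric values below are illustrative assumptions, not figures from the disclosure:

```python
def max_z_speed(fps, dof):
    # Eq. (1): the objective should travel at most one depth of
    # field per captured frame, so Sz <= FPS * DOF.
    return fps * dof

def max_x_speed(sz, sn, m, p, dz_max):
    # Eq. (2): Sx <= (Sz * Sn / M) * (2 - P) / (2 * DZmax), where
    # Sn is the camera sensor size, M the optical magnification,
    # P the fractional overlap between successive images, and
    # DZmax the maximum Z travel of the system.
    return (sz * sn / m) * (2 - p) / (2 * dz_max)

# Illustrative numbers only: 100 fps camera, 1 um depth of field,
# 13 mm sensor at 40x magnification, 10% overlap, 50 um Z travel.
sz = max_z_speed(fps=100, dof=1.0)                      # um/s along Z
sx = max_x_speed(sz, sn=13000, m=40, p=0.1, dz_max=50)  # um/s along X
```

Both functions return upper bounds; an implementation would drive the motors at or below these speeds.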
  • the plurality of images captured during the movement of the objective 108 may be stored within the memory 250 or in the data repository 103.
  • the images can be in any format such as, but not limited to, bitmap picture (BMP), joint photographic experts’ group (JPEG), portable network graphics (PNG), or tagged image file format (TIFF).
  • the user interface 220 enables a user of the whole slide imaging system 101 to interact with the whole slide imaging system 101 for capturing images of the entire slide.
  • the user interface 220 may be a graphical user interface (GUI) or buttons or a touch interface or any other similar interface that enables the user to interact with the whole slide imaging system 101.
  • the image analysis module 116 is configured to retrieve a plurality of images from the memory 250 and determine a focused image from the plurality of images of at least a portion of the slide.
  • the focused image from the plurality of images is determined by processing the plurality of images to measure focus of each image, and determining the focused image based on the measured focus of each image.
  • the movement detection and computation module 118 is configured to estimate an optimum position corresponding to the position in which the focused image is captured.
  • the movement detection and computation module 118 is also configured to compute a next position for moving the objective based on the optimum position and a displacement required from the optimum position. In one embodiment, the movement detection and computation module 118 is configured to determine the displacement required for moving the objective 108 from the optimum position to the next position. The movement detection and computation module 118 determines the displacement based on at least one of depth of field (df), a first correction factor to adjust non-uniformity of a sample placed in the slide, and a second correction factor to adjust tilt of the slide.
  • the second correction factor is determined based on an angle of tilt for the slide and a size of field of view (dfov), wherein the dfov is based on size and magnification of a camera sensor 106 coupled with the objective, and a field of view of the objective 108.
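The displacement computation described above can be sketched as follows; the way the three named factors combine is an assumption (the disclosure only names them), and every numeric value is illustrative:

```python
import math

def fov_size(sensor_size, magnification):
    # Size of one field of view at the sample plane (dfov),
    # derived from the camera sensor size and magnification.
    return sensor_size / magnification

def required_displacement(df, nonuniformity, tilt_deg, dfov):
    # Hypothetical combination of the three named factors: depth of
    # field (df), a first correction for sample non-uniformity, and
    # a second correction for slide tilt, here taken as the focus
    # shift across one FOV: dfov * tan(tilt angle).
    tilt_correction = dfov * math.tan(math.radians(tilt_deg))
    return df + nonuniformity + tilt_correction

# Illustrative: 1 um DOF, 0.5 um non-uniformity allowance,
# 0.1 degree slide tilt, 13 mm sensor at 40x magnification.
d = required_displacement(1.0, 0.5, 0.1, fov_size(13000, 40))
```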
  • the movement detection and computation module 118 computes the next position for moving the objective 108.
  • the movement detection and computation module 118 and the image analysis module 116 iterate the steps of estimating the optimum position, computing the next position, and capturing the plurality of images of at least another portion of the slide till another portion of the slide is determined to be the end of the slide.
  • the motor control module 224 enables the Y-axis motor 206 by a particular distance in an iterated manner to capture images of the entire slide.
  • Figure 3 illustrates exemplary steps of a method performed by the whole slide imaging system in accordance with embodiments of the present disclosure.
  • the method 300 comprises one or more blocks implemented by the processor 114 for enabling the objective 108 to capture the images of a slide.
  • the method 300 may be described in the general context of computer executable instructions.
  • computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.
  • the order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300. Additionally, individual blocks may be deleted from the method 300 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof.
  • the slide having the sample is placed in the slide stage 110 and the objective 108 is placed at a first distance from the sample located in the slide.
  • a plurality of images of at least a portion of the slide is captured when moving the objective 108 to a second distance from the sample.
  • the motor control module 224 is configured to enable at least one of X-axis motor 204 and Z-axis motor 208 to move the objective 108 to the second distance along z-axis in a constant range of motion.
  • the motor control module 224 initially enables the z-axis motor 208 for moving the objective along z-axis and further enables the x-axis motor 204 for moving the objective 108 along the x-axis.
  • the camera control module 222 enables the camera sensor 106 to capture the plurality of images of at least a portion of the slide when moving the objective 108 in constant range along the z-axis such that the focus lies in the range of motion.
  • the motor control module 224 enables both Z-axis motor 208 and X-axis motor 204 simultaneously in both x and z directions for moving the objective 108.
  • the camera control module 222 enables the camera sensor 106 coupled to the objective 108 to capture a plurality of images of at least a portion of the slide when moving the objective 108 simultaneously in both x and z directions.
  • the processor 114 stores the plurality of images of at least a portion of the slide in the memory 250 coupled with the camera sensor 106.
  • the processor 114 is configured to determine a speed of the z-axis motor (Sz) based on at least one of maximum frame rate of the camera sensor 106 and a depth of field of the objective 108.
  • the processor 114 is also configured to determine a speed of the x-axis motor (Sx) based on the speed of the objective along the z-axis (Sz), and a field of view of the camera sensor 106, and an overlapping portion between two successive images, and enables the simultaneous movement of the objective 108 along both x and z directions.
  • a focused image is determined, along with an optimum position of the objective corresponding to the position at which the focused image is captured.
  • an image analysis module 116 is configured to retrieve a plurality of images from the memory 250 and determine the focused image from the plurality of images of at least a portion of the slide.
  • the focused image from the plurality of images is determined by processing the plurality of images to measure focus of each image and determining the focused image based on the measured focus of each image of the plurality of images.
  • processing of the plurality of images includes applying a filter such as a Sobel filter.
  • the movement detection and computation module 118 is configured to estimate the optimum position of the objective 108 corresponding to the position in which the focused image is captured.
  • the movement detection and computation module 118 is configured to determine the optimum position corresponding to the focused image based on an index of the focused image, an index of a last image in the plurality of images, and a distance moved by the objective 108 when capturing the plurality of images of at least a portion of the slide.
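A minimal sketch of this step in Python: score each image with a gradient-energy measure (a plain-NumPy stand-in for the Sobel filtering mentioned above) and map the index of the sharpest image back to an objective position. The linear index-to-position mapping is an assumption:

```python
import numpy as np

def focus_score(image):
    # Gradient-energy sharpness measure: sum of squared first
    # differences along both axes (a simple stand-in for a
    # Sobel-based focus measure).
    img = np.asarray(image, dtype=float)
    return float((np.diff(img, axis=0) ** 2).sum()
                 + (np.diff(img, axis=1) ** 2).sum())

def optimum_position(images, z_start, z_end):
    # Locate the best-focused image in a stack captured while moving
    # from z_start to z_end, then interpolate its index linearly
    # over the travelled distance to estimate the objective position.
    scores = [focus_score(im) for im in images]
    best = int(np.argmax(scores))
    return z_start + (z_end - z_start) * best / (len(images) - 1)
```

For example, a three-image stack whose middle image is sharpest yields the midpoint of the travel.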
  • a next position for moving the objective 108 is computed based on the optimum position and a displacement required.
  • the movement detection and computation module 118 is configured to determine a displacement of the objective 108 from the optimum position based on at least one of depth of field (df), a first correction factor to adjust non-uniformity of a sample placed in the slide, and a second correction factor to adjust tilt of the slide.
  • the second correction factor is determined based on an angle of tilt for the slide and a size of field of view (dfov), wherein the dfov is based on size and magnification of the camera sensor 106 coupled with the objective 108, and a field of view of the objective 108.
  • a motor control module 224 enables at least one of X-axis motor 204 and Z-axis motor 208 to move the objective to the next position in an opposite direction.
  • the motor control module 224 initially enables the Z-axis motor 208 for moving the objective along z-axis and the camera control module 222 enables the camera sensor 106 coupled to the objective to capture a plurality of images of at least a portion of the slide when moving the objective along the z-axis motion. Later, the motor control module 224 enables the x-axis motor 204 for moving the objective 108 along the x-axis.
  • the motor control module 224 enables both Z-axis motor 208 and X-axis motor 204 for moving the objective 108 simultaneously in both x and z directions and the camera control module 222 enables the camera coupled to the objective 108 to capture a plurality of images of at least a portion of the slide when moving the objective 108 simultaneously in both x and z directions.
  • a last focused image is determined.
  • the image analysis module 116 is configured to determine a last focused image when the determination is made that another portion of the slide is the end of the slide.
  • the whole slide imaging system 101 enables dynamic movement of the objective, thereby reducing the range of motion to determine a focused image in each movement for enabling whole slide imaging of a sample.
  • the disclosed method also reduces the time required by the objective 108 to capture images of a slide, thereby reducing the processing power, memory requirement, and power consumption.
  • a computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored.
  • a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein.
  • the term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., are non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, non-volatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
  • Figure 4 illustrates a perspective view of the objective moving in the range of motion initially along the z-axis and along x-axis in accordance with embodiments of the present disclosure.
  • the objective 108 is initially placed at a first distance from a sample 404 contained in a slide 402.
  • the camera sensor 106 is configured to capture a plurality of images of a portion of the slide 402 when moving the objective 108 from the first distance to a second distance from the sample 404.
  • the processor 114 is configured to determine a focused image from the plurality of images based on measured focus of each image of the plurality of images thus captured.
  • the processor 114 is also configured to estimate an optimum position of the objective 108 corresponding to the position at which the focused image is captured.
  • the processor 114 is also configured to compute a next position for moving the objective 108 based on the optimum position and a displacement required from the optimum position.
  • the processor 114 further enables the camera sensor to capture a plurality of images of another portion of the slide when moving the objective 108 to the next position, wherein the objective 108 initially moves along the z-axis direction and along x-axis direction to reach the next position.
  • the processor 114 is further configured to determine whether another portion of the slide is an end of the slide. If another portion of the slide is not the end of the slide, the steps are repeated as shown in Figure 4 until another portion of the slide is determined to be the end of the slide.
  • Figure 5 illustrates a perspective view of the objective moving in the range of motion along the z-axis and x-axis simultaneously in accordance with embodiments of the present disclosure.
  • the objective 108 is initially placed at a first distance from a sample 404 contained in the slide 402.
  • the camera sensor 106 is configured to capture a plurality of images of a portion of the slide when moving the objective from the first distance to a second distance from the sample.
  • the processor 114 is configured to determine a focused image from the plurality of images based on the measured focus of each image of the plurality of images thus captured.
  • the processor 114 is also configured to estimate an optimum position of the objective 108 corresponding to the position in which the focused image is captured.
  • the processor 114 is also configured to compute a next position for moving the objective 108 based on the optimum position and a displacement required from the optimum position.
  • the processor 114 further enables the camera sensor to capture a plurality of images of another portion of the slide when moving the objective 108 to the next position, wherein the objective 108 moves simultaneously along the z-axis direction and x-axis direction to reach the next position.
  • the processor 114 is further configured to determine whether another portion of the slide is an end of the slide.
  • the processor 114 is configured to determine a speed of the z-axis motor (Sz) for moving to the next position based on at least one of maximum frame rate of the camera sensor 106 and a depth of field of the objective 108.
  • the processor 114 is also configured to determine a speed of the x-axis motor (Sx) based on the speed of the objective along the z-axis (Sz), and a field of view of the camera sensor 106, and an overlapping portion between two successive images. If another portion of the slide is not the end of the slide, the steps are repeated as shown in Figure 5 until another portion of the slide is determined to be the end of the slide.
  • the continuous movement of the objective 108 simultaneously along the X-axis and Z-axis eliminates errors caused by the start-stop mechanism and also reduces the time taken to capture a whole slide.
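The end-to-end loop the disclosure describes (capture a stack, find the optimum, step to the next position, repeat until the end of the slide) can be sketched with the hardware and analysis steps passed in as callables; all hook names are assumptions, not APIs from the disclosure:

```python
def scan_slide(capture_stack, best_focus, displacement, at_end, z0, z1):
    # For each field of view: capture images over the current Z range,
    # estimate the optimum focus position, then centre a reduced range
    # on that optimum (optimum +/- the required displacement) for the
    # next field of view, until the end of the slide is reached.
    optima = []
    lo, hi = z0, z1
    while True:
        stack = capture_stack(lo, hi)       # images over [lo, hi]
        z_opt = best_focus(stack, lo, hi)   # position of sharpest image
        optima.append(z_opt)
        if at_end():
            return optima
        d = displacement(z_opt)             # df + correction factors
        lo, hi = z_opt - d, z_opt + d       # dynamically reduced range

# Tiny dry run with fake hooks (illustrative only):
_flags = iter([False, True])
result = scan_slide(
    capture_stack=lambda lo, hi: [lo, hi],
    best_focus=lambda stack, lo, hi: (lo + hi) / 2,
    displacement=lambda z: 1.0,
    at_end=lambda: next(_flags),
    z0=0.0, z1=4.0,
)
```

The dynamically reduced Z range around each optimum is what shortens the per-FOV search compared with sweeping the full DZmax at every field of view.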


Abstract

Disclosed herein is a method and system for enabling whole slide imaging of a sample. The method comprises initializing an objective at a first distance from the sample located in a slide and capturing a plurality of images of a portion of the slide when moving the objective from the first distance to a second distance from the sample. The method also comprises determining a focused image from the plurality of images, estimating an optimum position of the objective corresponding to position in which the focused image is captured, and dynamically computing a next position for moving the objective based on the optimum position and a displacement required from the optimum position. The method further comprises capturing a plurality of images of another portion of the slide when moving the objective to the next position if another portion of the slide is determined as not the end of the slide and repeating the above steps till another portion of the slide is determined to be the end of the slide.

Description

Title: “METHOD AND SYSTEM FOR ENABLING WHOLE SLIDE IMAGING”
CROSS REFERENCE TO RELATED APPLICATION
This application claims the benefit of priority to Indian Provisional Patent Application Number 201941050779, filed on December 24, 2019, the entire contents of which are hereby incorporated by reference.
TECHNICAL FIELD
The present disclosure relates, in general, to imaging systems and more particularly, but not exclusively to a method and system for enabling whole slide imaging.
BACKGROUND
Whole slide imaging technologies have existed for more than two decades. Though many different technologies have evolved over the years, there are some drawbacks that need to be addressed. A few of the currently existing technologies compute a focal plane for a slide and subsequently image the slide by moving the objective. However, this method only works if the focus of the slide is planar and the positioning systems are precise enough to accurately position the objective. Some other existing systems compute the focus for each field of view (FOV) and manually adjust the relative objective-to-sample position so that the sample is at the focal plane of the objective, readjusting this for each FOV. Furthermore, a few other existing systems acquire FOV after FOV by moving the objective in the Z-axis within a range of motion at each FOV and then saving the best-focused image. However, these systems require the range to include the prospective focal planes for all FOVs in the slide, thereby resulting in larger movement along the Z-axis. Moreover, when using inaccurate positioning systems, the errors accumulate and cause the movement of the objective to fall out of the focus range, thereby failing to acquire focused images.
Thus, there is a need for a method and a system that are capable of imaging a whole slide when the positioning system for placing the slide and objective is not precise and the sample is non-uniform.
SUMMARY
Embodiments of the present disclosure relate to a method for enabling whole slide imaging of a sample. The method comprises placing an objective at a first distance from the sample located in a slide and capturing a plurality of images of a portion of the slide when moving the objective from the first distance to a second distance from the sample. The method also comprises determining a focused image from the plurality of images and estimating an optimum position of the objective corresponding to a position in which the focused image is captured. The method further comprises dynamically computing a next position for moving the objective based on the optimum position and a displacement required from the optimum position. The method also comprises capturing a plurality of images of another portion of the slide when moving the objective to the next position if another portion of the slide is determined as not the end of the slide and repeating the above steps till another portion of the slide is determined to be the end of the slide.
Another aspect of the present disclosure relates to a system for enabling whole slide imaging of a sample, the system comprising an objective configured to be placed at a first distance from the sample located in a slide. The system also comprises a camera sensor configured to capture a plurality of images of a portion of the slide when moving the objective from the first distance to a second distance from the sample. The system further comprises a processor, coupled to the objective and the camera sensor, configured to determine a focused image from the plurality of images based on the measured focus of each image of the plurality of images thus captured, and estimate an optimum position of the objective corresponding to the position in which the focused image is captured. The processor is also configured to dynamically compute a next position for moving the objective based on the optimum position and a displacement required from the optimum position. The processor is further configured to capture a plurality of images of another portion of the slide when moving the objective to the next position if another portion of the slide is determined as not the end of the slide; and repeat the above steps till another portion of the slide is determined to be the end of the slide. Movement from one FOV to another FOV for imaging the full slide can be done in any manner, including zigzag motion, raster scanning, etc. The system and associated method of the present disclosure overcome one or more of the shortcomings of the prior art. Additional features and advantages may be realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. In the figures, the leftmost digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of device or system and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
Figure 1 illustrates an exemplary architecture of a system for enabling whole slide imaging of a slide in accordance with embodiments of the present disclosure;
Figure 2 illustrates a block diagram of the whole slide imaging system of Figure 1 in accordance with embodiments of the present disclosure;
Figure 3 illustrates exemplary steps of a method performed by the whole slide imaging system in accordance with embodiments of the present disclosure;
Figure 4 illustrates a perspective view of an objective of the whole slide imaging system moving in a range of motion initially along the z-axis and then along the x-axis in accordance with embodiments of the present disclosure; and

Figure 5 illustrates a perspective view of the objective moving in the range of motion along the z-axis and x-axis simultaneously in accordance with embodiments of the present disclosure.
The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION
In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiment thereof has been shown by way of example in the drawings and will be described in detail below. It should be understood, however that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and the scope of the disclosure.
The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device, or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a device or system or apparatus preceded by “comprises... a” does not, without more constraints, preclude the existence of other elements or additional elements in the device or system or apparatus.
Figure 1 illustrates an exemplary architecture of a system for enabling whole slide imaging of a slide in accordance with embodiments of the present disclosure. As illustrated in Figure 1, the exemplary architecture of the system 100 comprises a plurality of components such as a whole slide imaging system 101, a user device 102, and a data repository 103. The whole slide imaging system 101, the user device 102, and the data repository 103 are communicatively coupled via a network 104. The network 104 can be a LAN (local area network), WAN (wide area network), wireless network, point-to-point network, or another configuration. One of the most common types of network in current use is a TCP/IP (Transmission Control Protocol/Internet Protocol) network for communication between a database client and a database server. Other common Internet protocols used for such communication include HTTPS, FTP, AFS, and WAP, as well as secure communication protocols.
The whole slide imaging system 101 comprises a camera sensor 106, an objective 108, a slide stage 110, a processor 114, and an image analysis module 116. The objective 108 is initially placed at a first distance from a sample located in a slide placed on the slide stage 110 and is moved from the first distance to a second distance from the sample. The camera sensor 106 is configured to capture a plurality of images of a portion of the slide having the sample placed in the slide stage 110 when the objective 108 is moving. The processor 114 stores the plurality of captured images in at least one of the data repository 103 and internal memory (not shown) of the whole slide imaging system 101. The image analysis module 116 is configured to determine a focused image from the captured plurality of images and the processor 114 is configured to determine an optimum position of the objective 108 corresponding to position at which the focused image is captured.
The processor 114 is also configured to dynamically compute a next position for moving the objective 108 based on the optimum position and a displacement. The processor 114 enables the movement of the objective 108 to the next position and controls the camera sensor 106 to capture another plurality of images, repeating the process of determining a focused image and moving to yet another next position till the end of the slide. The data repository 103 stores the plurality of images for each portion of the slide, one or more focused images, and optimum position data corresponding to each of the one or more focused images determined during every repetition. The whole slide imaging system 101 may be operated based on instructions received from the user device 102 via the network 104. In one embodiment, the user device 102 may be a mobile device or a computing device including the functionality for communicating over the network 104. For example, the mobile device can be a conventional web-enabled personal computer in the home, a mobile computer (laptop, notebook, or subnotebook), a smartphone (iOS, Android), a personal digital assistant, a wireless electronic mail device, a tablet computer, or another device capable of communicating both ways over the Internet or other appropriate communications network. The user device 102 may comprise an integrated software application with a user interface that enables interaction with the whole slide imaging system 101.
Figure 2 illustrates a block diagram of the whole slide imaging system of Figure 1 in accordance with embodiments of the present disclosure.
The whole slide imaging system (hereinafter referred to as system) 101 comprises the camera sensor 106, the objective 108, the slide stage 110, the processor 114, the image analysis module 116, a control unit 202, an X-axis motor 204, a Y-axis motor 206, a Z-axis motor 208, a movement detection and computation module 118, a user interface 220, and a memory 250. The processor 114 is coupled to the control unit 202 to control the camera sensor 106 and the objective 108. The control unit 202 comprises a camera control module 222 and a motor control module 224.
The slide stage 110 may be configured to hold the slide having the sample, and the objective 108 is placed at a first distance from the sample located in the slide. The sample on the slide may be of non-uniform thickness.
The objective 108, in one embodiment, moves in a range of motion along the Z-axis to a second distance such that the focus lies within the range of motion. The motor control module 224 is configured to enable at least one of the X-axis motor 204 and the Z-axis motor 208 to move the objective 108 to the second distance along the z-axis in a constant range of motion. In one example, the motor control module 224 initially enables the Z-axis motor 208 for moving the objective 108 along the z-axis and further enables the X-axis motor 204 for moving the objective 108 along the x-axis. The camera control module 222 enables the camera sensor 106 coupled to the objective 108 to capture a plurality of images of at least a portion of the slide when moving the objective 108 in a constant range along the Z-axis such that the focus lies in the range of motion. This is because, to image a field of view (FOV), the sample of the FOV must lie at the focal point of the objective 108 within a tolerance termed the depth of field (DOF) of the objective 108. Therefore, the distance between the objective 108 and a focal plane of the sample should be constant throughout the slide. If the sample has varying thickness, or if the slide is not perpendicular to the optical axis, or if the slide is tilted, the objective 108 needs to be moved along the Z-axis such that the sample is always in focus. In another embodiment, the slide stage 110 moves in at least one of the x-axis, y-axis, and z-axis when the objective 108 is moving in at least one of the y-axis, z-axis, and x-axis, thereby creating relative motion between the slide stage 110 and the objective 108.
In another example, the motor control module 224 enables both Z-axis motor 208 and X-axis motor 204 for moving the objective 108 simultaneously in both x and z directions and the camera control module 222 enables the camera sensor 106 coupled to the objective 108 to capture a plurality of images of at least a portion of the slide when moving the objective 108 simultaneously in both x and z directions. The processor 114 stores the plurality of captured images of at least a portion of the slide in the memory 250. In one embodiment, the processor 114 is configured to determine a speed of the z-axis motor (Sz) based on at least one of maximum frame rate of the camera sensor and a depth of field of the objective. The processor 114 is also configured to determine a speed of the x-axis motor (Sx) based on the speed of the objective along the z-axis (Sz), and a field of view of the camera sensor 106, and an overlapping portion between two successive images, and enables the simultaneous movement of the objective 108 along both x and z directions.
In an exemplary embodiment, the speed of the Z-axis motor 208 is given by the following eq. (1):

Sz = FPS * DOF / 2 .... (1)

wherein,
FPS is the maximum frame rate, in frames per second, of the camera; and
DOF is the depth of field of the system.

The speed of the X-axis motor 204 is given by the following eq. (2):

Sx < (Sz * Sn / M) * (2 - P) / (2 * DZmax) .... (2)

wherein,
Sn is the sensor size of the camera;
M is the optical magnification;
P is the overlapping portion of the two images; and
DZmax is the maximum Z travel in the system.
In one implementation, the plurality of images captured during the movement of the objective 108 may be stored within the memory 250 or in the data repository 103. The images can be in any format such as, but not limited to, bitmap picture (BMP), Joint Photographic Experts Group (JPEG), portable network graphics (PNG), or tagged image file format (TIFF). The user interface 220 enables a user of the whole slide imaging system 101 to interact with the whole slide imaging system 101 for capturing images of the entire slide. The user interface 220 may be a graphical user interface (GUI) or buttons or a touch interface or any other similar interface that enables the user to interact with the whole slide imaging system 101.
The image analysis module 116 is configured to retrieve a plurality of images from the memory 250 and determine a focused image from the plurality of images of at least a portion of the slide. In one example, the focused image from the plurality of images is determined by processing the plurality of images to measure the focus of each image, and determining the focused image based on the measured focus of each image. After the focused image is determined, the movement detection and computation module 118 is configured to estimate an optimum position corresponding to the position in which the focused image is captured.
In an exemplary embodiment, the movement detection and computation module 118 is configured to determine the optimum position corresponding to the focused image based on an index of the focused image, an index of a last image in the plurality of images, and a distance moved by the objective when capturing the plurality of images of at least a portion of the slide, as given by the following eq. (3):

d1 = (n - m) * dz / n .... (3)

wherein,
d1 is the optimum position;
n is the index of the last image, equal to the number of images in the plurality of images;
m is the index of the focused image; and
dz is the distance moved by the objective when capturing the plurality of images.
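Eq. (3) can be sketched in Python as follows; the focus scores and sweep distance below are hypothetical values used only for illustration:

```python
def optimum_position(focus_scores, dz):
    # Eq. (3): d1 = (n - m) * dz / n, where n is the index of the last
    # image (the number of images in the sweep) and m is the 1-based
    # index of the sharpest image. d1 is the distance to retrace so that
    # the objective returns to where the focused image was captured.
    n = len(focus_scores)
    m = focus_scores.index(max(focus_scores)) + 1
    return (n - m) * dz / n

# Hypothetical 5-frame sweep over dz = 10 um; the 2nd frame is sharpest,
# so the objective must retrace (5 - 2) * 10e-6 / 5 = 6 um.
d1 = optimum_position([0.2, 0.9, 0.5, 0.3, 0.1], dz=10e-6)
```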
The movement detection and computation module 118 is also configured to compute a next position for moving the objective based on the optimum position and a displacement required from the optimum position. In one embodiment, the movement detection and computation module 118 is configured to determine the displacement required for moving the objective 108 from the optimum position to the next position. The movement detection and computation module 118 determines the displacement based on at least one of depth of field (df), a first correction factor to adjust non-uniformity of a sample placed in the slide, and a second correction factor to adjust tilt of the slide. The second correction factor is determined based on an angle of tilt for the slide and a size of field of view (dfov), wherein the dfov is based on size and magnification of a camera sensor 106 coupled with the objective, and a field of view of the objective 108. Upon determining the displacement of the objective 108, the movement detection and computation module 118 computes the next position for moving the objective 108.
In one exemplary embodiment, the next position to move the objective is computed by the following eq. (4):

d2 = d1 + (df + ds + stage_precision + tan(theta) * dfov) .... (4)

wherein,
d2 is the next position;
d1 is the optimum position;
df is the depth of field;
ds is the non-uniformity of the sample;
stage_precision is the positioning precision of the slide stage;
theta is the angle of tilt of the slide; and
dfov is the size of the field of view.

The movement detection and computation module 118 and the image analysis module 116 iterate the steps of estimating the optimum position, computing the next position, and capturing the plurality of images of at least another portion of the slide till another portion of the slide is determined to be the end of the slide. In an exemplary embodiment, once the end of the slide is reached, the motor control module 224 enables the Y-axis motor 206 by a particular distance in an iterated manner to capture images of the entire slide.
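Eq. (4) can be sketched as follows; all numeric values (depth of field, sample non-uniformity, stage precision, tilt angle, and FOV size) are hypothetical placeholders, not values from the disclosure:

```python
import math

def next_position(d1, df, ds, stage_precision, theta, dfov):
    # Eq. (4): start from the optimum position d1 and add enough headroom
    # to cover the depth of field, the sample non-uniformity, the stage
    # positioning error, and the height change caused by slide tilt
    # across one field of view (tan(theta) * dfov).
    return d1 + (df + ds + stage_precision + math.tan(theta) * dfov)

# Hypothetical values in metres (theta in radians):
d2 = next_position(d1=6e-6, df=1e-6, ds=2e-6, stage_precision=0.5e-6,
                   theta=math.radians(0.1), dfov=325e-6)
```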
Figure 3 illustrates exemplary steps of a method performed by the whole slide imaging system in accordance with embodiments of the present disclosure.
As illustrated in Figure 3, the method 300 comprises one or more blocks implemented by the processor 114 for enabling the objective 108 to capture the images of a slide. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.
The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300. Additionally, individual blocks may be deleted from the method 300 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof.
At step 302, the slide having the sample is placed in the slide stage 110 and the objective 108 is placed at a first distance from the sample located in the slide.
At step 304, a plurality of images of at least a portion of the slide is captured when moving the objective 108 to a second distance from the sample. The motor control module 224 is configured to enable at least one of X-axis motor 204 and Z-axis motor 208 to move the objective 108 to the second distance along z-axis in a constant range of motion. In one example, the motor control module 224 initially enables the z-axis motor 208 for moving the objective along z-axis and further enables the x-axis motor 204 for moving the objective 108 along the x-axis. The camera control module 222 enables the camera sensor 106 to capture the plurality of images of at least a portion of the slide when moving the objective 108 in constant range along the z-axis such that the focus lies in the range of motion.
In another example, the motor control module 224 enables both Z-axis motor 208 and X-axis motor 204 simultaneously in both x and z directions for moving the objective 108. The camera control module 222 enables the camera sensor 106 coupled to the objective 108 to capture a plurality of images of at least a portion of the slide when moving the objective 108 simultaneously in both x and z directions. In one embodiment, the processor 114 stores the plurality of images of at least a portion of the slide in the memory 250 coupled with the camera sensor 106. In one embodiment, the processor 114 is configured to determine a speed of the z-axis motor (Sz) based on at least one of maximum frame rate of the camera sensor 106 and a depth of field of the objective 108. The processor 114 is also configured to determine a speed of the x-axis motor (Sx) based on the speed of the objective along the z-axis (Sz), and a field of view of the camera sensor 106, and an overlapping portion between two successive images, and enables the simultaneous movement of the objective 108 along both x and z directions.
At step 306, a focused image and an optimum position of the objective at which the focused image was captured are determined. In one embodiment, the image analysis module 116 is configured to retrieve a plurality of images from the memory 250 and determine the focused image from the plurality of images of at least a portion of the slide. In one example, the focused image from the plurality of images is determined by processing the plurality of images to measure the focus of each image and determining the focused image based on the measured focus of each image of the plurality of images. In one example, processing of the plurality of images includes applying a filter such as a Sobel filter. After the focused image is determined, the movement detection and computation module 118 is configured to estimate the optimum position of the objective 108 corresponding to the position in which the focused image is captured. In an exemplary embodiment, the movement detection and computation module 118 is configured to determine the optimum position corresponding to the focused image based on an index of the focused image, an index of a last image in the plurality of images, and a distance moved by the objective 108 when capturing the plurality of images of at least a portion of the slide.
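The focus measurement at step 306 can be sketched with a Sobel-based sharpness score. This NumPy implementation illustrates one common focus metric (variance of the Sobel gradient magnitude); it is an assumption of a plausible measure, not necessarily the exact one used by the system:

```python
import numpy as np

def sobel_focus_score(img):
    # Cross-correlate the image with horizontal/vertical Sobel kernels and
    # use the variance of the gradient magnitude as a sharpness score:
    # in-focus images have stronger, more varied edges than defocused ones.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy).var()

def best_focused_index(images):
    # Return the index of the sharpest image in a Z sweep.
    scores = [sobel_focus_score(img) for img in images]
    return scores.index(max(scores))
```

A flat (defocused) field scores zero while an image containing an edge scores higher, so the sweep's sharpest frame is simply the argmax of the scores.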
At step 308, a next position for moving the objective 108 is computed based on the optimum position and a displacement required. The movement detection and computation module 118 is configured to determine a displacement of the objective 108 from the optimum position based on at least one of depth of field (df), a first correction factor to adjust non-uniformity of a sample placed in the slide, and a second correction factor to adjust tilt of the slide. The second correction factor is determined based on an angle of tilt for the slide and a size of field of view (dfov), wherein the dfov is based on size and magnification of the camera sensor 106 coupled with the objective 108, and a field of view of the objective 108. After the displacement of the objective 108 is determined, the movement detection and computation module 118 computes the next position.
At step 310, a plurality of images of at least another portion of the slide is captured when moving the objective to the next position. In one embodiment, a motor control module 224 enables at least one of X-axis motor 204 and Z-axis motor 208 to move the objective to the next position in an opposite direction. In one example, the motor control module 224 initially enables the Z-axis motor 208 for moving the objective along z-axis and the camera control module 222 enables the camera sensor 106 coupled to the objective to capture a plurality of images of at least a portion of the slide when moving the objective along the z-axis motion. Later, the motor control module 224 enables the x-axis motor 204 for moving the objective 108 along the x-axis. In another example, the motor control module 224 enables both Z-axis motor 208 and X-axis motor 204 for moving the objective 108 simultaneously in both x and z directions and the camera control module 222 enables the camera coupled to the objective 108 to capture a plurality of images of at least a portion of the slide when moving the objective 108 simultaneously in both x and z directions.
At step 312, a determination is made whether another portion of the slide is an end of the slide. If another portion of the slide is determined not to be the end of the slide, the method moves to step 306 along the “NO” path and steps 306 to 310 are repeated until another portion of the slide is determined to be the end of the slide; otherwise, the method proceeds to step 314 along the “YES” path. At step 314, a last focused image is determined. The image analysis module 116 is configured to determine the last focused image when the determination is made that another portion of the slide is the end of the slide.
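The iterative loop of method 300 can be sketched as follows. The interfaces `camera.capture()`, `stage.move_z(z)`, `stage.at_end()`, and `stage.advance_x()` are hypothetical stand-ins for the camera control and motor control modules, used only to show the control flow:

```python
def scan_slide(camera, stage, focus_score, n_frames, step, displacement):
    # Sketch of method 300: at each FOV, sweep the objective through a
    # small Z range while capturing frames (steps 304/310), keep the
    # sharpest frame (step 306), and re-centre the next sweep on the
    # previous optimum position minus a displacement (step 308).
    focused = []
    z_start = 0.0
    while True:
        frames, zs = [], []
        for k in range(n_frames):
            z = z_start + k * step
            stage.move_z(z)
            frames.append(camera.capture())
            zs.append(z)
        scores = [focus_score(f) for f in frames]
        best = scores.index(max(scores))
        focused.append(frames[best])
        d1 = zs[best]                    # optimum position for this FOV
        if stage.at_end():               # steps 312/314
            return focused
        z_start = d1 - displacement      # step 308: next sweep start
        stage.advance_x()
```

Because each sweep is re-centred on the previous optimum, the per-FOV Z range stays small even when the slide is tilted or the sample thickness varies, which is the behaviour the disclosure attributes to the dynamic computation of the next position.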
Thus, the whole slide imaging system 101 enables dynamic movement of the objective, thereby reducing the range of motion needed to determine a focused image in each movement for enabling whole slide imaging of a sample.
Further, the disclosed method also reduces the time required by the objective 108 to capture images of a slide, thereby reducing the processing power, memory requirement, and power consumption.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words "comprising," "having," "containing," and "including," and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., are non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, non-volatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
Figure 4 illustrates a perspective view of the objective moving in the range of motion initially along the z-axis and then along the x-axis in accordance with embodiments of the present disclosure.
As illustrated in Figure 4, the objective 108 is initially placed at a first distance from a sample 404 contained in a slide 402. The camera sensor 106 is configured to capture a plurality of images of a portion of the slide 402 when moving the objective 108 from the first distance to a second distance from the sample 404. The processor 114 is configured to determine a focused image from the plurality of images based on the measured focus of each image of the plurality of images thus captured. The processor 114 is also configured to estimate an optimum position of the objective 108 corresponding to the position at which the focused image is captured. The processor 114 is also configured to compute a next position for moving the objective 108 based on the optimum position and a displacement required from the optimum position. The processor 114 further enables the camera sensor to capture a plurality of images of another portion of the slide when moving the objective 108 to the next position, wherein the objective 108 initially moves along the z-axis direction and then along the x-axis direction to reach the next position. The processor 114 is further configured to determine whether another portion of the slide is an end of the slide. If another portion of the slide is not the end of the slide, the steps are repeated as shown in Figure 4 until another portion of the slide is determined to be the end of the slide.
Figure 5 illustrates a perspective view of the objective moving in the range of motion along the z-axis and x-axis simultaneously in accordance with embodiments of the present disclosure.
As illustrated in Figure 5, the objective 108 is initially placed at a first distance from the sample 404 contained in the slide 402. The camera sensor 106 is configured to capture a plurality of images of a portion of the slide when moving the objective from the first distance to a second distance from the sample. The processor 114 is configured to determine a focused image from the plurality of images based on the measured focus of each image of the plurality of images thus captured. The processor 114 is also configured to estimate an optimum position of the objective 108 corresponding to the position in which the focused image is captured. The processor 114 is also configured to compute a next position for moving the objective 108 based on the optimum position and a displacement required from the optimum position. The processor 114 further enables the camera sensor to capture a plurality of images of another portion of the slide when moving the objective 108 to the next position, wherein the objective 108 moves simultaneously along the z-axis direction and x-axis direction to reach the next position. The processor 114 is further configured to determine whether another portion of the slide is an end of the slide. The processor 114 is configured to determine a speed of the z-axis motor (Sz) for moving to the next position based on at least one of the maximum frame rate of the camera sensor 106 and a depth of field of the objective 108. The processor 114 is also configured to determine a speed of the x-axis motor (Sx) based on the speed of the objective along the z-axis (Sz), a field of view of the camera sensor 106, and an overlapping portion between two successive images. If another portion of the slide is not the end of the slide, the steps are repeated as shown in Figure 5 until another portion of the slide is determined to be the end of the slide.
The continuous movement of the objective 108 simultaneously along X-axis and Z-axis will eliminate errors caused due to the start-stop-start-stop mechanism and also reduces the time taken to capture a whole slide.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the embodiments of the disclosure is intended to be illustrative, but not limiting, of the scope of the disclosure.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.

Claims

The Claim:
1. A method of enabling whole slide imaging of a sample, the method comprising: a) placing an objective at a first distance from the sample located in a slide; b) capturing a plurality of images of a portion of the slide when moving the objective from the first distance to a second distance from the sample; c) determining a focused image from the plurality of images based on the measured focus of each image of the plurality of images thus captured; d) estimating an optimum position of the objective corresponding to the position in which the focused image is captured; e) dynamically computing a next position for moving the objective based on the optimum position and a displacement required from the optimum position; f) capturing a plurality of images of another portion of the slide when moving the objective to the next position if another portion of the slide is determined as not the end of the slide; and g) repeating steps (c) to (f) till another portion of the slide is determined to be the end of the slide.
2. The method as claimed in claim 1, further comprising: determining a last focused image from the plurality of images based on the determination that another portion of the slide is the end of the slide.
3. The method as claimed in claim 1, wherein estimating the displacement of the objective includes estimating the displacement based on at least one of depth of field (df), a first correction factor to adjust non-uniformity of the sample, and a second correction factor to adjust tilt of the slide.
4. The method as claimed in claim 3, further comprising determining the second correction factor based on an angle of tilt for the slide and a size of field of view (dfov), wherein the dfov is computed based on size and magnification of a camera sensor coupled with the objective, and a field of view of the objective.
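Claims 3 and 4 compute the objective's displacement from the depth of field (df), a first correction factor for sample non-uniformity, and a second correction factor for slide tilt derived from the tilt angle and the field-of-view size (dfov). The exact formulas are not recited, so the sketch below makes explicit assumptions: a multiplicative first factor, a tilt term of tan(angle) times dfov, and dfov taken as sensor size over magnification.

```python
import math

def field_of_view(sensor_size, magnification):
    # dfov: size of the slide region imaged by the sensor (assumed model;
    # the claim derives it from sensor size, magnification and objective FOV).
    return sensor_size / magnification

def z_displacement(df, c1, tilt_deg, dfov):
    # Second correction factor: height change across one field of view
    # caused by slide tilt (assumed model: tan(tilt) * dfov).
    c2 = math.tan(math.radians(tilt_deg)) * dfov
    return df * c1 + c2
```

With zero tilt the displacement reduces to df scaled by the non-uniformity factor; a tilted slide adds the extra height traversed across one tile.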
5. The method as claimed in claim 1, wherein moving the objective to the next position includes initially moving the objective along the z-axis direction and then along the x-axis direction to reach the next position.
6. The method as claimed in claim 1, wherein moving the objective from the current optimum position to the next position includes moving the objective along the x-axis direction and z-axis direction simultaneously to reach the next position.
7. The method as claimed in claim 6, wherein moving the objective to the next position comprises: determining a speed (Sz) based on at least one of a maximum frame rate of the camera sensor and a depth of field of the objective; moving the objective along the z-axis at the determined speed (Sz) to reach the next position; determining a speed (Sx) based on the speed of the objective along the z-axis (Sz), a field of view of the camera sensor, and an overlapping portion between two successive images; and simultaneously moving the objective to the next position along the x-axis at the determined speed (Sx).
8. The method as claimed in claim 1, wherein determining the focused image from the plurality of images includes: processing the plurality of images to measure focus of each image; and determining the focused image based on the measured focus of each image of the plurality of images.
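Claim 8 determines the focused image by measuring the focus of each captured image. The patent does not name a particular focus measure; the sketch below uses a simple gradient-energy proxy (sum of squared differences between adjacent pixels), one of many common sharpness metrics, purely as an illustration.

```python
def focus_measure(image):
    # Gradient energy: sum of squared differences between horizontally
    # adjacent pixels; sharper images score higher (illustrative metric,
    # not the one specified by the patent).
    return sum((row[i + 1] - row[i]) ** 2
               for row in image
               for i in range(len(row) - 1))

def best_focused_index(images):
    # Index of the image with the highest measured focus.
    return max(range(len(images)), key=lambda i: focus_measure(images[i]))
```

A uniform (defocused) tile scores zero, while a tile with strong local contrast scores high, so the sweep's sharpest frame is selected.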
9. The method as claimed in claim 1, wherein determining the optimum position of the objective includes determining the optimum position based on an index of the focused image, an index of a last image in the plurality of images, and a distance moved by the objective when capturing the plurality of images.
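Claim 9 recovers the optimum objective position from the focused image's index, the last image's index, and the distance moved during the sweep. A straightforward reconstruction, assuming a constant-speed sweep (the claim states only the inputs, not the formula), is a linear interpolation:

```python
def optimum_z(z_start, distance_moved, focused_index, last_index):
    # Constant-speed sweep assumed: frame i was captured at a z offset
    # proportional to i / last_index of the total distance moved.
    return z_start + distance_moved * focused_index / last_index
```

For instance, if the sharpest frame is the 5th of 11 frames (indices 0 to 10) over a 10-unit sweep, the optimum position sits halfway through the sweep.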
10. A system for enabling whole slide imaging of a sample, the system comprising: an objective configured to be placed at a first distance from the sample located in a slide; a camera sensor configured to capture a plurality of images of a portion of the slide when moving the objective from the first distance to a second distance from the sample; and a processor, coupled to the objective and the camera sensor, configured to: a) determine a focused image from the plurality of images based on the measured focus of each image of the plurality of images thus captured; b) estimate an optimum position of the objective corresponding to the position in which the focused image is captured; c) dynamically compute a next position for moving the objective based on the optimum position and a displacement required from the optimum position; d) capture a plurality of images of another portion of the slide when moving the objective to the next position if another portion of the slide is determined as not the end of the slide; and e) repeat steps (a) to (d) till another portion of the slide is determined to be the end of the slide.
11. The system as claimed in claim 10, wherein the processor is further configured to determine a last focused image from the plurality of images based on the determination that the next position is the end of the slide.
12. The system as claimed in claim 10, wherein the processor is configured to estimate the displacement of the objective based on at least one of depth of field (df), a first correction factor to adjust non-uniformity of the sample, and a second correction factor to adjust tilt of the slide.
13. The system as claimed in claim 12, wherein the processor is further configured to determine the second correction factor based on an angle of tilt for the slide and a size of field of view (dfov), wherein the dfov is computed based on size and magnification of a camera sensor coupled with the objective, and a field of view of the objective.
14. The system as claimed in claim 10, wherein the processor is configured to enable the objective to move initially along the z-axis direction and along the x-axis direction to reach the next position.
15. The system as claimed in claim 10, wherein the processor is configured to enable the objective to move along the x-axis and z-axis direction simultaneously to reach the next position.
16. The system as claimed in claim 15, wherein the processor is configured to: determine a speed (Sz) based on at least one of a maximum frame rate of the camera sensor and a depth of field of the objective; move the objective along the z-axis at the determined speed (Sz) to reach the next position; determine a speed (Sx) based on the speed of the objective along the z-axis (Sz), a field of view of the camera sensor, and an overlapping portion between two successive images; and simultaneously move the objective to the next position along the x-axis at the determined speed (Sx).
17. The system as claimed in claim 10, further comprising an image analysis module configured to: process the plurality of images to measure focus of each image; and determine the focused image based on the measured focus of each image of the plurality of images.
18. The system as claimed in claim 10, wherein the processor is configured to determine the optimum position based on an index of the focused image, an index of a last image in the plurality of images, and a distance moved by the objective when capturing the plurality of images.
PCT/IN2020/051046 2019-12-24 2020-12-24 Method and system for enabling whole slide imaging WO2021130773A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201941050779 2019-12-24

Publications (1)

Publication Number Publication Date
WO2021130773A1 (en) 2021-07-01

Family

ID=76573142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2020/051046 WO2021130773A1 (en) 2019-12-24 2020-12-24 Method and system for enabling whole slide imaging

Country Status (1)

Country Link
WO (1) WO2021130773A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7638748B2 (en) * 2005-06-22 2009-12-29 TriPath Imaging Method of capturing a focused image of a movable slide via an objective of a microscopy device
US9262836B2 (en) * 2011-10-11 2016-02-16 Acutelogic Corporation All-focused image generation method, device for same, and recording medium for same, and object height data acquisition method, device for same, and recording medium for same


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20906750; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20906750; Country of ref document: EP; Kind code of ref document: A1)