WO2015095912A9 - Overlapped layers in 3d capture - Google Patents

Overlapped layers in 3D capture

Info

Publication number
WO2015095912A9
WO2015095912A9 PCT/AU2014/001148
Authority
WO
WIPO (PCT)
Prior art keywords
images
overlap
area
image
alignment
Prior art date
Application number
PCT/AU2014/001148
Other languages
French (fr)
Other versions
WO2015095912A1 (en)
Inventor
Dmitri Katchalov
Eric Wai-Shing Chong
Original Assignee
Canon Kabushiki Kaisha
Canon Information Systems Research Australia Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Kabushiki Kaisha, Canon Information Systems Research Australia Pty Ltd filed Critical Canon Kabushiki Kaisha
Publication of WO2015095912A1 publication Critical patent/WO2015095912A1/en
Publication of WO2015095912A9 publication Critical patent/WO2015095912A9/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • G02B21/367Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695Preprocessing, e.g. image segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/16Image acquisition using multiple overlapping images; Image stitching

Definitions

  • The current invention relates to a method, system and apparatus for image capture of a microscope slide, and in particular, to a microscope system where multiple views of a specimen are taken and the registration between images of the specimen is determined.
  • Virtual microscopy is a technology that gives physicians the ability to navigate and observe a biological specimen at different simulated magnifications and through different three dimensional (3D) views as though they were controlling a microscope. It can be achieved using a display device, such as a computer monitor or tablet, with access to a database of microscope images of the specimen.
  • The specimen itself is not required at the time of viewing, thereby facilitating archiving, telemedicine and education.
  • Virtual microscopy can also enable the processing of the specimen images to change the depth of field and to reveal pathological features that would be otherwise difficult to observe by eye, for example as part of a computer aided diagnosis system.
  • Capture of images for virtual microscopy is generally performed using a high throughput slide scanner.
  • A specimen is loaded mechanically onto a stage that is moved under the microscope objective as images of different parts of the specimen are captured on a sensor.
  • Adjacent images generally have an overlap region so that the multiple images of the same specimen can be combined into a 3D volume in a computer system attached to the microscope. If the specimen movement can be controlled sufficiently accurately, these images should be able to be combined directly to give a seamless 3D view without any defects.
  • In practice this is not the case, and the specimen movement and optical tolerances of the microscope introduce geometrical distortions such as errors in position and rotation of the neighbouring images.
  • Software algorithms are generally used to process the images to register both the neighbouring images at the same depth and at different depths so that there are no defects between adjoining images.
  • Microscopy is different from other mosaicking tasks in a number of important ways.
  • The specimen is typically moved by the stage under the optics, rather than the optics being moved to capture different parts of the subject as would take place in a panorama.
  • The stage movement can be controlled very accurately by the computer and the specimen may be fixed in a substrate.
  • The microscope is used in a controlled environment - for example, mounted on a vibration isolation platform in a laboratory with a custom illumination set-up - so that the optical tolerances of the system (alignment and orientation of optical components and the stage) are very tight. Therefore, the coarse alignment of the captured tiles for mosaicking can be fairly accurate, the lighting even, and the transform between the tiles well represented by a rigid transform.
  • The scale of certain important features of a specimen can be of the order of several pixels and the features can be densely arranged over the captured tile images. This means that the required stitching accuracy for virtual microscopy is very high. Additionally, given that specimens can be loaded automatically into the microscope, which can then be operated in batch mode, the processing throughput requirements are also high.
  • The image alignment and registration process compares the pixels in the overlapping regions between two neighbouring images to determine the relative deformations in the images. With the relative deformations of certain features, such features can then be aligned, providing registration between the images. In some systems all pixels in the overlapping regions in both images are used to calculate this deformation. However, the speed of the process can be significantly improved by only taking measurements at small image patches within the overlap region. These patch-based techniques can be an order of magnitude faster and, additionally, when the distortions present in the image are small, as is the case in a microscope, they can be highly accurate.
  • An important step when using patch-based techniques is determining where to locate the small patches. Locating patches in areas that contain a lot of texture is important to obtain an accurate estimate of the shift between corresponding patches in different images. A problem arises when there is insufficient texture in the overlap region, which tends to occur in specimens with sparse features.
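The texture-based patch placement described above can be sketched as follows. This is an illustrative sketch only, not the patent's method: the gradient-energy score, the patch size and the toy overlap strip are assumptions introduced for the example.

```python
# Hypothetical sketch: choosing patch locations inside an overlap strip by
# local texture (gradient energy). Names and sizes are illustrative.

def gradient_energy(img, x0, y0, size):
    """Sum of squared horizontal/vertical differences inside a patch."""
    e = 0
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size):
            if x + 1 < len(img[0]):
                e += (img[y][x + 1] - img[y][x]) ** 2
            if y + 1 < len(img):
                e += (img[y + 1][x] - img[y][x]) ** 2
    return e

def select_patches(overlap, size, count):
    """Rank candidate patch positions by texture and keep the strongest."""
    h, w = len(overlap), len(overlap[0])
    candidates = []
    for y0 in range(0, h - size + 1, size):
        for x0 in range(0, w - size + 1, size):
            candidates.append((gradient_energy(overlap, x0, y0, size), x0, y0))
    candidates.sort(reverse=True)
    return [(x0, y0) for _, x0, y0 in candidates[:count]]

# Toy overlap strip: textured block on the left, flat (featureless) elsewhere.
strip = [[(x * y) % 7 if x < 4 else 0 for x in range(12)] for y in range(4)]
print(select_patches(strip, 4, 1))  # → [(0, 0)]
```

The sketch illustrates why a sparse specimen is problematic: when the whole strip is flat, every candidate scores zero and no reliable patch location exists.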
  • A method of registering a plurality of images of a three dimensional specimen captured by a microscope, comprising the steps of:
  • The area of overlap is formed along an edge of each of the captured images.
  • A centre point of overlap may be shifted between the different capture planes.
  • The first area of overlap does not overlap with the second area of overlap.
  • The method may further align a first specific image of the first set in the first capture plane with a second specific image of the second set in the second capture plane by considering at least one third alignable feature not present in the overlap regions of the first and second specific images.
  • The at least one third alignable feature is one of the second alignable features, and the alignment of the specific images forms part of the alignment of the two images of the first set.
  • The method may further comprise aligning a third specific image of the first set in the first capture plane with a fourth specific image in the second set in the second capture plane by considering at least one fourth alignable feature not present in the overlap regions of the third and fourth specific images, wherein
  • the at least one fourth alignable feature is one of the second alignable features;
  • the first and third specific images comprise the two images having the first area of overlap in the first capture plane;
  • the second and fourth specific images comprise the two images having the second area of overlap in the second capture plane; and
  • the alignment of the two images of the first set derives from the alignment of the first and second specific images, the alignment of the second and fourth specific images, and the alignment of the third and fourth specific images.
  • The offset of the second area of overlap is determined based on a distribution of patches in one dimension in images along a previous capture plane.
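One plausible reading of determining the offset from a one-dimensional patch distribution can be sketched as follows; the gap-seeking rule, the function name and the values are illustrative assumptions, not the claimed procedure.

```python
# Illustrative sketch (not the patent's exact algorithm): choose the next
# layer's overlap offset from the one-dimensional distribution of patch
# positions found along the previous capture plane.

def choose_offset(patch_xs, overlap_width, span):
    """Place the next layer's overlap band over the widest feature gap.

    patch_xs      -- x-positions of usable patches in the previous layer
    overlap_width -- width of the overlap band in pixels
    span          -- total extent of the seam in x
    """
    edges = [0] + sorted(patch_xs) + [span]
    # find the widest gap between consecutive patch positions
    widest = max(range(len(edges) - 1), key=lambda i: edges[i + 1] - edges[i])
    centre = (edges[widest] + edges[widest + 1]) // 2
    # centre the band on the gap, clamped to the seam extent
    return max(0, min(span - overlap_width, centre - overlap_width // 2))

# Patches cluster near x=100..140; the widest feature gap is to the right.
print(choose_offset([100, 120, 140], overlap_width=64, span=1000))  # → 538
```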
  • Fig. 1 shows a high-level system diagram for a general microscope capture system;
  • Fig. 2A is an illustration of the problem associated with aligning sparse image tiles;
  • Fig. 2B is a prior-art approach to overcoming the problem with aligning sparse image tiles;
  • Fig. 3A is an illustration of the captured image tiles in a single layer of a 3D specimen;
  • Fig. 3B is an illustration of the captured image tiles as used to stitch a single layer of a 3D specimen;
  • Fig. 3C is an illustration of the patches selected on an image overlap region between two horizontally adjacent image tiles;
  • Fig. 3D is an illustration of the shift estimate calculated between two patches in adjacent image tiles;
  • Figs. 4A and 4B illustrate a 3D image stack with K columns, L rows and M layers;
  • Fig. 5 is a schematic flow diagram that illustrates a general overview of a method that can be used to generate a stitched image stack from image tiles captured by a microscope system according to the present disclosure;
  • Fig. 6 is a schematic flow diagram that illustrates a method of overlap offset image capturing with a movable stage;
  • Figs. 7A and 7B provide a comparison of a standard (prior art) image stack and an image stack captured according to the present disclosure;
  • Figs. 8A to 8D illustrate image alignment using tiles captured by overlap offset capturing;
  • Fig. 9 is a schematic flow diagram that illustrates a method of registering adjacent images;
  • Fig. 10 is a schematic flow diagram illustrating a method of estimating global transforms from local shift estimates;
  • Fig. 11 is a schematic flow diagram illustrating a method of determining overlap offset values;
  • Fig. 12 is a schematic flow diagram illustrating a method of determining a feature gap of a sparse specimen;
  • Fig. 13 is an illustration of a sensor arrangement for implementing overlap offset capturing;
  • Figs. 14A and 14B illustrate an advantage of using overlap offset capturing over enlarging the overlap region; and
  • Figs. 15A and 15B collectively form a schematic block diagram representation of a general purpose computer system with which the arrangements described may be practiced.
  • Fig. 1 shows a high-level system diagram for a general microscope capture system 100.
  • A specimen 102 is a semi-transparent sample to be inspected.
  • The specimen 102 is suitably prepared and then fixed on a transparent slide and physically positioned on a movable stage 110 that is under the lens of a microscope 101.
  • The specimen 102 has a spatial extent larger than the field of view of the microscope 101 in any or all of the three directions X, Y, and Z.
  • The stage 110 of the microscope 101 moves to allow multiple and overlapping parts of the specimen 102 to be captured by a digital camera 103 mounted to the microscope 101.
  • The camera 103 captures one or more images at each stage location. Multiple images can be taken with different optical settings or using different types of illumination.
  • These captured images 104 are passed to a computer system 105, which can either start processing the images immediately or store them in temporary storage 106 for later processing.
  • The computer system 105 is coupled to the microscope 101 via a connection 108 to permit automated control of movement of the stage 110 and of the optical setting (e.g., focus) of the microscope 101.
  • The computer 105 can also be configured to control the types of illumination (not illustrated).
  • The captured tiles 104 are captured in an array such that transversely adjacent tiles have a small overlap region which gives different views of the same part of the specimen 102.
  • Fig. 3A shows an example of image tiles 1 to LK that have been captured in a two dimensional (2D) rectangular array of L rows with K columns. Portions of any two adjacent tiles, corresponding to an overlap region, show the same part of the specimen 102.
  • The overlap region, forming the area of overlap between adjacent images, is formed along an edge of each of the captured images. For instance, for the horizontally adjacent tiles 1 (301) and 2 (302), the shaded regions 303 and 304 contain views of the same region of the specimen 102. That is, the shaded regions 303 and 304 form an overlap region between the tiles 301 and 302.
  • Similarly, the shaded edge regions 305 and 306 contain views of the same region of the specimen 102.
  • The shaded edge regions 305 and 306 form an overlap region between the tiles 301 and 307.
  • The tolerances of the microscope 101 are typically well characterised so that the possible range of offsets and rotations are known.
  • The computer 105 determines the alignment operations required to stitch all individual tiles together seamlessly to form a stitched image 310, as shown in Fig. 3B.
  • The display system 107 uses the stitched image 310 to display the whole or parts of this virtual slide.
  • The computer 105 can additionally process the information on the virtual slide before presentation to the user to enhance the image. This processing could take the form of changing the image appearance to give diagnostic assistance to the user, and other potential enhancements.
  • The alignment operations between a pair of adjacent tiles 311 and 312 begin by identifying small image regions (e.g., 313), hereafter referred to as image patches or patches, with strong alignment features inside the overlap region 315.
  • Alignment features are natural image textures in the specimen 102 that have a local two-dimensional structure. For each patch 313 inside the overlap region 315 in the tile 311, there is a corresponding patch (e.g., 314) in the overlap region of the adjacent tile 312.
  • The patch-based alignment techniques assume that the local transformation at a particular patch location can be approximated by a translation.
  • A shift estimation is performed on each patch pair (e.g., 313 and 314) to determine a set of 2D shift estimates.
  • The shift estimation process is illustrated in Fig. 3D, in which the contents of an image patch 323 are correlated with the contents of an image patch 324 to determine a 2D shift estimate (Sx, Sy) that best relates the two patches.
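The shift estimation of Fig. 3D can be illustrated with a brute-force search over candidate translations; practical systems would more likely use correlation in the Fourier domain, so this sketch (the function name, search range and toy patches are all assumptions) is for illustration only.

```python
# Minimal sketch: exhaustively search a small range of 2D shifts and keep the
# one minimising the mean squared difference between the two patches.

def shift_estimate(a, b, max_shift):
    """Return (sx, sy) such that b shifted by (sx, sy) best matches a."""
    h, w = len(a), len(a[0])
    best = None
    for sy in range(-max_shift, max_shift + 1):
        for sx in range(-max_shift, max_shift + 1):
            ssd, n = 0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + sy, x + sx
                    if 0 <= yy < h and 0 <= xx < w:
                        ssd += (a[y][x] - b[yy][xx]) ** 2
                        n += 1
            score = ssd / n
            if best is None or score < best[0]:
                best = (score, sx, sy)
    return best[1], best[2]

# b is a copy of a translated one pixel to the right; the estimator recovers
# the translation as (Sx, Sy) = (1, 0).
a = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 8, 7, 6], [5, 4, 3, 2]]
b = [[0, 1, 2, 3], [0, 5, 6, 7], [0, 9, 8, 7], [0, 5, 4, 3]]
print(shift_estimate(a, b, 2))  # → (1, 0)
```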
  • The processes of identifying strong alignment patches and performing shift estimation on those patches are repeated for all horizontally and vertically adjacent tile pairings.
  • The alignment information gathered for all pairings is then used to calculate the transforms required to stitch all the captured tiles 104 together into a seamless mosaic or stitched image 310 (Fig. 3B).
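The step of turning pairwise alignment measurements into stitching transforms can be illustrated in a deliberately simplified one-row case; with a full 2D grid of tile pairings, the analogous computation is a least-squares solve over all measured pairs. The function name and values below are illustrative assumptions, not from the patent.

```python
# Simplified sketch: convert pairwise (relative) shift measurements between
# neighbouring tiles in a single row into absolute tile positions, fixing the
# first tile at the origin.

def absolute_positions(pairwise_shifts, nominal_step):
    """Each tile sits at its nominal stage position plus the accumulated
    measured corrections from the shift estimates."""
    positions = [0]
    for shift in pairwise_shifts:
        positions.append(positions[-1] + nominal_step + shift)
    return positions

# Nominal stage step of 900 px; measured residual shifts of a few pixels.
print(absolute_positions([3, -2, 1], nominal_step=900))  # → [0, 903, 1801, 2702]
```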
  • The stitched image 310 represents a single layer of the specimen 102 captured at a particular focal (capture) plane of the microscope 101.
  • The specimen 102 is a 3D object with features that change along the Z-direction (depth).
  • In order to form a 3D model of the specimen 102, multiple image tiles are captured, at each transverse stage location, over a series of depths. The set of depths should span the range of focus from one side to the other side of the specimen 102. After registration, transversely positioned tiles captured at the same focal plane form a single stitched image 310. The set of stitched images at the different depths are registered to form an aligned image stack of the specimen 102.
  • Fig. 4B shows an example of an image stack 420 of the specimen 102 with M layers, each having L rows and K columns.
  • The image stack 420 consists of a number of stitched image layers such as layer 430, and each stitched layer 410, seen in Fig. 4A, is registered together using captured tiles, such as tiles 412 and 414, based on local alignment information derived from patches within overlap regions (e.g., 413 and 415) between the adjacent tiles 412, 414.
  • The 2D stitching within a single layer can be extended to 3D stitching by including tiles in the Z direction.
  • The basic idea is to make measurements of the relative transforms between two adjacent tiles in X, Y or Z, and use those measurements to estimate the absolute transforms.
  • The measurements for 2D stitching are derived from patches in an overlap region between two adjacent tiles on the same focal plane of the microscope 101. Because the overlap occurs within a single layer, it can be referred to as an intra-layer overlap.
  • A pair of adjacent tiles in the Z direction forms an inter-layer overlap, where common image features are measured for global transform estimation.
  • The intra-layer overlap region is a narrow region (e.g., 413) connecting two adjacent tiles in the same layer, while the inter-layer overlap region is of the same size as the image tile and occurs between two adjacent tiles across two layers (e.g., tile (m-1)N+1 and tile mN+1).
  • Hereafter, "overlap" refers to "intra-layer overlap", unless otherwise specified.
  • Fig. 2A provides an illustration of the alignment of two adjacent tiles 210 and 220 from a sparse specimen.
  • The tiles 210 and 220 represent adjacent captures of a specimen 102, each containing a number of biological structures 205 and blank regions 206.
  • The tiles 210 and 220 also have an overlap region, represented by the shaded regions 250 and 260.
  • A larger overlap region means that the total area of the specimen 102 captured by the two adjacent tiles 230 and 240 is less than the total area of the specimen 102 captured by the tiles 210 and 220. This can be seen in the additional biological structures 290 captured in the tile 210.
  • Fig. 14A shows a prior-art stitched image 1410 created using tiles with an enlarged overlap region.
  • Fig. 14B shows a stitched image 1420 created with tiles captured by the overlap offset capturing method disclosed herein.
  • Stitched image 1410 consists of a number of image tiles, such as tile 1430, where each tile overlaps with its adjacent tiles by an amount 1450.
  • Stitched image 1420 is formed using captured tiles 1440 of the same size as the tile 1430 but with a much smaller overlap 1460; thus, for the same number of captured pixels, the total area of stitched image 1420 is significantly larger than the total area of stitched image 1410.
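The area comparison between Figs. 14A and 14B can be checked with simple arithmetic; the tile and overlap sizes below are illustrative assumptions, not values from the figures.

```python
# Back-of-envelope illustration of the area argument: with the same number of
# captured pixels, shrinking the per-pair overlap enlarges the stitched area.

def mosaic_width(tiles, tile_px, overlap_px):
    """Width in pixels of a row of tiles, each overlapping its neighbour."""
    return tiles * tile_px - (tiles - 1) * overlap_px

wide = mosaic_width(10, 1024, 200)   # enlarged overlap (prior-art approach)
narrow = mosaic_width(10, 1024, 32)  # small overlap with overlap offsetting
print(wide, narrow)  # → 8440 9952
```

With ten 1024 px tiles per row, cutting the overlap from 200 px to 32 px widens the mosaic by about 18% for the same capture cost.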
  • Figs. 15A and 15B depict a general-purpose computer system 1500, upon which the various arrangements described can be practiced as part of the microscope capture system 100 of Fig. 1.
  • The computer system 1500 includes: the computer module 105 of Fig. 1, which has input devices such as a keyboard 1502, a mouse pointer device 1503, a scanner 1526, the camera 103, and a microphone 1580; and output devices including a printer 1515, the display device 107 and loudspeakers 1517.
  • An external Modulator- Demodulator (Modem) transceiver device 1516 may be used by the computer module 105 for communicating to and from a communications network 1520 via a connection 1521.
  • the communications network 1520 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN.
  • The modem 1516 may be a traditional "dial-up" modem.
  • The modem 1516 may be a broadband modem.
  • A wireless modem may also be used for wireless connection to the communications network 1520.
  • The computer module 105 typically includes at least one processor unit 1505, and a memory unit 1506.
  • The memory unit 1506 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM).
  • The computer module 105 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1507 that couples to the video display 107, loudspeakers 1517 and microphone 1580; an I/O interface 1513 that couples to the keyboard 1502, mouse 1503, scanner 1526, camera 103 and the stage 110 via the connection 108; and an interface 1508 for the external modem 1516 and printer 1515.
  • In some implementations, the modem 1516 may be incorporated within the computer module 105, for example within the interface 1508.
  • The computer module 105 also has a local network interface 1511, which permits coupling of the computer system 1500 via a connection 1523 to a local-area communications network 1522, known as a Local Area Network (LAN). As illustrated in Fig. 15A, the local communications network 1522 may also couple to the wide network 1520 via a connection 1524, which would typically include a so-called "firewall" device or device of similar functionality.
  • The local network interface 1511 may comprise an Ethernet circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 1511.
  • The I/O interfaces 1508 and 1513 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated).
  • Storage devices 1509 are provided and typically include a hard disk drive (HDD) 1510. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used.
  • An optical disk drive 1512 is typically provided to act as a non-volatile source of data.
  • Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1500.
  • The various storage devices may be used in whole or part, and in some implementations in concert with the networks 1520 and 1522, to represent the functionality of the data storage 106 described with reference to Fig. 1.
  • The components 1505 to 1513 of the computer module 105 typically communicate via an interconnected bus 1504 and in a manner that results in a conventional mode of operation of the computer system 1500 known to those in the relevant art.
  • The processor 1505 is coupled to the system bus 1504 using a connection 1518.
  • The memory 1506 and optical disk drive 1512 are coupled to the system bus 1504 by connections 1519. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun SPARCstations, Apple Mac™, or similar computer systems.
  • The methods of image registration described herein may be implemented using the computer system 1500, wherein the processes of Figs. 5 to 12, to be described, may be implemented as one or more software application programs 1533 executable within the computer system 1500.
  • The software instructions 1531 may be formed as one or more code modules, each for performing one or more particular tasks.
  • The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the image registration methods, and a second part and the corresponding code modules manage a user interface between the first part and the user.
  • The software may be stored in a computer readable medium, including the storage devices described below, for example.
  • The software is loaded into the computer system 1500 from the computer readable medium, and then executed by the computer system 1500.
  • A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product.
  • The use of the computer program product in the computer system 1500 preferably effects an advantageous apparatus for image registration within a microscope system.
  • The software 1533 is typically stored in the HDD 1510 or the memory 1506.
  • The software is loaded into the computer system 1500 from a computer readable medium, and executed by the computer system 1500.
  • The software 1533 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1525 that is read by the optical disk drive 1512.
  • A computer readable medium having such software or computer program recorded on it is a computer program product.
  • The use of the computer program product in the computer system 1500 preferably effects an apparatus for image registration.
  • The application programs 1533 may be supplied to the user encoded on one or more CD-ROMs 1525 and read via the corresponding drive 1512, or alternatively may be read by the user from the networks 1520 or 1522. Still further, the software can also be loaded into the computer system 1500 from other computer readable media.
  • Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 1500 for execution and/or processing.
  • Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu- ray TM Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 105.
  • Examples of transitory or non- tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 105 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
  • The second part of the application programs 1533 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 107.
  • A user of the computer system 1500 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
  • Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1517 and user voice commands input via the microphone 1580.
  • Fig. 15B is a detailed schematic block diagram of the processor 1505 and a memory 1534.
  • The memory 1534 represents a logical aggregation of all the memory modules (including the HDD 1509 and semiconductor memory 1506) that can be accessed by the computer module 105 in Fig. 15A.
  • When the computer module 105 is initially powered up, a power-on self-test (POST) program 1550 executes.
  • The POST program 1550 is typically stored in a ROM 1549 of the semiconductor memory 1506 of Fig. 15A.
  • A hardware device such as the ROM 1549 storing software is sometimes referred to as firmware.
  • The POST program 1550 examines hardware within the computer module 105 to ensure proper functioning and typically checks the processor 1505, the memory 1534 (1509, 1506), and a basic input-output systems software (BIOS) module 1551, also typically stored in the ROM 1549, for correct operation. Once the POST program 1550 has run successfully, the BIOS 1551 activates the hard disk drive 1510 of Fig. 15A.
  • Activation of the hard disk drive 1510 causes a bootstrap loader program 1552 that is resident on the hard disk drive 1510 to execute via the processor 1505. This loads an operating system 1553 into the RAM memory 1506, upon which the operating system 1553 commences operation.
  • The operating system 1553 is a system level application, executable by the processor 1505, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
  • The operating system 1553 manages the memory 1534 (1509, 1506) to ensure that each process or application running on the computer module 105 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 1500 of Fig. 15A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 1534 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 1500 and how such is used.
  • The processor 1505 includes a number of functional modules including a control unit 1539, an arithmetic logic unit (ALU) 1540, and a local or internal memory 1548, sometimes called a cache memory.
  • The cache memory 1548 typically includes a number of storage registers 1544-1546 in a register section.
  • One or more internal busses 1541 functionally interconnect these functional modules.
  • The processor 1505 typically also has one or more interfaces 1542 for communicating with external devices via the system bus 1504, using a connection 1518.
  • The memory 1534 is coupled to the bus 1504 using a connection 1519.
  • The application program 1533 includes a sequence of instructions 1531 that may include conditional branch and loop instructions.
  • The program 1533 may also include data 1532 which is used in execution of the program 1533.
  • The instructions 1531 and the data 1532 are stored in memory locations 1528, 1529, 1530 and 1535, 1536, 1537, respectively.
  • A particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 1530.
  • Alternatively, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1528 and 1529.
  • The processor 1505 is given a set of instructions which are executed therein.
  • The processor 1505 waits for a subsequent input, to which the processor 1505 reacts by executing another set of instructions.
  • Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1502, 1503, data received from an external source across one of the networks 1520, 1522, data retrieved from one of the storage devices 1506, 1509 or data retrieved from a storage medium 1525 inserted into the corresponding reader 1512, all depicted in Fig. 15A.
  • The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 1534.
  • The disclosed image registration arrangements use input variables 1554, which are stored in the memory 1534 in corresponding memory locations 1555, 1556, 1557.
  • The arrangements produce output variables 1561, which are stored in the memory 1534 in corresponding memory locations 1562, 1563, 1564.
  • Intermediate variables 1558 may be stored in memory locations 1559, 1560, 1566 and 1567.
  • each fetch, decode, and execute cycle comprises a fetch operation, which reads an instruction 1531 from a memory location; a decode operation, in which the control unit 1539 determines which instruction has been fetched; and an execute operation, in which the control unit 1539 and/or the ALU 1540 executes the instruction.
  • a further fetch, decode, and execute cycle for the next instruction may be executed.
  • a store cycle may be performed by which the control unit 1539 stores or writes a value to a memory location 1532.
  • Each step or sub-process in the processes of Figs. 5 to 12 is associated with one or more segments of the program 1533 and is performed by the register section 1544, 1545, 1547, the ALU 1540, and the control unit 1539 in the processor 1505 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 1533.
  • the methods of image registration and alignment may alternatively be implemented in dedicated hardware, such as one or more integrated circuits performing the functions or sub-functions to be described.
  • dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
  • A general overview of a method 500 that can be used to address the problem of insufficient alignment features described in the preceding section is shown in Fig. 5.
  • the method 500 begins at step 510, where the microscope 101 is appropriately setup to provide a desired operating environment with suitable initial optical, illumination and stage settings. The setup may be performed manually or automated via control by the computer 105.
  • an appropriate specimen 102 is loaded onto the microscope stage 110 such that a portion of the specimen 102 is in the field of view of the camera 103. For batch processing this step is typically automated using a slide loader.
  • image tiles of the specimen 102 are captured in which the overlap region of a pair of adjacent tiles on a single XY layer is offset, in a direction along the XY plane, relative to the overlap region of a corresponding pair of adjacent tiles on an axially neighbouring XY layer.
  • the purpose of this overlap offset is to introduce additional alignment features by broadening the size of the effective overlap region across successive layers in the Z direction. As a result, registration problems caused by insufficient alignment feature are reduced.
  • the overlap offset capturing step 530 will be described further in detail with reference to Figs. 6, 7, 8, 11, 12, and 13.
  • the capture planes are substantially parallel, notwithstanding minor variations resulting from movement of the stage in the X, Y and Z directions.
  • as the image tiles are captured by the overlap offset capturing step 530, they are transmitted to the computer 105.
  • the computer 105 takes all pairs of adjacent tiles in the 3D image array of Fig. 4A.
  • the computer 105 then performs an image registration step 540 where the local alignment for each of these pairs of adjacent images is determined. This process is repeated for the next X, Y or Z adjacent pairing until all adjacent tile pairings are calculated.
  • the alignment information gathered for all pairings is then used to calculate the alignment operations required to fit all the individual tiles together into a seamless mosaic. Details of the image registration step 540 will be described in detail with reference to Fig. 9.
  • illumination correction over the tiles may be calculated at step 550.
  • the alignment operations and illumination corrections can be applied to the tiles to form a single composite stitched image stack at step 560.
  • These tile images and the operations required to align and display them represent a virtual slide.
  • the tile images, the alignment operations, and the illumination corrections can be stored or transmitted, for example via the networks 1520 and 1522, and the image generated and viewed at a later time and/or at a remote display system.
  • Fig. 7B provides a graphical illustration of the overlap offset capturing step 530 of method 500 using simplified image stacks, in comparison to a traditional capture illustrated in Fig. 7A.
  • Figs. 7A and 7B show two 2 x 1 x 3 image stacks (i.e., 2 tiles in the X direction, 1 tile in the Y direction and 3 tiles in the Z direction) 710 and 720, captured without and with overlap offset, respectively.
  • the image stack 710 is a result of a conventional image capturing technique, in which multiple tiles are captured over a series of depths at each transverse location.
  • tile 711 is captured with the camera 103 focussed at location (x1, y1, z1), then tiles 713 and 715 are captured at locations (x1, y1, z2) and (x1, y1, z3), respectively.
  • tiles 712, 714 and 716 are captured at locations (x2, y1, z1), (x2, y1, z2) and (x2, y1, z3), respectively.
  • overlap regions 717, 718 and 719 with a fixed width 740 exist between adjacent tile pairs 711 and 712, 713 and 714, and 715 and 716, respectively.
  • because the overlap regions 717, 718 and 719 are the same size and at the same transverse position, they contain biological structures at the same transverse region of the specimen 102, albeit at different depths.
  • the arrangements presently disclosed generate the image stack 720, whose layers are offset relative to adjacent layers amongst the layers i, i+1, and i+2. Given a layer of horizontally adjacent tiles, a small offset Δx is introduced to the capture locations of tiles that are horizontally adjacent in a neighbouring layer. Similarly, for a layer of vertically adjacent tiles, a small offset Δy is introduced to the capture locations of tiles that are vertically adjacent in a neighbouring layer.
  • tile 721 is captured with the camera 103 focussed at location (x1, y1, z1), then tiles 723 and 725 are captured at locations (x1 + Δx, y1, z2) and (x1 − Δx, y1, z3), respectively.
  • tiles 722, 724 and 726 are captured at locations (x2, y1, z1), (x2 + Δx, y1, z2) and (x2 − Δx, y1, z3), respectively.
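The capture pattern of Fig. 7B can be sketched as follows. This is an illustrative helper only: the function name and the three-layer offset cycle (0, +Δx, −Δx) are assumptions drawn from the tile locations above, not the patent's implementation.

```python
# Sketch of the Fig. 7B capture locations: the three layers use transverse
# offsets 0, +dx and -dx respectively, so the overlap regions of
# neighbouring layers do not cover the same transverse region.
def capture_locations(xs, y, zs, dx):
    """Map (column, layer) to an (x, y, z) capture location."""
    offsets = [0.0, +dx, -dx]  # per-layer offset cycle (assumed)
    return {(c, l): (x + offsets[l % 3], y, z)
            for c, x in enumerate(xs)
            for l, z in enumerate(zs)}

# Two columns (x1 = 0, x2 = 100), one row, three layers (z1..z3).
locs = capture_locations([0.0, 100.0], 0.0, [0.0, 1.0, 2.0], dx=5.0)
```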
  • overlap regions 727, 728 and 729 of the same width exist between adjacent tile pairs 721 and 722, 723 and 724, and 725 and 726, respectively.
  • in contrast to the traditional approach seen in Fig. 7A, the overlap regions 727, 728 and 729 of Fig. 7B contain biological structures at different transverse regions of the specimen 102. From an image registration point of view, overlap offset capturing allows alignment features within a wider transverse region 750 (Fig. 7B) of the specimen 102 than the transverse region 740 (Fig. 7A) to be used for local alignment measurements.
  • the approach of Fig. 7B affords the same benefits as increasing the size of the overlap region, but without the associated cost: it can be seen in Fig. 7B that the overlap regions 727, 728 and 729 for the overlap offset image stack 720 are the same size as the overlap regions 717, 718 and 719 of Fig. 7A for the conventionally captured image stack 710.
  • image capturing may be performed in any order as long as corresponding overlap regions in neighbouring layers are offset relative to each other.
  • while Fig. 7B is a simplified diagram to illustrate the concept of overlap offset image capturing with a very small 2 x 1 x 3 image stack, it is clear that the same overlap offset image capturing method can be applied to a much larger image stack (e.g., 8 x 11 x 10).
  • Figs. 8A-8D further illustrate the concept of overlap offset capturing with a close up view of the tiles 721, 722, 723 and 724 of Fig. 7B across two layers in the Z direction.
  • in Layer (i), shown in Figs. 8A and 8B, the corresponding tiles 721 and 722 have an overlap region 727 that is represented by the shaded regions 810 and 820 respectively.
  • Within the shaded regions 810 and 820 there are four pairs of image patches (e.g., 811 and 821). Due to the sparseness of the specimen, only one of the pairs of image patches, comprising patches 814 and 824, has any feature for alignment, being feature 893.
  • the feature 893 in this example is considered to be insufficient for estimating the transform between the tiles 721 and 722 because the feature 893 is not seen to afford enough structure, and particularly enough structure within the patches 814 and 824, to discern statistically alignable features.
  • in Layer (i+1), shown in Figs. 8C and 8D, the corresponding tiles 723 and 724 are captured using the overlap offset capturing approach described above.
  • the tiles 723 and 724 have an overlap region 728 that is represented by the shaded regions 830 and 840. Within the shaded regions 830 and 840, there are four pairs of image patches (e.g., 834,844) associated with alignment features 893 and 890, allowing local alignment measurements to be performed.
  • the local alignment measurements between the patches 834 and 844, and the patches 835 and 845, provide for registration of the images 723 and 724. It will be observed that the patches 834 and 844 for the second area 728 (830,840) of overlap in Layer i+1 relate to the same alignable image feature 893 associated with the patches 814 and 824 for the first area 727 (810,820) of overlap of the layer above, Layer i. However, as will also be observed from Figs. 8C and 8D, the patches 835 and 845 in the second area 728 (830,840) of overlap are associated with an alignable feature 890 of the sparse specimen that is not present in the patches of the first area 727 (810,820) of overlap seen in Figs. 8A and 8B.
  • tile 721 overlaps substantially with tile 723, both containing biological structures at mostly the same transverse region of the specimen 102.
  • the biological structures in tile 721 may appear slightly different from the corresponding biological structures in tile 723 due to the different focal planes employed at capture time. It can be assumed that the step size in the Z direction is sufficiently small so that any changes in the biological structures will have minimal impact on registration accuracy, given the axial spread of the microscope point spread function.
  • Inter-layer registration begins by identifying a set of patches 850 (Fig. 8A) within strong alignment features (biological structures) 892, 894 and 895 in tile 721 , and a correspondingly located set of patches 851 for the same alignment features 892, 894 and 895 correspondingly present in tile 723 (Fig. 8C).
  • the alignment features 892,894 and 895 are not present in the overlap regions 810 and 830 of the tiles 721 and 723 respectively.
  • the patch pairs e.g., 850 and 851 are used to determine local alignment measurements, which are then used to estimate the relative transform between the tiles 721 and 723.
  • similarly, identified patch pairs (e.g., 860 and 861) not present in the overlap regions 820 and 840 can be correlated to derive a set of local alignment measurements for estimating the relative transform between the tiles 722 and 724.
  • the tiles 721 and 722 can be registered indirectly through local alignment measurements gathered between the adjacent tile pairs: 721 and 723, 723 and 724, and 724 and 722.
  • the overlap region 727 (810,820) does not cover the same transverse region of the specimen 102 as the overlap region 728 (830,840), so there is an overall larger transverse region of the specimen 102 available for image alignment. With this additional transverse region of the specimen 102, the likelihood of registration problems due to insufficient image features is reduced.
  • adjacent images captured in the same image plane can be aligned using alignable image features contained in an area of overlap between those adjacent images.
  • the patches 834-835 and 844-845 associated with the alignable features 890 and 893 can be used for aligning the image tiles 723 and 724.
  • the specific adjacent images 721 and 722 cannot be directly aligned in the same manner because of the absence of alignable features in the area of overlap 727 (810,820). Registration of the images 721 and 722 of the first layer i with the respective image tiles 723 and 724 of the next layer i+1 can however be performed using further alignable image features that lie outside that area of overlap.
  • Method 600 used at step 530 to perform overlap offset capturing of image tiles of the specimen 102 will now be described in further detail below with reference to Fig. 6.
  • the method 600 is preferably implemented in software stored in the HDD 1510 and executed by the processor 1505 to control the stage 110 via the connection 108 and the camera 103 as parts of the microscope imaging system 100.
  • a double loop structure is used to control the movement of the microscope stage 110 in a 3D array (K columns, L rows and M layers) to suitably place each portion of the specimen 102 in turn in the field of view of the camera 103 for image acquisition.
  • the microscope 101 is operated to move the focal plane to z. This for example may be achieved by moving the stage in the z-direction, or alternatively adjusting the optics of the microscope 101.
  • the microscope stage 1 10 is moved via the connection 108 to a next transverse location x, y.
  • the camera 103 is then operated at step 640, for example via a similar control connection from the computer 105 (not illustrated in Fig. 1 ), to capture an image tile of the specimen 102.
  • the captured tiles 104 can, for example, be communicated to the computer 105 and stored in the HDD 1510, perhaps after temporary storage in the memory 1506.
  • the processor 1505 operates according to step 645 to increment the transverse capture location x,y based on a pre-defined path of scan order, via the connection 108.
  • the preferred arrangement of step 645 is for the microscope stage 110 to move in a tile raster (comb) order.
  • Alternative scan orders, such as the boustrophedon (or meander) order for controlling the movement of the microscope stage 110, may be used to acquire image tiles of the specimen 102.
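The two scan orders can be sketched as below; these helpers are illustrative (the function names are ours) and simply enumerate (column, row) tile indices for a K x L layer.

```python
def raster_order(K, L):
    """Tile raster (comb) order: every row is scanned left to right."""
    return [(x, y) for y in range(L) for x in range(K)]

def boustrophedon_order(K, L):
    """Boustrophedon (meander) order: alternate rows reverse direction,
    shortening the stage move between consecutive tiles."""
    path = []
    for y in range(L):
        xs = range(K) if y % 2 == 0 else range(K - 1, -1, -1)
        path.extend((x, y) for x in xs)
    return path
```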
  • at step 650 the processor 1505 checks if there are further tiles to capture on the current focal plane according to the scan path. If there are, the processing of method 600 returns to step 630 to repeat tile capture, otherwise method 600 continues to step 660.
  • at step 660 the processor 1505 checks if there are further layers to capture. If there are, the processing of method 600 proceeds to step 670, otherwise the processing of method 600 ends.
  • the microscope focal plane is incremented, by one of the approaches discussed above, by a predetermined step size dz, which corresponds to the capture plane of a next layer of image tiles of the specimen 102.
  • the microscope stage 110 is reset to its initial transverse location. A small offset is determined and applied to the transverse location x, y at step 690, so that:
  • in one example, each captured tile is an image of size 5120 by 3840 pixels, the overlap is 100 pixels wide, and the focal plane step size in the Z direction is 1 micron.
  • Method 900 used at step 540 to perform image registration on the image tiles 910 captured in the overlap offset image capturing step 530 will now be described in further detail below with reference to Fig. 9.
  • the method 900 is preferably implemented in software stored in the HDD 1510 and executed by the processor 1505.
  • a loop structure is employed by the method 900 to process each pair of adjacent tiles in turn, starting at step 920 which selects a next pair of adjacent tiles from the captured tiles 910, for example stored in the HDD 1510.
  • the captured tiles 910 are typically stored in the HDD 1510 in a 3D tile array format essentially mirroring the format of capture, an example of which is the format 720 of Fig. 7B.
  • the computer 105 accesses all pairs of adjacent tiles in the 3D tile array 720 (e.g., K x L x M) of Fig. 7B. These tiles are adjacent in one of the X, Y or Z directions.
  • the following steps of the method 900 are discussed with reference to an example with two horizontally (X) adjacent tiles that are shown in Fig. 3C.
  • the basic approach of the method 900 is to determine the distortion required to be applied to tile 312 such that the pixels in the overlap region 316 of tile 312 match the pixels in the overlap region 315 of tile 311.
  • the locations of small patches 313 within tile 311 are calculated and selected at step 930.
  • the patch selection in tile 311 may be carried out in accordance with a number of different methods.
  • One method that can be applied is to base the patch selection on a grid arrangement with a fixed number of patches. For example, as in Fig. 3C, a grid of five rows and one column may be applied to determine the locations of small patches 313.
  • the locations of small patches 313 may be determined by detecting local gradient maxima using techniques such as the Harris corner detector in order to minimise the transform estimation error between the tiles 311 and 312.
  • the locations of corresponding patches 314 in the adjacent tile 312 are then determined using an initial transform between the tiles 311 and 312 derived from prior knowledge (e.g., stage positions during capturing for the tiles 311 and 312), and selected. These corresponding patch locations are the locations of the patches in the first tile 311 offset by the expected offset between the tiles.
  • the specimen 102 is fixed in a rigid position and the tolerances on the optical and physical errors in the microscope 101 are known. Additionally, tolerances of the movement of the microscope stage 110 are well controlled, and typically cause errors of microns in shifts, and tens of milli-radians in rotation. Due to the tight tolerances, the patches can be positioned in a way that ensures a large overlap between the corresponding patches in both tiles.
  • a coarse alignment technique may be used to approximate the alignment between the images and the corresponding patch locations calculated with reference to this approximate alignment.
  • at step 950 the shifts between patches are determined by a shift estimation method such as a correlation-based or gradient-based method.
  • This shift estimation is seen with reference to Fig. 3D which shows two patches from different tiles.
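A correlation-based shift estimate between two such patches can be sketched with an FFT cross-correlation. This is a generic integer-pixel estimator under a circular-shift assumption, not the patent's specific method.

```python
import numpy as np

def correlation_shift(patch_a, patch_b):
    """Estimate the (dy, dx) shift of patch_a relative to patch_b by locating
    the peak of their circular cross-correlation, computed via the FFT."""
    f = np.fft.fft2(patch_a) * np.conj(np.fft.fft2(patch_b))
    corr = np.real(np.fft.ifft2(f))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap the peak index into a signed shift range.
    if dy > patch_a.shape[0] // 2:
        dy -= patch_a.shape[0]
    if dx > patch_a.shape[1] // 2:
        dx -= patch_a.shape[1]
    return int(dy), int(dx)
```

In practice sub-pixel refinement (e.g. interpolating around the correlation peak) would follow; the gradient-based alternative mentioned above instead solves for the shift from image derivatives.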
  • the method 900 then proceeds to step 960, which determines whether each adjacent pair of images has been processed. If not, the method 900 returns to step 920.
  • the alignment information gathered for all pairings is then used at step 970, which will be described in detail with reference to Fig. 10, to estimate the transforms required to fit all the individual tiles 910 together into a seamless mosaic.
  • the number of tile pairs adjacent along the X axis is (K − 1) x L x M; similarly, the number adjacent along the Y axis is K x (L − 1) x M, and the number adjacent along the Z axis is K x L x (M − 1).
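These counts follow directly from the array dimensions and can be checked with a small illustrative helper:

```python
def adjacent_pair_counts(K, L, M):
    """Number of adjacent tile pairs along the X, Y and Z axes of a
    K x L x M tile array."""
    return (K - 1) * L * M, K * (L - 1) * M, K * L * (M - 1)
```

For the 2 x 1 x 3 stack of Fig. 7B this gives 3 X-adjacent, 0 Y-adjacent and 4 Z-adjacent pairs.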
  • the transform between a pair of adjacent tiles can be represented by a coordinate transform such as an affine transform, a projective transform, or a rotation, scale and translation transform.
  • under an affine transform, a pixel location in one tile, where x and y are the horizontal and vertical coordinates respectively, is mapped to a pixel location in another tile by the following transform,
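As a sketch of such a mapping, assuming the common six-parameter affine form (the patent's exact parameterisation is in the equation omitted above):

```python
def affine_map(p, xy):
    """Map a pixel location (x, y) in one tile to (x', y') in the adjacent
    tile under an affine transform with parameters p = (a, b, c, d, tx, ty).
    The parameter layout here is an assumption for illustration."""
    a, b, c, d, tx, ty = p
    x, y = xy
    return (a * x + b * y + tx, c * x + d * y + ty)
```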
  • the shift vectors for the kth location are the two-component vectors of the x and y shifts at the patch locations.
  • the patch selection at step 930 may be performed in a number of different ways.
  • a technique based on the gradient structure tensor allows the expected variance of the measured shifts to be estimated.
  • Suitable patches for shift estimation are regions with the smallest variance.
  • the global transforms estimation process of step 970 for each image tile will now be described in detail with reference to the method 1000 of Fig. 10.
  • the input to the method 1000 at step 1010 is a set of shift estimates derived from corresponding patch locations in adjacent tiles at step 950.
  • the method 1000 begins by forming a least squares estimation framework with the set of shift estimates at step 1010.
  • the process of forming the least squares estimation framework is described as follows:
  • the estimation framework for the transformation parameters will be nonlinear due to the inverse, even if linear estimation techniques are used. Due to this, estimates of the transform parameters are made using a nonlinear least-squares framework in Cartesian coordinates.
  • the image tiles are ordered in the manner as shown in Figs. 4A and 4B.
  • This numbering scheme does not handle the case of there being different numbers of tiles in different rows or depth-layers: however, the algorithm can be adapted to this case with an appropriate change in numbering.
  • p k are the transform parameters for the tile
  • Equation (9) can be formulated as a standard nonlinear least-squares problem by writing the shifts in vector form, ordered as a vector containing all x-adjacent shifts between all tiles first, then all y-adjacent shifts, and finally all z-adjacent shifts. Note that this ordering is arbitrary, and other orderings could be used that may improve the speed of solution of this system:
  • the nonlinear least-squares framework set up at step 1010 can then be solved with the Gauss-Newton method at step 1040, which gives a solution to the parameters p by iteratively solving the linearised normal equations,
  • The exact form of the Jacobian is complicated; however, the arrangement of the block matrix can be understood by noting that each shift is only dependent upon the transform parameters of the adjacent tiles, pu and pv, and therefore the matrix is relatively sparse.
  • the terms in the Jacobian may be generated using a computational algebra package, such as Mathematica™.
  • the Jacobian can further be written as a block matrix with reference to the smaller Jacobians of the individual shift estimates.
  • the blocks are represented by the Jacobian of each shift estimate, as defined in Equation (8).
  • two optional steps 1020 and 1030 may be applied prior to solving the least squares framework at step 1040 to improve the estimation performance.
  • the above least squares framework may suffer from poor matrix conditioning, where shift estimation errors are amplified.
  • regularisation is necessary for the solution of the projective transform estimation in the case that the patches are vertically and horizontally aligned.
  • regularisation is useful when robust estimation is used and large numbers of measurements are removed. In this case regularisation will select the transform that is closest to nominal in the degrees of freedom that are not defined by the measurements.
  • Tikhonov regularisation also known as ridge regression.
  • the goal of Tikhonov regularisation is to minimise the sum of squared differences of Equation (13) for the parameter vector estimate subject to the constraint,
  • L is the regularisation matrix and p reg is the vector of nominal transform parameters of the problem (i.e. the best a-priori estimate of the transform parameters in the absence of measurements).
  • Typical choices for the regularisation matrix L are the identity matrix, in which case the constraint is on the norm of the parameter vector itself, and a finite difference matrix, in which case the constraint is on the smoothness of the parameter vector.
  • the identity matrix is used as it is desired to find solutions close to the nominal transform parameters.
  • Equation (13) subject to Equation (17) is equivalent to minimising the following Lagrange multiplier problem, in which the Tikhonov parameter controls the strength of the regularisation constraint.
  • This can again be solved using the Gauss-Newton formulation, which gives the linearised normal equations in terms of the current iterate of the parameter estimate, the next iterate of the parameter estimate, and the current residual.
  • the Gauss-Newton solution is solved iteratively starting from a suitable initial guess for the parameters, typically given by the nominal solution p reg. Often it is advantageous to use the Levenberg-Marquardt modification of the Gauss-Newton iteration.
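One linearised update of this scheme can be sketched as follows, assuming the standard regularised form min ||r(p)||² + λ ||L (p − p_reg)||²; the function is an illustration of the technique, not the patent's code.

```python
import numpy as np

def tikhonov_gauss_newton_step(J, r, p, p_reg, lam, L=None):
    """One Gauss-Newton update for a Tikhonov-regularised least-squares
    problem.  J is the Jacobian of the model at the current iterate p, and
    r is the current residual (measurements minus model)."""
    if L is None:
        L = np.eye(len(p))  # identity regulariser keeps the solution near p_reg
    A = J.T @ J + lam * (L.T @ L)
    b = J.T @ r - lam * (L.T @ L) @ (p - p_reg)
    return p + np.linalg.solve(A, b)
```

For a linear model with λ = 0 a single step recovers the ordinary least-squares solution; for very large λ the iterate stays near p_reg.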
  • estimation of parameters can be performed in the presence of outliers with robust estimation methods at step 1030. These methods, in general, detect the outliers and either eliminate them from the problem or down-weight them in a weighted least-squares framework. M-estimators are a popular class of robust estimation techniques that can easily be calculated using a weighted least-squares framework. The M-estimator method minimises the objective function of,
  • Here, a tuning constant controls the level of outlier rejection.
  • Equation (20) is equivalent to the minimum of the following iterative reweighted least-squares problem
  • the M-estimator down-weights measurements that have a large deviation from the predicted measurements calculated using the current model estimate, being the current estimate of the M-estimator during iteration.
  • the bi-weight function has the property that measurements with residuals above the cut-off value are completely removed from the problem.
  • the bi-weight function is not the only function that can be used; the following function, which changes asymptotically from least-squares weighting for low residuals to zero weighting for high residuals, is given by,
  • The tuning constant can be related to the probability of outliers in the tails of the distribution.
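The two weighting behaviours can be sketched as below. The bi-weight follows the standard Tukey form with cut-off beta; the asymptotic alternative shown is the Cauchy weight, one common choice (the patent's exact function is in the omitted equation).

```python
def tukey_biweight(residual, beta):
    """Tukey bi-weight: residuals at or beyond the cut-off beta receive
    weight zero, removing those measurements from the problem entirely."""
    u = residual / beta
    return (1.0 - u * u) ** 2 if abs(u) < 1.0 else 0.0

def cauchy_weight(residual, beta):
    """Cauchy weight: falls asymptotically from 1 (plain least squares) for
    small residuals towards 0 for large residuals, with no hard cut-off."""
    return 1.0 / (1.0 + (residual / beta) ** 2)
```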
  • the M-estimator method can be solved by use of a weighted iterative least-squares method, now described.
  • the weighted regularised least-squares problem can be written as where W is the weighting matrix.
  • the associated linearised normal equations can be derived in terms of the current iterate of the parameter estimate, the next iterate of the parameter estimate, and the current residual.
  • the problem is solved using two nested iterations at step 1040.
  • Step 1050 checks if the criterion for convergence of the least squares problem is reached, in which case the processing of method 1000 ends; otherwise processing returns to step 1040 for at least one further iteration.
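The nested iteration (reweighting outside, a least-squares solve inside) can be illustrated with a toy robust line fit; this demonstrates iteratively reweighted least squares in general, not the patent's tile-transform problem.

```python
import numpy as np

def irls_line_fit(x, y, beta=1.0, iters=20):
    """Fit y ~ a*x + b robustly with Cauchy weights: each pass solves a
    weighted least-squares problem, then recomputes the weights from the
    residuals, down-weighting outliers."""
    X = np.column_stack([x, np.ones_like(x)])
    p = np.linalg.lstsq(X, y, rcond=None)[0]        # ordinary LSQ start
    for _ in range(iters):
        r = y - X @ p
        w = 1.0 / (1.0 + (r / beta) ** 2)           # Cauchy weights
        W = np.diag(w)
        p = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return p

x = np.arange(10.0)
y = 2.0 * x + 1.0
y[5] += 50.0                                        # one gross outlier
p = irls_line_fit(x, y)
```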
  • an offset direction index is calculated based on the current layer number m.
  • Nd is the total number of offset settings.
  • In the arrangement described, Nd is 5, giving 5 different offsets.
  • at step 1120 the processor 1505 checks if the offset direction index is 1, in which case processing moves to step 1125, otherwise it continues to step 1130.
  • at step 1125 the overlap offset (Δx, Δy) is set to (0, 0) and processing continues at step 1170.
  • Step 1130 checks if the offset direction index is 2, in which case processing moves to step 1135, otherwise it continues to step 1140.
  • at step 1135 the overlap offset (Δx, Δy) is set to (xoff, 0) and processing continues at step 1170.
  • Step 1140 checks if the offset direction index is 3, in which case processing moves to step 1145, otherwise it continues to step 1150.
  • at step 1145 the overlap offset (Δx, Δy) is set to (0, yoff) and processing continues at step 1170.
  • Step 1150 checks if the offset direction index is 4, in which case processing moves to step 1155, otherwise it continues to step 1160.
  • at step 1155 the overlap offset (Δx, Δy) is set to (−xoff, 0) and processing continues at step 1170.
  • at step 1160 the overlap offset (Δx, Δy) is set to (0, −yoff) and processing continues at step 1170.
  • at step 1170 the overlap offset (Δx, Δy) is applied to the transverse location x, y as described above in Equation (1). Method 1100 ends after step 1170.
  • In the preferred arrangement, xoff is 100 pixels and yoff is also 100 pixels.
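The decision chain of steps 1120 to 1160 reduces to a table lookup keyed by the offset direction index; a sketch, under the assumption that the index is derived as the layer number modulo Nd:

```python
def overlap_offset(m, x_off, y_off, n_d=5):
    """Overlap offset (dx, dy) for layer m, cycling through the five
    settings of method 1100: none, +x, +y, -x, -y."""
    settings = [(0, 0), (x_off, 0), (0, y_off), (-x_off, 0), (0, -y_off)]
    return settings[m % n_d]
```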
  • Fig. 12 is a schematic flow diagram that illustrates a method 1200 of determining a suitable overlap offset size (xoff or yoff) to be used at step 690.
  • Method 1200 is an offline technique for determining statistically a maximum distance between two regions of a particular type of specimen with sufficient biological structures for image alignment. This maximum distance may be referred to as the feature gap of the specimen, and provides for the predetermination of the overlap offset size.
  • a loop structure is used in the method 1200 to analyse the sparseness of the specimen with a moving window approach in which the patch candidate count is determined at each window region across an image tile to determine the maximum region with insufficient biological structures for image alignment.
  • the processing of method 1200 begins with a tile of the specimen 102 that is for example retrieved from the HDD 1510.
  • the tile can be downsampled at step 1210 to a lower resolution to enable faster computation. Typically a downsampling of 4 times is used, which the inventors have found increases the speed of computation without significantly reducing accuracy.
  • a window region of the same size as the overlap 316 is selected at the left border of the tile.
  • the number of patch candidates P c is determined at step 1230 by counting the number of patch locations with significant alignable features in the window region.
  • the process of determining patch candidates may be implemented by applying a Harris corner detector to the window region to generate a list of corner locations. Corner locations that have a corner strength greater than a predefined threshold are potential patch locations. The list of potential patch locations is then sorted according to corner strength in descending order. The list of patch locations is further filtered by deleting points which are within S (the minimum patch separation distance) pixels, e.g., 100 pixels, of another stronger corner. The remaining patches become the patch candidates, and thus the patch count Pc.
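The filtering stage of step 1230 can be sketched as below, taking corner detections as (x, y, strength) tuples (e.g. from a Harris corner detector). The Chebyshev distance test is an assumption, since the text only says "within S pixels".

```python
def patch_candidates(corners, threshold, min_sep):
    """Keep corners above the strength threshold, then greedily suppress any
    corner within min_sep pixels of a stronger one; the survivors are the
    patch candidates counted as Pc."""
    strong = sorted((c for c in corners if c[2] > threshold),
                    key=lambda c: c[2], reverse=True)
    kept = []
    for x, y, s in strong:
        if all(max(abs(x - kx), abs(y - ky)) >= min_sep for kx, ky, _ in kept):
            kept.append((x, y, s))
    return kept
```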
  • a one-dimensional (1D) profile of the patch candidate count per window region along the X direction is updated at step 1240.
  • Step 1250 then checks if there are further window regions to process, in which case processing returns to step 1220, otherwise the processing of method 1200 moves to step 1260.
  • the window region is moved to the right along the X direction by a predefined amount dw.
  • the size of dw depends on whether the optional step 1210 was applied. In the preferred arrangement, with a downsampling of 4 times, dw can be between 1 and 5 pixels.
  • the 1D patch candidate count profile is analysed; regions along the 1D profile where the patch candidate count Pc is below the required patch number P are identified.
  • the region with the largest distance without sufficient alignment structure is determined, which represents the maximum feature gap between biological structures within the specimen 102.
  • the overlap offset size xoff may be set according to this feature gap.
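The gap analysis above amounts to finding the longest run of window positions whose patch candidate count falls below the required number, converted to pixels via the window step dw; a sketch under those assumptions:

```python
def feature_gap(profile, required_patches, dw):
    """Longest contiguous run of window positions in the 1D patch candidate
    count profile that fall below the required count, returned in pixels
    (run length times the window step dw)."""
    longest = run = 0
    for count in profile:
        run = run + 1 if count < required_patches else 0
        longest = max(longest, run)
    return longest * dw
```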
  • the same method is applied in step 1260 in the Y direction to determine the overlap offset size yoff.
  • Method 1200 may be applied a number of times to randomly selected tiles of a given type of specimen to improve measurement accuracy.
  • the determined feature gap may be used to set the offset direction as in method 1100 of Fig. 11. For example, if the overlap is 100 pixels wide, the overlap offset (xoff or yoff) is 100 pixels, and the feature gap is about 350 pixels, then the number of offset settings Nd is increased to 9 with the following sequence of offset settings:
  • the effective overlap region across the layers is then about five times (500 pixels) the size of the default overlap (100 pixels), which should cover the maximum feature gap of 350 pixels.
  • the determined feature gap may be used to set the offset direction and size as in method 1100 of Fig. 11. For example, if the overlap is 100 pixels wide, and the feature gap is about 350 pixels, then the number of offset settings Nd may be set to 4 and the overlap offset (xoff or yoff) set to 350 pixels, with the following sequence of offset settings:
  • Fig. 13 provides an illustration of a microscope 1300 with a suitable sensor arrangement that may be used at step 530 of the method 500.
  • This microscope 1300 includes a stage 1310, on which a specimen 1320 is placed. Light is transmitted through the specimen 1320, then through one or more lenses 1330, split by beam splitters 1340 and 1345, and focused onto multiple sensors 1350, 1360 and 1390. An illustrative light path 1370 through the centre of the lens 1330 is shown, which is split into two (1372 and 1374).
  • the light path 1372 is further split into two (1376 and 1378).
  • the sensors 1350, 1360 and 1390 are arranged to focus on three different depths or three adjacent layers of the specimen 1 20.
  • the capture field of view of the sensor 1350 is offset ⁇ * , shown as 1380, in the X direction relative to the field of view of the sensor 1360.
  • the specific capture locations of the sensors 1350 and 1360 are (x* + Ax,y j ,z k ) and *i > /.3 ⁇ 4 + i), respectively.
  • the capture field of view of the sensor 1390 is offset -Ax, shown at 1395, in the X direction relative to the field of view of the sensor 1360.
  • the actual capture locations of the sensors 1590 and 1560 are - Ax,y j ,z k+2 ) and (*.. ⁇ '/, Zfc+iX respectively.
  • the method 1100 of Fig. 11 may be simplified because the overlap offset in the X direction is built into the sensor arrangement; thus only stage movements in the Y direction are required.
  • a major benefit of this approach is the improvement in capture speed, with two image tiles being captured at any one time.
  • the sensor arrangement in Fig. 13 can be extended to include an offset in the Y direction such that, at each transverse location of the stage 1310 that places the specimen 1320 in the field of view of the sensors, three image tiles are captured at correspondingly offset locations.
  • This way both overlap offsets in the X and Y directions are built into the sensor arrangement, thus removing or minimising the need to add small offsets during image capturing.
  • with additional sensors, capture speed is further increased. Note that this implementation is not illustrated in Fig. 13, as the Y direction is perpendicular to the representation of the microscope 1300.
  • the arrangements described are applicable to the computer and data processing industries, and particularly for the capture of images in digital microscopy.
  • while the arrangements described afford advantages for imaging biological specimens with sparsely separated biological structures, they are generally applicable to imaging, and particularly image stitching of, microscope images.
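The feature-gap measurement and offset selection described in the points above can be illustrated with a short sketch. This is a hypothetical illustration, not the patented implementation: the function names are invented, the 1D profile is assumed to be supplied as a list of per-pixel patch-candidate counts, and the N_d formula is a guess chosen to reproduce the 100-pixel / 350-pixel example.

```python
import math

def largest_feature_gap(patch_counts, required):
    """Length of the longest run of positions along the 1D profile
    where the patch candidate count P_c falls below the required
    patch number P, i.e. the maximum feature gap."""
    best = run = 0
    for count in patch_counts:
        run = run + 1 if count < required else 0
        best = max(best, run)
    return best

def number_of_offsets(overlap, feature_gap):
    """Number of offset settings N_d so that layers, each stepped by
    one overlap width, span the measured feature gap on both sides."""
    half = math.ceil(feature_gap / overlap)  # steps needed each side
    return 2 * half + 1
```

With the numbers from the example above (a 100 pixel overlap and an approximately 350 pixel feature gap), `number_of_offsets(100, 350)` gives N_d = 9, matching the first offset-direction example.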


Abstract

A method of registering a plurality of images of a three dimensional specimen captures a first set of images on a first capture plane (z1, Layer i) including two images (721, 722) having a first area (727; 810, 820) of overlap. A second set of images on a second capture plane (z2, Layer i+1) is captured including two images (723, 724) having a second area (728) of overlap that is offset from the first area of overlap so as to include in the second area of overlap a first alignable image feature not present in the first area of overlap. Images in the second set are aligned using the first alignable image feature in the second area of overlap, and images in the first set are aligned using the alignment of images of the second set and second alignable image features (892, 891) present in images in the first set and correspondingly present in images in the second set.

Description

OVERLAPPED LAYERS IN 3D CAPTURE
REFERENCE TO RELATED PATENT APPLICATION(S)
[0001] This application claims the benefit under 35 U.S.C. §119 of the filing date of Australian Patent Application No. 2013273832, filed December 23, 2013, hereby incorporated by reference in its entirety as if fully set forth herein.
TECHNICAL FIELD
[0002] The current invention relates to a method, system and apparatus for image capture of a microscope slide, and in particular, to a microscope system where multiple views of a specimen are taken and the registration between images of the specimen is determined.
BACKGROUND
[0003] Virtual microscopy is a technology that gives physicians the ability to navigate and observe a biological specimen at different simulated magnifications and through different three dimensional (3D) views as though they were controlling a microscope. It can be achieved using a display device such as a computer monitor or tablet with access to a database of microscope images of the specimen. There are a number of advantages of virtual microscopy over traditional microscopy. The specimen itself is not required at the time of viewing, thereby facilitating archiving, telemedicine and education. Virtual microscopy can also enable the processing of the specimen images to change the depth of field and to reveal pathological features that would be otherwise difficult to observe by eye, for example as part of a computer aided diagnosis system.
[0004] Capture of images for virtual microscopy is generally performed using a high throughput slide scanner. A specimen is loaded mechanically onto a stage that is moved under the microscope objective as images of different parts of the specimen are captured on a sensor. Adjacent images generally have an overlap region so that the multiple images of the same specimen can be combined into a 3D volume in a computer system attached to the microscope. If the specimen movement can be controlled sufficiently accurately, these images should be able to be combined directly to give a seamless 3D view without any defects. Typically this is not the case, and the specimen movement and optical tolerances of the microscope introduce geometrical distortions such as errors in position and rotation of the neighbouring images. Software algorithms are generally used to process the images to register the neighbouring images both at the same depth and at different depths so that there are no defects between adjoining images.
[0005] Microscopy is different from other mosaicking tasks in a number of important ways. Firstly, the specimen is typically moved by the stage under the optics, rather than the optics being moved to capture different parts of the subject as would take place in a panorama. The stage movement can be controlled very accurately by the computer and the specimen may be fixed in a substrate. Also, the microscope is used in a controlled environment, for example mounted on a vibration isolation platform in a laboratory with a custom illumination set up, so that the optical tolerances of the system (alignment and orientation of optical components and the stage) are very tight. Therefore, the coarse alignment of the captured tiles for mosaicking can be fairly accurate, the lighting even, and the transform between the tiles well represented by a rigid transform. On the other hand, the scale of certain important features of a specimen can be of the order of several pixels and the features can be densely arranged over the captured tile images. This means that the required stitching accuracy for virtual microscopy is very high. Additionally, given that specimens can be loaded automatically to the microscope, which can then be operated in batch mode, the processing throughput requirements are also high.
[0006] The image alignment and registration process compares the pixels in the overlapping regions between two neighbouring images to determine the relative deformations in the images. With the relative deformations of certain features, such features can then be aligned, providing registration between the images. In some systems all pixels in the overlapping regions in both images are used to calculate this deformation. However, the speed of the process can be significantly improved by only taking measurements at small image patches within the overlap region. These patch-based techniques can be an order of magnitude faster and, additionally, when the distortions present in the image are small, as is the case in a microscope, they can be highly accurate.
[0007] An important step when using patch-based techniques is determining where to locate the small patches. Locating patches in areas that contain a lot of texture is important to obtain an accurate estimate of the shift between corresponding patches in different images. A problem arises when there is insufficient texture in the overlap region, which tends to occur in specimens with sparse features.
[0008] An existing approach used to overcome the problem of insufficient alignment texture is to increase the size of the overlap region. In many mosaicking applications, an overlap of 20-40% is recommended. The problem with this approach is that a large overlap is wasteful with large portions of the specimen being captured and stored twice. Such redundancy in image capture leads to slow acquisition speed and high data storage requirement. Furthermore, the redundancy causes delays in processing the captured images due to extra time required to transfer and process the additional data.
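The cost of enlarging the overlap can be made concrete with a quick back-of-the-envelope calculation. This is an illustrative one-dimensional model only (the function name is invented, and the redundancy compounds across both axes in 2D):

```python
def redundant_fraction(tile_px, overlap_px, n_tiles):
    """Fraction of captured pixels that are duplicates for a 1D row
    of n_tiles tiles, each sharing overlap_px pixels with the next."""
    captured = n_tiles * tile_px                              # pixels read off the sensor
    unique = tile_px + (n_tiles - 1) * (tile_px - overlap_px)  # pixels of specimen covered
    return (captured - unique) / captured

# A 30% overlap wastes more than a quarter of a 10-tile strip:
# redundant_fraction(1000, 300, 10) -> 0.27
```

By contrast, the same model with a 100 pixel overlap gives a redundancy of only 9%, which motivates the offset-overlap approach disclosed below.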
[0009] A need therefore exists for efficient and effective methods of capturing images of a sparse specimen with a microscope system that substantially overcomes or at least ameliorates the problem of insufficient texture in the overlap region without using an excessively large overlap region.
SUMMARY
[0010] Arrangements are disclosed for digital microscopy where the effective area of overlap between the collection of images is increased without actually increasing the overlaps of the images, enabling the use of features over a wider region for alignment without the need to capture redundant data.
[0011 ] According to one aspect of the present disclosure there is provided a method of registering a plurality of images of a three dimensional specimen captured by a microscope, said method comprising the steps of:
capturing a first set of images on a first capture plane of the specimen, said first set including two images having a first area of overlap;
capturing a second set of images on a second capture plane of the specimen, the second capture plane being substantially parallel to the first capture plane, said second set including two images having a second area of overlap that is offset from the first area of overlap in a direction along the capture planes so as to include in the second area of overlap at least one first alignable image feature not present in the first area of overlap;
aligning the two images in the second set using the at least one first alignable image feature in the second area of overlap; and
aligning at least the two images in the first set using the alignment of the two images of the second set and second alignable image features present in each of the two images in the first set and correspondingly present in each of the two images in the second set.
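The two aligning steps amount to composing relative transforms: when the first area of overlap lacks features, the relative position of the two first-set images is recovered through the second capture plane. Below is a toy sketch with pure 2D translations; this is a simplification (the transforms in practice may include rotation and other terms) and the function names are hypothetical:

```python
def compose(a, b):
    """Compose two 2D translations (dx, dy)."""
    return (a[0] + b[0], a[1] + b[1])

def align_first_set(first_to_second_a, within_second, second_to_first_b):
    """Shift between the two first-set images, obtained via the second
    set: down to the second plane under image A (using the second
    alignable features), across the offset overlap (using the first
    alignable feature), and back up under image B."""
    return compose(compose(first_to_second_a, within_second),
                   second_to_first_b)
```

For example, `align_first_set((0, 1), (5, 0), (0, -1))` recovers a (5, 0) shift between the two first-set images even though their own area of overlap contributed no measurements.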
[0012] Desirably, the area of overlap is formed along an edge of each of the captured images. A centre point of overlap may be shifted between the different capture planes. Most preferably the first area of overlap does not overlap with the second area of overlap.
[0013] The method may further align a first specific image of the first set in the first capture plane with a second specific image of the second set in the second capture plane by considering at least one third alignable feature not present in the overlap regions of the first and second specific images. Preferably the at least one third alignable feature is one of the second alignable features and the alignment of the specific images forms part of the alignment of the two images of the first set.
[0014] In a specific implementation, the method may further comprise aligning a third specific image of the first set in the first capture plane with a fourth specific image in the second set in the second capture plane by considering at least one fourth alignable feature not present in the overlap regions of the third and fourth specific images, wherein
the at least one fourth alignable feature is one of the second alignable features, the first and third specific images comprise the two images having the first area of overlap in the first capture plane,
the second and fourth specific images comprise the two images having the second area of overlap in the second capture plane, and
the alignment of the two images of the first set derives from the alignment of the first and second specific images, the alignment of the second and fourth specific images, and the alignment of the third and fourth specific images.
[0015] Desirably the offset of the second area of overlap is determined based on a distribution of patches in one dimension in images along a previous capture plane.
[0016] Other aspects are also disclosed.

BRIEF DESCRIPTION OF THE DRAWINGS
[0017] At least one embodiment of the invention will now be described with reference to the following drawings, in which:
[0018] Fig. 1 shows a high-level system diagram for a general microscope capture system;
[0019] Fig. 2A is an illustration of the problem associated with aligning sparse image tiles;
[0020] Fig. 2B is a prior-art approach to overcoming the problem with aligning sparse image tiles;
[0021] Fig. 3A is an illustration of the captured image tiles in a single layer of a 3D specimen;
[0022] Fig. 3B is an illustration of the captured image tiles as used to stitch a single layer of a 3D specimen;
[0023] Fig. 3C is an illustration of the patches selected on an image overlap region between two horizontally adjacent image tiles;
[0024] Fig. 3D is an illustration of the shift estimate calculated between two patches in adjacent image tiles;
[0025] Figs. 4A and 4B illustrate a 3D image stack with K columns, L rows and M layers;
[0026] Fig. 5 is a schematic flow diagram that illustrates a general overview of a method that can be used to generate a stitched image stack from image tiles captured by a microscope system according to the present disclosure;
[0027] Fig. 6 is a schematic flow diagram that illustrates a method of overlap offset image capturing with a movable stage;
[0028] Figs. 7A and 7B provide a comparison of a standard (prior art) image stack and an image stack captured according to the present disclosure;

[0029] Figs. 8A to 8D illustrate image alignment using tiles captured by overlap offset capturing;
[0030] Fig. 9 is a schematic flow diagram that illustrates a method of registering adjacent images;
[0031 ] Fig. 10 is a schematic flow diagram illustrating a method of estimating global transforms from local shift estimates;
[0032] Fig. 11 is a schematic flow diagram illustrating a method of determining overlap offset values;
[0033] Fig. 12 is a schematic flow diagram illustrating a method of determining a feature gap of a sparse specimen;
[0034] Fig. 13 is an illustration of a sensor arrangement for implementing overlap offset capturing;
[0035] Figs. 14A and 14B illustrate an advantage of using overlap offset capturing over enlarging the overlap region; and
[0036] Figs. 15A and 15B collectively form a schematic block diagram representation of a general purpose computer system with which the arrangements described may be practiced.
DETAILED DESCRIPTION INCLUDING BEST MODE
Context
[0037] Fig. 1 shows a high-level system diagram for a general microscope capture system 100. A specimen 102 is a semi-transparent sample to be inspected. The specimen 102 is suitably prepared and then fixed on a transparent slide and physically positioned on a movable stage 110 that is under the lens of a microscope 101. The specimen 102 has a spatial extent larger than the field of view of the microscope 101 in any or all of the 3 directions X, Y and Z. The stage 110 of the microscope 101 moves to allow multiple and overlapping parts of the specimen 102 to be captured by a digital camera 103 mounted to the microscope 101. The camera 103 captures one or more images at each stage location. Multiple images can be taken with different optical settings or using different types of illumination. These captured images 104, referred to as image tiles or tiles, are passed to a computer system 105 which can either start processing the images immediately or store them in temporary storage 106 for later processing. The computer system 105 is coupled to the microscope 101 via a connection 108 to permit automated control of movement of the stage 110 and of the optical setting (e.g. focus) of the microscope 101. The computer 105 can also be configured to control the types of illumination (not illustrated).
[0038] The captured tiles 104 are captured in an array such that transversely adjacent tiles have a small overlap region which gives different views of the same part of the specimen 102. Fig. 3A shows an example of image tiles 1 to LK that have been captured in a two dimensional (2D) rectangular array of L rows with K columns. A portion of any two adjacent tiles, corresponding to an overlap region, is of the same part of the specimen 102. The overlap region, forming the area of overlap between adjacent images, is formed along the edge of each of the captured images. For instance, for the horizontally adjacent tiles 1 (301) and 2 (302), the shaded regions 303 and 304 contain views of the same region of the specimen 102. That is, the shaded regions 303 and 304 along the edges of the tiles 301 and 302 form an overlap region between the tiles 301 and 302. Similarly, for the vertically adjacent tiles 1 (301) and K + 1 (307), the shaded edge regions 305 and 306 contain views of the same region of the specimen 102. Thus, the shaded edge regions 305 and 306 form an overlap region between the tiles 301 and 307. Although the exact relationship between the captured tiles 104 is not known, the tolerances of the microscope 101 are typically well characterised so that the possible range of offsets and rotations is known.
[0039] When required, the computer 105 determines the alignment operations required to stitch all individual tiles together seamlessly to form a stitched image 310, as shown in Fig. 3B. The display system 107 then uses the stitched image 310 to display the whole or parts of this virtual slide. The computer 105 can additionally process the information on the virtual slide before presentation to the user to enhance the image. This processing could take the form of changing the image appearance to give diagnostic assistance to the user, and other potential enhancements.
[0040] With reference to Fig. 3C, the alignment operations between a pair of adjacent tiles 311 and 312 begin by identifying small image regions (e.g., 313), hereafter referred to as image patches or patches, with strong alignment features inside the overlap region 315. Alignment features are natural image textures in the specimen 102 that have a local two-dimensional structure. For each patch 313 inside the overlap region 315 in the tile 311, there is a corresponding image patch 314 in the corresponding overlap region 316 in the adjacent tile 312. The patch-based alignment techniques assume that the local transformation at a particular patch location can be approximated by a translation. A shift estimation is performed on each patch pair (e.g., 313 and 314) to determine a set of 2D shift estimates. The shift estimation process is illustrated in Fig. 3D, in which the contents of an image patch 323 are correlated with the contents of an image patch 324 to determine a 2D shift estimate (Sx, Sy) that best relates the two patches. The processes of identifying strong alignment patches and performing shift estimation on those patches are repeated for all horizontally and vertically adjacent tile pairings. The alignment information gathered for all pairings is then used to calculate the transforms required to stitch all the captured tiles 104 together into a seamless mosaic or stitched image 310 (Fig. 3B).
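The per-patch shift estimation of Fig. 3D can be sketched as an exhaustive cross-correlation search. A production system would normally use FFT-based correlation for speed; this brute-force version, with invented names, is only meant to show what the (Sx, Sy) estimate means:

```python
def shift_estimate(ref, mov, max_shift=2):
    """Return the (sx, sy) that maximises the cross-correlation
    between a reference patch and a moving patch, both given as
    equal-size lists of lists of intensities."""
    h, w = len(ref), len(ref[0])
    best_score, best = float("-inf"), (0, 0)
    for sy in range(-max_shift, max_shift + 1):
        for sx in range(-max_shift, max_shift + 1):
            # correlate over the region where the shifted patches overlap
            score = sum(ref[y][x] * mov[y + sy][x + sx]
                        for y in range(h) for x in range(w)
                        if 0 <= y + sy < h and 0 <= x + sx < w)
            if score > best_score:
                best_score, best = score, (sx, sy)
    return best
```

For example, a patch whose single bright feature sits one pixel to the right in the second image yields an estimate of (1, 0).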
[0041] The stitched image 310 represents a single layer of the specimen 102 captured at a particular focal (capture) plane of the microscope 101. However, the specimen 102 is a 3D object with features that change along the Z-direction (depth). In order to form a 3D model of the specimen 102, multiple image tiles are captured, at each transverse stage location, over a series of depths. The set of depths should span the range of focus from one side to the other side of the specimen 102. After registration, transversely positioned tiles captured at the same focal plane form a single stitched image 310. The set of stitched images at the different depths are registered to form an aligned image stack of the specimen 102. The registration between the layers may be based on common image features between adjacent layers. Fig. 4B shows an example of an image stack 420 of the specimen 102 with M layers, each having L rows and K columns. Specifically, the image stack 420 consists of a number of stitched image layers such as layer 430, and each stitched layer 410, seen in Fig. 4A, is registered together using captured tiles, such as tiles 412 and 414, based on local alignment information derived from patches within overlap regions (e.g., 413 and 415) between the adjacent tiles 412, 414.
[0042] The 2D stitching within a single layer, based on tiles in the X and Y directions, can be extended to 3D stitching by including tiles in the Z direction. The basic idea is to make measurements on the relative transforms between two adjacent tiles in X, Y or Z, and use those measurements to estimate the absolute transforms. As described above, the measurements for 2D stitching are derived from patches in an overlap region between two adjacent tiles on the same focal plane of the microscope 101. Because the overlap occurs within a single layer, it can be referred to as an intra-layer overlap. In the case of 3D stitching, a pair of adjacent tiles in the Z direction forms an inter-layer overlap, where common image features are measured for global transform estimation. The main difference between the intra-layer overlap and the inter-layer overlap is that the intra-layer overlap region is a narrow region (e.g., 413) that is connecting two adjacent tiles in the same layer, while the inter-layer overlap region is of the same size as the image tile and the region occurs between two adjacent tiles across two layers (e.g., tile (m−1)N+1 and tile mN+1). Hereafter, "overlap" refers to "intra-layer overlap", unless otherwise specified.
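The tile numbering used above, where tile (m−1)N+1 is paired across layers with tile mN+1, corresponds to a simple linear index. A small bookkeeping sketch, assuming 1-based, row-major numbering with N = K × L tiles per layer (the function name is invented):

```python
def tile_index(layer, row, col, K, L):
    """1-based linear index of the tile at (row, col) in the given
    layer of a stack with K columns and L rows per layer."""
    N = K * L  # tiles per layer
    return (layer - 1) * N + (row - 1) * K + col
```

Tiles at the same (row, col) in consecutive layers then differ by exactly N, which identifies the inter-layer pairs used for the Z-direction measurements.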
[0043] Success of the registration technique as described above relies on having sufficient alignment features in overlap regions, such as the regions 413 and 415. However, there is no guarantee that texture is available in a given overlap region. The lack of alignment features in overlap regions is a major problem for stitching tiles of a sparse specimen, which can have large gaps and blank regions. Fig. 2A provides an illustration of the alignment of two adjacent tiles 210 and 220 from a sparse specimen. The tiles 210 and 220 represent adjacent captures of a specimen 102, each containing a number of biological structures 205 and blank regions 206. The tiles 210 and 220 also have an overlap region, represented by the shaded regions 250 and 260. Within the shaded regions 250 and 260, there are five pairs of image patches (e.g., 255 and 265). Due to the sparseness of the specimen, only one pair of image patches 259 and 269 has any feature for alignment, which is insufficient for estimating the transform between the tiles.
[0044] As described above in the Background section, increasing the size of the overlap region can overcome or reduce the problem of insufficient alignment features. This is illustrated in Fig. 2B where the overlap region, represented by the shaded regions 270 and 280, is
significantly larger than the overlap region (250 and 260) in Fig. 2A, thus, allowing the increased overlap region to cover more biological structures 205. This is reflected in having more patches with strong alignment features (e.g., 275 and 285). However, enlarging the overlap region comes at a cost, as a large overlap is wasteful with large portions of the specimen being captured twice. Such redundancy in capturing leads to slow acquisition speed, and high data storage and processing requirements. With the size of an image tile fixed, a larger overlap region means that the total area of the specimen 102 captured by the two adjacent tiles 230 and 240 is less than the total area of the specimen 102 captured by the tiles 210 and 220. This can be seen in the additional biological structures 290 captured in the tile 210.
[0045] The problem with using an enlarged overlap region to overcome feature gaps in specimens is further illustrated in Figs. 14A and 14B. Fig. 14A shows a prior art stitched image 1410 created using tiles with an enlarged overlap region, and Fig. 14B shows a stitched image 1420 created with tiles captured by the overlap offset capturing method disclosed herein.
Stitched image 1410 consists of a number of image tiles, such as tile 1430, where each tile overlaps with its adjacent tiles by an amount 1450. In comparison, stitched image 1420 is formed using captured tiles 1440 of the same size as the tile 1430 but with a much smaller overlap 1460; thus the total area of stitched image 1420 is significantly larger than the total area of stitched image 1410 after capturing the same number of pixels in both cases.
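The area comparison of Figs. 14A and 14B follows from simple geometry. A minimal sketch (with an invented function name) of the stitched extent along one axis:

```python
def stitched_extent(tile_px, overlap_px, n_tiles):
    """Width (or height) in pixels covered by n_tiles tiles of size
    tile_px stitched with pairwise overlap overlap_px along one axis."""
    return tile_px + (n_tiles - 1) * (tile_px - overlap_px)

# Same pixel budget, different overlap: four 1000 px tiles cover
# 3700 px per side at a 100 px overlap versus 3100 px at 300 px.
```

The smaller the overlap, the more specimen each captured pixel contributes, which is the gain illustrated by stitched image 1420 over stitched image 1410.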
[0046] Figs. 15A and 15B depict a general-purpose computer system 1500, upon which the various arrangements described can be practiced as part of the microscope capture system 100 of Fig. 1.
[0047] As seen in Fig. 15A, the computer system 1500 includes: the computer module 105 of Fig. 1, which has input devices such as a keyboard 1502, a mouse pointer device 1503, a scanner 1526, the camera 103, and a microphone 1580; and output devices including a printer 1515, the display device 107 and loudspeakers 1517. An external Modulator-Demodulator (Modem) transceiver device 1516 may be used by the computer module 105 for communicating to and from a communications network 1520 via a connection 1521. The communications network 1520 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 1521 is a telephone line, the modem 1516 may be a traditional "dial-up" modem. Alternatively, where the connection 1521 is a high capacity (e.g., cable) connection, the modem 1516 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 1520.
[0048] The computer module 105 typically includes at least one processor unit 1505, and a memory unit 1506. For example, the memory unit 1506 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 105 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1507 that couples to the video display 107, loudspeakers 1517 and microphone 1580; an I/O interface 1513 that couples to the keyboard 1502, mouse 1503, scanner 1526, camera 103 and the stage 110 via the connection 108; and an interface 1508 for the external modem 1516 and printer 1515. In some implementations, the modem 1516 may be
incorporated within the computer module 105, for example within the interface 1508. The computer module 105 also has a local network interface 1511, which permits coupling of the computer system 1500 via a connection 1523 to a local-area communications network 1522, known as a Local Area Network (LAN). As illustrated in Fig. 15A, the local communications network 1522 may also couple to the wide network 1520 via a connection 1524, which would typically include a so-called "firewall" device or device of similar functionality. The local network interface 1511 may comprise an Ethernet circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 1511.
[0049] The I/O interfaces 1508 and 1513 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1509 are provided and typically include a hard disk drive (HDD) 1510. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1512 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable, external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1500. The various storage devices may be used in whole or part, and in some implementations in concert with the networks 1520 and 1522, to represent the functionality of the data storage 106 described with reference to Fig. 1.
[0050] The components 1505 to 1513 of the computer module 105 typically communicate via an interconnected bus 1504 and in a manner that results in a conventional mode of operation of the computer system 1500 known to those in the relevant art. For example, the processor 1505 is coupled to the system bus 1504 using a connection 1518. Likewise, the memory 1506 and optical disk drive 1512 are coupled to the system bus 1504 by connections 1519. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Macs, or like computer systems.

[0051] The methods of image registration described herein may be implemented using the computer system 1500 wherein the processes of Figs. 5 to 12, to be described, may be implemented as one or more software application programs 1533 executable within the computer system 1500. In particular, the steps of the methods of image registration are effected by instructions 1531 (see Fig. 15B) in the software 1533 that are carried out within the computer system 1500. The software instructions 1531 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the image registration methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
[0052] The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 1500 from the computer readable medium, and then executed by the computer system 1500. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 1500 preferably effects an advantageous apparatus for image registration within a microscope system.
[0053] The software 1533 is typically stored in the HDD 1510 or the memory 1506. The software is loaded into the computer system 1500 from a computer readable medium, and executed by the computer system 1500. Thus, for example, the software 1533 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1525 that is read by the optical disk drive 1512. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer system 1500 preferably effects an apparatus for image registration.
[0054] In some instances, the application programs 1533 may be supplied to the user encoded on one or more CD-ROMs 1525 and read via the corresponding drive 1512, or alternatively may be read by the user from the networks 1520 or 1522. Still further, the software can also be loaded into the computer system 1500 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 1500 for execution and/or processing.
Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 105. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 105 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
[0055] The second part of the application programs 1533 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 107. Through manipulation of typically the keyboard 1502 and the mouse 1503, a user of the computer system 1500 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1517 and user voice commands input via the microphone 1580.
[0056] Fig. 15B is a detailed schematic block diagram of the processor 1505 and a
"memory" 1534. The memory 1534 represents a logical aggregation of all the memory modules (including the HDD 1509 and semiconductor memory 1506) that can be accessed by the computer module 105 in Fig. 15A.
[0057] When the computer module 105 is initially powered up, a power-on self-test (POST) program 1550 executes. The POST program 1550 is typically stored in a ROM 1549 of the semiconductor memory 1506 of Fig. 15A. A hardware device such as the ROM 1549 storing software is sometimes referred to as firmware. The POST program 1550 examines hardware within the computer module 105 to ensure proper functioning and typically checks the processor 1505, the memory 1534 (1509, 1506), and a basic input-output system software (BIOS) module 1551, also typically stored in the ROM 1549, for correct operation. Once the POST program 1550 has run successfully, the BIOS 1551 activates the hard disk drive 1510 of Fig. 15A. Activation of the hard disk drive 1510 causes a bootstrap loader program 1552 that is resident on the hard disk drive 1510 to execute via the processor 1505. This loads an operating system 1553 into the RAM memory 1506, upon which the operating system 1553 commences operation. The operating system 1553 is a system level application, executable by the processor 1505, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
[0058] The operating system 1553 manages the memory 1534 (1509, 1506) to ensure that each process or application running on the computer module 105 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 1500 of Fig. 15A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 1534 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 1500 and how such is used.
[0059] As shown in Fig. 15B, the processor 1505 includes a number of functional modules including a control unit 1539, an arithmetic logic unit (ALU) 1540, and a local or internal memory 1548, sometimes called a cache memory. The cache memory 1548 typically includes a number of storage registers 1544-1546 in a register section. One or more internal busses 1541 functionally interconnect these functional modules. The processor 1505 typically also has one or more interfaces 1542 for communicating with external devices via the system bus 1504, using a connection 1518. The memory 1534 is coupled to the bus 1504 using a connection 1519.
[0060] The application program 1533 includes a sequence of instructions 1531 that may include conditional branch and loop instructions. The program 1533 may also include data 1532 which is used in execution of the program 1533. The instructions 1531 and the data 1532 are stored in memory locations 1528, 1529, 1530 and 1535, 1536, 1537, respectively. Depending upon the relative size of the instructions 1531 and the memory locations 1528-1530, a particular instruction may be stored in a single memory location, as depicted by the instruction shown in the memory location 1530. Alternately, an instruction may be segmented into a number of parts, each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1528 and 1529.

[0061] In general, the processor 1505 is given a set of instructions which are executed therein. The processor 1505 waits for a subsequent input, to which the processor 1505 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1502, 1503, data received from an external source across one of the networks 1520, 1522, data retrieved from one of the storage devices 1506, 1509, or data retrieved from a storage medium 1525 inserted into the corresponding reader 1512, all depicted in Fig. 15A. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 1534.
[0062] The disclosed image registration arrangements use input variables 1554, which are stored in the memory 1534 in corresponding memory locations 1555, 1556, 1557. The arrangements produce output variables 1561, which are stored in the memory 1534 in corresponding memory locations 1562, 1563, 1564. Intermediate variables 1558 may be stored in memory locations 1559, 1560, 1566 and 1567.
[0063] Referring to the processor 1505 of Fig. 15B, the registers 1544, 1545, 1546, the arithmetic logic unit (ALU) 1540, and the control unit 1539 work together to perform sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 1533. Each fetch, decode, and execute cycle comprises:
(i) a fetch operation, which fetches or reads an instruction 1531 from a memory location 1528, 1529, 1530;
(ii) a decode operation in which the control unit 1539 determines which instruction has been fetched; and
(iii) an execute operation in which the control unit 1539 and/or the ALU 1540 execute the instruction.
[0064] Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 1539 stores or writes a value to a memory location 1532.
[0065] Each step or sub-process in the processes of Figs. 5 to 12 is associated with one or more segments of the program 1533 and is performed by the register section 1544, 1545, 1546, the ALU 1540, and the control unit 1539 in the processor 1505 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 1533.
[0066] The methods of image registration and alignment may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub-functions to be described. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
Overview
[0067] A general overview of a method 500 that can be used to address the problem of insufficient alignment features described in the preceding section is shown in Fig. 5. The method 500 begins at step 510, where the microscope 101 is appropriately set up to provide a desired operating environment with suitable initial optical, illumination and stage settings. The setup may be performed manually or automated via control by the computer 105. At a following step 520, an appropriate specimen 102 is loaded onto the microscope stage 110 such that a portion of the specimen 102 is in the field of view of the camera 103. For batch processing, this step is typically automated using a slide loader. At step 530, image tiles of the specimen 102 are captured in which the overlap region of a pair of adjacent tiles on a single XY layer is offset, in a direction along the XY plane, relative to the overlap region of a corresponding pair of adjacent tiles on an axially neighbouring XY layer. The purpose of this overlap offset is to introduce additional alignment features by broadening the size of the effective overlap region across successive layers in the Z direction. As a result, registration problems caused by insufficient alignment features are reduced. The overlap offset capturing step 530 will be described further in detail with reference to Figs. 6, 7, 8, 11, 12, and 13. The capture planes are substantially parallel, notwithstanding minor variations resulting from movement of the stage in the X, Y and Z directions.
[0068] Once the image tiles are captured by the overlap offset capturing step 530, they are transmitted to a computer 105. At step 540 the computer 105 takes all pairs of adjacent tiles in the 3D image array of Fig. 4A. The computer 105 then performs an image registration step 540 where the local alignment for each of these pairs of adjacent images is determined. This process is repeated for the next X, Y or Z adjacent pairing until all adjacent tile pairings are calculated. The alignment information gathered for all pairings is then used to calculate the alignment operations required to fit all the individual tiles together into a seamless mosaic. Details of the image registration step 540 will be described with reference to Fig. 9. Optionally, illumination correction over the tiles may be calculated at step 550. After this step, the alignment operations and illumination corrections can be applied to the tiles to form a single composite stitched image stack at step 560. These tile images and the operations required to align and display them represent a virtual slide. Alternatively, the tile images, the alignment operations, and the illumination corrections can be stored or transmitted, for example via the networks 1520 and 1522, and the image generated and viewed at a later time and/or at a remote display system.
[0069] Fig. 7B provides a graphical illustration of the overlap offset capturing step 530 of method 500 using simplified image stacks, in comparison to a traditional capture illustrated in Fig. 7A. Figs. 7A and 7B show two 2 x 1 x 3 image stacks (i.e., 2 tiles in the X direction, 1 tile in the Y direction and 3 tiles in the Z direction) 710 and 720, captured without and with overlap offset, respectively. The image stack 710 is a result of a conventional image capturing technique, in which multiple tiles are captured over a series of depths at each transverse location. Suppose tile 711 is captured with the camera 103 focussed at location (x1, y1, z1); then tiles 713 and 715 are captured at locations (x1, y1, z2) and (x1, y1, z3), respectively. At a next transverse location, tiles 712, 714 and 716 are captured at locations (x2, y1, z1), (x2, y1, z2) and (x2, y1, z3), respectively. As a result of the conventional capturing, overlap regions 717, 718 and 719 with a fixed width 740 exist between adjacent tile pairs 711 and 712, 713 and 714, and 715 and 716, respectively. Not only are the overlap regions 717, 718 and 719 the same size, they contain biological structures at the same transverse region of the specimen 102, albeit at different depths.
[0070] The arrangements presently disclosed generate the image stack 720 whose layers are offset relative to adjacent layers amongst the layers i, i+1, and i+2. Given a layer of horizontally adjacent tiles, a small offset Δx is introduced to the capture locations of tiles that are horizontally adjacent in a neighbouring layer. Similarly, for a layer of vertically adjacent tiles, a small offset Δy is introduced to the capture locations of tiles that are vertically adjacent in a neighbouring layer. Suppose tile 721 is captured with the camera 103 focussed at location (x1, y1, z1); then tiles 723 and 725 are captured at locations (x1 + Δx, y1, z2) and (x1 − Δx, y1, z3), respectively. At a next transverse location, tiles 722, 724 and 726 are captured at locations (x2, y1, z1), (x2 + Δx, y1, z2) and (x2 − Δx, y1, z3), respectively. As a result of the overlap offset capturing, overlap regions 727, 728 and 729 of the same width exist between adjacent tile pairs 721 and 722, 723 and 724, and 725 and 726, respectively. Significantly, as will be appreciated from Fig. 7B, and in contrast to the traditional approach seen in Fig. 7A, the areas of overlap 727, 728 and 729 within the corresponding capture planes do not themselves overlap across the various capture planes. In contrast to the conventional capturing method of Fig. 7A, the overlap regions 727, 728 and 729 of Fig. 7B contain biological structures at different transverse regions of the specimen 102. From an image registration point of view, overlap offset capturing allows alignment features within a wider transverse region 750 (Fig. 7B) of the specimen 102 than the transverse region 740 (Fig. 7A) to be used for local alignment measurements. The approach of Fig. 7B affords the same benefits as increasing the size of the overlap region, but without the cost associated with actually increasing the overlap region. It can be seen in Fig. 7B that the overlap regions 727, 728 and 729 for the overlap offset image stack 720 are the same size as the overlap regions 717, 718 and 719 of Fig. 7A for the conventionally captured image stack 710.
[0071 ] In the above descriptions, image capturing may be performed in any order as long as corresponding overlap regions in neighbouring layers are offset relative to each other.
Furthermore, Fig. 7B is a simplified diagram to illustrate the concept of overlap offset image capturing with a very small 2 x 1 x 3 image stack. It is clear that the same overlap offset image capturing method can be applied to a much larger image stack (e.g., 8 x 11 x 10).
[0072] Figs. 8A-8D further illustrate the concept of overlap offset capturing with a close up view of the tiles 721, 722, 723 and 724 of Fig. 7B across two layers in the Z direction. In the first layer, Layer (i), shown in Figs. 8A and 8B, the corresponding tiles 721 and 722 have an overlap region 727 that is represented by the shaded regions 810 and 820 respectively. Within the shaded regions 810 and 820, there are four pairs of image patches (e.g., 811 and 821). Due to the sparseness of the specimen, only one of the pairs of image patches, comprising patches 814 and 824, has any feature for alignment, being feature 893. The feature 893 in this example is considered to be insufficient for estimating the transform between the tiles 721 and 722 because the feature 893 is not seen to afford enough structure, and particularly enough structure within the patches 814 and 824, to discern statistically alignable features. In the next layer (i+1), shown in Figs. 8C and 8D, the corresponding tiles 723 and 724 are captured using the overlap offset capturing approach described above. The tiles 723 and 724 have an overlap region 728 that is represented by the shaded regions 830 and 840. Within the shaded regions 830 and 840, there are four pairs of image patches (e.g., 834, 844) associated with alignment features 893 and 890, allowing local alignment measurements to be performed. The local alignment measurements between the patches 834 and 844, and patches 835 and 845, provide for registration of the images 723 and 724. It will be observed that the patches 834 and 844 for the second area 728 (830, 840) of overlap in the Layer i+1 relate to the same alignable image feature 893 associated with the patches 814 and 824 for the first area 727 (810, 820) of overlap of the layer above, Layer i. However, as will also be observed from Figs. 8C and 8D, the patches 835 and 845 in the second area 728 (830, 840) of overlap are associated with an alignable feature 890 of the sparse specimen that is not present in the patches of the first area 727 (810, 820) of overlap seen in Figs. 8A and 8B.
[0073] As described above, in order to create the 3D image stack 420 it is necessary to perform registration between the layers by aligning patches in the inter-layer overlap regions. In Figs. 8A to 8D, tile 721 overlaps substantially with tile 723, both containing biological structures at mostly the same transverse region of the specimen 102. However, the biological structures in tile 721 may appear slightly different from the corresponding biological structures in tile 723 due to the different focal planes employed at capture time. It can be assumed that the step size in the Z direction is sufficiently small so that any changes in the biological structures will have minimal impact on registration accuracy, given the axial spread of the microscope point spread function.
[0074] Inter-layer registration begins by identifying a set of patches 850 (Fig. 8A) within strong alignment features (biological structures) 892, 894 and 895 in tile 721, and a correspondingly located set of patches 851 for the same alignment features 892, 894 and 895 correspondingly present in tile 723 (Fig. 8C). Notably, the alignment features 892, 894 and 895 are not present in the overlap regions 810 and 830 of the tiles 721 and 723 respectively. The patch pairs (e.g., 850 and 851) are used to determine local alignment measurements, which are then used to estimate the relative transform between the tiles 721 and 723. Similarly, for the tiles 722 and 724, similar identified patch pairs (e.g., 860 and 861), not present in the overlap regions 820 and 840, can be correlated to derive a set of local alignment measurements for estimating the relative transform between the tiles 722 and 724. Significantly, while direct registration between the tiles 721 and 722 cannot be performed due to the lack of alignment features in the overlap region 727 (810, 820), the tiles 721 and 722 can be registered indirectly through local alignment measurements gathered between the adjacent tile pairs: 721 and 723, 723 and 724, and 724 and 722. In this example, the overlap region 727 (810, 820) does not cover the same transverse region of the specimen 102 as the overlap region 728 (830, 840), so there is an overall larger transverse region of the specimen 102 available for image alignment. With this additional transverse region of the specimen 102, the likelihood of registration problems due to insufficient image features is reduced.
[0075] According to the present disclosure, adjacent images captured in the same image plane can be aligned using alignable image features contained in an area of overlap between those adjacent images. As seen in Figs. 8C and 8D, the patches 834-835 and 844-845 associated with the alignable features 890 and 893 can be used for aligning the image tiles 723 and 724. The specific adjacent images 721 and 722 cannot be directly aligned in the same manner because of the absence of alignable features in the area of overlap 727 (810, 820). Registration of the images 721 and 722 of the first layer i with the respective image tiles 723 and 724 of the next layer i+1 can however be performed using further alignable image features that are correspondingly present in the tiles 721-724. In the example of Figs. 8A-8D, those further image features are those associated with the patches 850, 851 and 860, 861. In this fashion, the specific image 721 can be aligned with the image 723 via the features 892, 894 and 895, for example. Similarly, the image 722 can be aligned with the image 724 via the feature 891. As such, since three forms of alignment now exist between pairs of images (721, 723; 723, 724; 722, 724), the remaining alignment between the images 721 and 722 can be determined.
First Implementation
[0076] Method 600, used at step 530 to perform overlap offset capturing of image tiles of the specimen 102, will now be described in further detail below with reference to Fig. 6. The method 600 is preferably implemented in software stored in the HDD 1510 and executed by the processor 1505 to control the stage 110 via the connection 108 and the camera 103 as parts of the microscope imaging system 100. A double loop structure is used to control the movement of the microscope stage 110 in a 3D array (K columns, L rows and M layers) to suitably place each portion of the specimen 102 in turn in the field of view of the camera 103 for image acquisition.

[0077] Method 600 begins at step 610 where the processor 1505 causes initialising of the microscope stage 110 to a default initial capture location x, y, z = (x0, y0, z0). At step 620, the microscope 101 is operated to move the focal plane to z. This, for example, may be achieved by moving the stage in the z-direction, or alternatively by adjusting the optics of the microscope 101. At step 630, the microscope stage 110 is moved via the connection 108 to a next transverse location x, y. The camera 103 is then operated at step 640, for example via a similar control connection from the computer 105 (not illustrated in Fig. 1), to capture an image tile of the specimen 102. The captured tiles 104 can, for example, be communicated to the computer 105 and stored in the HDD 1510, perhaps after temporary storage in the memory 1506. After image capture, the processor 1505 operates according to step 645 to increment the transverse capture location x, y based on a pre-defined path of scan order, via the connection 108. The preferred arrangement of step 645 is for the microscope stage 110 to move in a tile raster (comb) order. Alternative scan orders, such as the boustrophedon (or meander) order, for controlling the movement of the microscope stage 110 may be used to acquire image tiles of the specimen 102.
[0078] In step 650, the processor 1505 checks if there are further tiles to capture on the current focal plane according to the scan path. If there are, the processing of method 600 returns to step 630 to repeat tile capture, otherwise method 600 continues to step 660. At step 660, the processor 1505 checks if there are further layers to capture. If there are, the processing of method 600 proceeds to step 670, otherwise the processing of method 600 ends.
[0079] At step 670, the microscope focal plane is incremented, by one of the approaches discussed above, by a predetermined step size dz, which corresponds to the capture plane of a next layer of image tiles of the specimen 102. At step 680, the microscope stage 110 is reset to its initial transverse location (x0, y0). A small offset (Δx, Δy) is determined and applied to the transverse location x, y at step 690, so that:

(x, y) = (x0 + Δx, y0 + Δy)    (1)
[0080] Shifting the initial transverse capture location of a layer by (Δx, Δy) has the effect of shifting the entire layer of image tiles by the same amount. That is, the entire layer is offset with respect to the previous capture layer. In other words, overlap regions are duly offset by (Δx, Δy) relative to their corresponding counterparts in the previous layer. In this fashion, a centre point of overlap between image tiles is shifted between different planes of image capture. This step will be described in further detail below with reference to method 1100 and Fig. 11. After setting the initial capture location x, y, z of a new layer of image tiles in the steps 670, 680 and 690, method 600 of overlap offset image capturing repeats processing from step 620 in the manner described above.
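The double-loop capture order of steps 610 to 690 can be sketched as follows. This is only an illustrative sketch: the function and parameter names are hypothetical, the values are arbitrary examples, and a cumulative per-layer shift is used as one simple offset policy (the alternating +Δx/−Δx pattern of Fig. 7B would work equally well):

```python
def capture_positions(k_cols, l_rows, m_layers,
                      pitch_x, pitch_y, dz,
                      offset_x, offset_y,
                      x0=0.0, y0=0.0, z0=0.0):
    """Yield (x, y, z) capture locations with a per-layer transverse offset.

    Each successive layer's origin is shifted by (offset_x, offset_y),
    so overlap regions do not coincide across neighbouring layers.
    """
    positions = []
    for m in range(m_layers):
        # Shift the whole layer relative to the previous one (step 690).
        layer_x0 = x0 + m * offset_x
        layer_y0 = y0 + m * offset_y
        z = z0 + m * dz
        for l in range(l_rows):          # tile raster (comb) scan order
            for k in range(k_cols):
                positions.append((layer_x0 + k * pitch_x,
                                  layer_y0 + l * pitch_y,
                                  z))
    return positions
```

For example, `capture_positions(2, 1, 3, ...)` reproduces the geometry of the 2 x 1 x 3 stack 720 of Fig. 7B: each layer's two tiles sit at the same pitch, but the layer origins differ by (Δx, Δy).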
[0081] In a preferred arrangement, each captured tile is an image of size 5120 by 3840 pixels, the overlap is 100 pixels wide, and the focal plane step size in the Z direction is 1 micron.
[0082] Method 900, used at step 540 to perform image registration on image tiles 910 captured in the overlap offset image capturing step 530, will now be described in further detail below with reference to Fig. 9. The method 900 is preferably implemented in software stored in the HDD 1510 and executed by the processor 1505. A loop structure is employed by the method 900 to process each pair of adjacent tiles in turn, starting at step 920 which selects a next pair of adjacent tiles from the captured tiles 910, for example stored in the HDD 1510. The captured tiles 910 are typically stored in the HDD 1510 in a 3D tile array format essentially mirroring the format of capture, an example of which is the format 720 of Fig. 7B.
[0083] At step 920 the computer 105 accesses all pairs of adjacent tiles in the 3D tile array 720 (e.g., K x L x M) of Fig. 7B. These tiles are adjacent in one of the X, Y or Z directions. The following steps of the method 900 are discussed with reference to an example with two horizontally (X) adjacent tiles that are shown in Fig. 3C. The basic approach of the method 900 is to determine the distortion required to be applied to tile 312 such that the pixels in the overlap region 316 of tile 312 match the pixels in the overlap region 315 of tile 311.
[0084] This is performed as follows. The locations of small patches 313 within tile 311 are calculated and selected at step 930. The patch selection in tile 311 may be carried out in accordance with a number of different methods. One method that can be applied is to base the patch selection on a grid arrangement with a fixed number of patches. For example, as in Fig. 3C, a grid of five rows and one column may be applied to determine the locations of small patches 313. In an alternate method, the locations of small patches 313 may be determined by detecting local gradient maxima using techniques such as the Harris corner detector in order to minimise the transform estimation error between the tiles 311 and 312. At step 940 the locations of corresponding patches 314 in the adjacent tile 312 are then determined using an initial transform between the tiles 311 and 312 derived from prior knowledge (e.g., stage positions during capturing for the tiles 311 and 312), and selected. These corresponding patch locations are the locations of the patches in the first tile 311 offset by the expected offset between the tiles.
[0085] The specimen 102 is fixed in a rigid position and the tolerances on the optical and physical errors in the microscope 101 are known. Additionally, tolerances of the movement of the microscope stage 110 are well controlled, and typically cause errors of microns in shifts, and tens of milli-radians in rotation. Due to the tight tolerances, the patches can be positioned in a way that ensures a large overlap between the corresponding patches in both tiles.
Alternatively, a coarse alignment technique may be used to approximate the alignment between the images and the corresponding patch locations calculated with reference to this approximate alignment.
[0086] Next, in step 950, the shifts between patches are determined by a shift estimation method such as a correlation-based or gradient-based method. This shift estimation is seen with reference to Fig. 3D, which shows two patches from different tiles. The shift is the vector s = [sx, sy] of the amount in the horizontal and vertical axes that the patch 324 from tile 312 must be offset from the patch 323 from tile 311 to make the area where the patches overlap the most similar.
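The text does not fix a particular shift estimator, so the sketch below shows one common correlation-based realisation: an FFT cross-correlation whose peak gives the integer-pixel displacement between two equally sized patches (sub-pixel refinement, windowing and the gradient-based alternative are omitted; the function name is illustrative):

```python
import numpy as np

def estimate_shift(patch_a, patch_b):
    """Estimate the integer (row, col) shift aligning patch_b to patch_a.

    Returns (sy, sx) such that rolling patch_b by (sy, sx) best matches
    patch_a, found at the peak of the circular cross-correlation.
    """
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    # Cross-correlation computed in the frequency domain.
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates into the signed range [-N/2, N/2).
    return tuple(p if p < n // 2 else p - n for p, n in zip(peak, corr.shape))
```

Note the circular (wrap-around) nature of the FFT correlation: it is adequate here because, as paragraph [0085] explains, the stage tolerances guarantee the true shift is small relative to the patch size.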
[0087] This process of selecting small patches and estimating local shifts for each adjacent tile pair is repeated for the next X (columns), Y (rows) or Z (layers) adjacent pairing, until all adjacent tile pairings are calculated. This is assessed at step 960, which determines whether each adjacent pair of images has been processed. Where not, the method 900 returns to step 920. The alignment information gathered for all pairings is then used at step 970, which will be described in detail with reference to Fig. 10, to estimate the transforms required to fit all the individual tiles 910 together into a seamless mosaic. The number of tile pairs adjacent along the X axis is (K − 1) x L x M; similarly, the number adjacent along the Y axis is K x (L − 1) x M, and the number adjacent along the Z axis is K x L x (M − 1).
[0088] The transform between a pair of adjacent tiles can be represented by a coordinate transform such as an affine transform, a projective transform, or a rotation, scale and translation transform. In order to estimate the parameters of the coordinate transformation that maps the pixels in one tile to pixels in another tile, local shifts of small patches are measured. In the particular case of the affine transform, a pixel location in one tile, x = (x, y), where x and y are the horizontal and vertical coordinates respectively, is mapped to a pixel location in another tile, x' = (x', y'), by the following transform,

x' = a11·x + a12·y + tx
y' = a21·x + a22·y + ty    (2)

[0089] These affine parameters can be expressed as a transform parameter vector p given by Equation (3),

p = [a11, a12, a21, a22, tx, ty]^T    (3)

[0090] In the case of the projective transform, a pixel location in one tile, x = (x, y), where x and y are the horizontal and vertical coordinates respectively, is mapped to a pixel location in another tile, x' = (x', y'), by the following transform,

x' = (h11·x + h12·y + h13) / (h31·x + h32·y + h33)
y' = (h21·x + h22·y + h23) / (h31·x + h32·y + h33)    (4)

[0091] These projective parameters can be expressed as a transform parameter matrix Hm given by Equation (5),

Hm = [ h11 h12 h13 ; h21 h22 h23 ; h31 h32 h33 ]    (5)

[0092] It is assumed that the shift between the location of a pixel in tile 1 and the transformed location of the corresponding pixel in tile 2 for each of n locations in the tile can be measured. The shift of a pixel at a location xk is given by sk, and the vector of shift estimates for all n patches is given by Equation (6),

s = [s1^T, s2^T, ..., sn^T]^T    (6)

where the shift vectors for the kth location are the 2-component vectors of the x and y shifts at the patch locations, sk = [sx,k, sy,k]^T.
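As an illustration of the affine case, the mapping of Equations (2) and (3) can be written as a small NumPy helper. The parameter ordering p = [a11, a12, a21, a22, tx, ty] is assumed for the sketch and may differ from the exact layout intended in the text:

```python
import numpy as np

def affine_map(p, pts):
    """Map 2D points through the affine transform of Equations (2)-(3).

    p   : parameter vector [a11, a12, a21, a22, tx, ty]
    pts : array of shape (n, 2) holding (x, y) locations
    """
    a11, a12, a21, a22, tx, ty = p
    A = np.array([[a11, a12], [a21, a22]])
    # x' = A x + t, applied to every row of pts.
    return pts @ A.T + np.array([tx, ty])
```

For instance, the identity matrix with a translation of (5, −3) maps the point (2, 4) to (7, 1), and a 90-degree rotation matrix maps (1, 0) to (0, 1).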
[0093] The patch selection at step 930 may be performed in a number of different ways. In particular, a technique based on the gradient structure tensor allows the expected variance of the measured shifts to be estimated. Suitable patches for shift estimation are regions with the smallest variance.
[0094] Alternatively, other methods can be used to choose suitable patch locations. These may include using the Harris corner detector, kd-tree based adaptive gridding methods, or fast disk-covering non-maximal suppression methods.
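The gradient structure tensor criterion of paragraph [0093] can be sketched as below: patches whose tensor has two large eigenvalues contain gradient energy in two independent directions (corner-like content) and therefore give low-variance shift estimates. This is a simplified illustration (no tensor smoothing or windowing; the function name is hypothetical):

```python
import numpy as np

def patch_alignability(patch):
    """Score a patch by the smaller eigenvalue of its gradient structure tensor.

    A high score indicates gradients in two independent directions,
    i.e. a patch suitable for reliable shift estimation.
    """
    gy, gx = np.gradient(patch.astype(float))
    # Structure tensor entries, summed over the whole patch.
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    # Smaller eigenvalue of the 2x2 tensor [[jxx, jxy], [jxy, jyy]].
    trace, det = jxx + jyy, jxx * jyy - jxy * jxy
    return trace / 2 - np.sqrt(max((trace / 2) ** 2 - det, 0.0))
```

A flat patch scores zero, a straight edge also scores (near) zero because its gradients all point one way, and only corner-like content scores high, matching the intuition that shifts cannot be estimated along a featureless edge.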
[0095] Using the patch-shift estimates and the mathematical framework described here, it is possible to calculate a relative transform between any two adjacent tiles in the 3D array of captured tiles. The global transforms can then be estimated using measurements of the relative transforms between adjacent image tiles with a bundle adjustment method, which finds the optimal global transforms that best fit the pair-wise measurements. In particular, a bundle adjustment method based on a least-squares approach is preferred, as it permits ready application of robust estimation techniques as well as significantly reducing the impact of outliers in the shift estimates.
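For the simplest case of translation-only tile transforms, the bundle-adjustment-style least-squares step can be illustrated as follows. Each pairwise measurement constrains the difference of two global tile offsets, the first tile is pinned at the origin to remove the gauge freedom, and the stacked linear system is solved in the least-squares sense. This is a deliberately reduced sketch, not the full affine/projective estimator of method 1000:

```python
import numpy as np

def solve_global_offsets(n_tiles, measurements):
    """Estimate a global 2D offset per tile from pairwise shift measurements.

    measurements : list of (u, v, shift) where `shift` is the measured
                   2D offset of tile v relative to tile u.
    Tile 0 is fixed at the origin to make the system well-posed.
    """
    rows, rhs = [], []
    for u, v, shift in measurements:
        for axis in (0, 1):                 # one equation each for x and y
            row = np.zeros(2 * n_tiles)
            row[2 * v + axis] = 1.0         # encodes t_v - t_u = shift
            row[2 * u + axis] = -1.0
            rows.append(row)
            rhs.append(shift[axis])
    for axis in (0, 1):                     # gauge fixing: t_0 = (0, 0)
        row = np.zeros(2 * n_tiles)
        row[axis] = 1.0
        rows.append(row)
        rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol.reshape(n_tiles, 2)
```

With redundant measurements (e.g. the indirect 721 → 723 → 724 → 722 path of paragraph [0074] alongside any direct pair), the least-squares solution averages the inconsistencies, which is precisely what allows a featureless direct overlap to be bridged.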
[0096] The global transforms estimation process of step 970 for each image tile will now be described in detail with reference to the method 1000 of Fig. 10. The input to the method 1000 at step 1010 is a set of shift estimates derived from corresponding patch locations in adjacent tiles at step 950. The method 1000 begins by forming a least squares estimation framework with the set of shift estimates at step 1010. The process of forming the least squares estimation framework is described as follows:
[0097] For the 3D stitching problem, it is desired to calculate the global tile transforms T for each tile captured using the correspondences between each adjacent tile. For 3D stitching the correspondences are now in the X, Y, and Z directions. Measurements are made on the relative transforms between two adjacent tiles and used to estimate the absolute transforms.
Mathematically, when measurements are made in tile u compared to tile v, what is being measured is the relative transform from tile u to tile v. The relative tile transforms can again be written in terms of the absolute transforms that are specified by their transform parameters p_u and p_v as follows,

x_v = T_v^{-1}(T_u(x_u)),    (7)

where x_u is the coordinate in tile u and x_v is the corresponding coordinate location in tile v.
[0098] The estimation framework for the transformation parameters will be nonlinear due to the inverse, even if linear estimation techniques are used. Due to this, estimates of the transform parameters are made using a nonlinear least-squares framework in Cartesian coordinates.
[0099] To form a matrix problem for the least-squares solution of the shifts, the image tiles are ordered in the manner shown in Figs. 4A and 4B. The image stack 420 has L rows and K columns for each of the M layers. If a particular tile is labelled as k, then the comparison of horizontally-adjacent tiles occurs with tiles numbered k and k+1, vertically-adjacent tiles are numbered k and k+K, and depth-adjacent tiles are numbered k and k+N, where K is the number of image tiles in a row and N (= KL) is the number of image tiles in a depth layer. This numbering scheme does not handle the case of there being different numbers of tiles in different rows or depth layers; however, the algorithm can be adapted to this case with an appropriate change in numbering.
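The numbering scheme above can be sketched as a small helper. This is an illustrative sketch only: the function name and the 1-based tile indexing are assumptions, not part of the described method.

```python
def adjacent_tiles(k, K, L, M):
    """Neighbour indices of tile k under the row-major numbering above:
    horizontal neighbour k+1, vertical neighbour k+K, depth neighbour k+N,
    or None where no neighbour exists.  K = tiles per row, L = rows per
    layer, M = layers; tiles are numbered 1..M*K*L."""
    N = K * L                          # tiles per depth layer
    layer_pos = (k - 1) % N            # 0-based position within the layer
    horiz = k + 1 if (layer_pos % K) != K - 1 else None   # not last column
    vert = k + K if layer_pos < N - K else None           # not last row
    depth = k + N if k <= (M - 1) * N else None           # not last layer
    return horiz, vert, depth
```

For a 3x2 layer grid with 2 layers, tile 1 has neighbours (2, 4, 7), while the last tile (12) has none.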
[0100] It is now possible to set up the problem when given shifts s_uv, measured in tile u at a location x_u with reference to a location x_v in tile v. Equation (7) can then be written

s_uv = f(x_u; p_u, p_v) = T_v^{-1}(T_u(x_u; p_u); p_v) − x_u,    (8)

where f(x_u; p_u, p_v) is the nonlinear function that gives the shift between coordinates x_u and x_v in tiles u and v respectively, p_k are the transform parameters for tile k, and T_k(x; p_k) is the transform for tile k applied to the coordinate x.
[0101] In order to estimate the transforms from shifts at specific locations between tiles, an assumption is made that it is possible to measure the shifts at several locations between all adjacent tiles. These shift measurements are written s_uv^(j) to denote the jth shift measurement between tiles u and v. These measurements are made using shift estimation between patches. The underlying problem can now be re-stated: it is desired to estimate the transform parameters p_u for all tiles u ∈ {1, ..., MN} that minimise the sum of square differences (SSD) between the shift measurements and the shifts predicted by the estimated transforms over all tiles and patches, given by the equation

E(p) = Σ_{(u,v)} Σ_j ‖ s_uv^(j) − f(x_u^(j); p_u, p_v) ‖²,    (9)

where the outer sum runs over all pairs of adjacent tiles.
[0102] Equation (9) can be formulated as a standard nonlinear least-squares problem by writing the shifts in vector form, ordered as a vector containing all x-adjacent shifts between all tiles first, then all y-adjacent shifts, and finally all z-adjacent shifts. Note that this ordering is arbitrary; other orderings could be used that may improve the speed of solution of this matrix, however the final solution will be identical.

s = [..., s_uv^T, ...]^T,    (10)

where s_uv represents the vector of shifts at the set of q_uv patches between tiles u and v. The shifts are arbitrarily enumerated starting with shifts between horizontally adjacent tiles, followed by shifts between vertically adjacent tiles, and then shifts between depth-adjacent tiles. Similarly, the vector function of the point correspondences is written as,
f(p) = [..., f_uv(p_u, p_v)^T, ...]^T,    (11)

where the transform parameters for all MN (= MKL) tiles have been written as a vector,

p = [p_1^T, p_2^T, ..., p_MN^T]^T.    (12)
[0103] This gives the vector nonlinear least-squares problem, which can be written in vector notation as:

p̂ = argmin_p ‖ s − f(p) ‖².    (13)
[0104] The nonlinear least-squares framework set up at step 1010 can then be solved with the Gauss-Newton method at step 1040, which gives a solution for the parameters p by iteratively solving the linearised normal equations,

J(p_i)^T J(p_i) Δp_i = J(p_i)^T r_i,    (14)

where the residual of the ith iteration is r_i = s − f(p_i), the solution vector update is Δp_i = p_{i+1} − p_i, and the Jacobian of the system function is given by J(p) for the transform parameter vector p.
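The Gauss-Newton iteration can be illustrated in miniature. This is a generic sketch on a toy model with illustrative names, not the tile-transform estimation itself: at each iteration the linearised normal equations are solved for the parameter update.

```python
import numpy as np

def gauss_newton(f, jac, s, p0, iters=20):
    """Minimal Gauss-Newton loop: at each iteration solve the linearised
    normal equations  J^T J dp = J^T r  with residual r = s - f(p),
    then update p.  f, jac, s, p0 are user-supplied."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = s - f(p)                      # current residual
        J = jac(p)                        # Jacobian of f at p
        dp = np.linalg.solve(J.T @ J, J.T @ r)
        p = p + dp
    return p

# Toy problem: recover (a, b) in y = a*x + b*x^2 from noiseless samples.
x = np.linspace(0.0, 1.0, 5)
f = lambda p: p[0] * x + p[1] * x**2
jac = lambda p: np.stack([x, x**2], axis=1)
s_meas = f(np.array([2.0, -1.0]))
p_hat = gauss_newton(f, jac, s_meas, [0.0, 0.0])
```

Because the toy model is linear in its parameters, the loop converges in a single step; for the genuinely nonlinear transform problem several iterations are needed.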
[0105] The exact form of the Jacobian is complicated; however, the arrangement of the block matrix can be understood by noting that each shift, s_uv^(j), is only dependent upon the transform parameters of the adjacent tiles, p_u and p_v, and therefore the matrix is relatively sparse. The terms in the Jacobian may be generated using a computational algebra package, such as Mathematica™.
[0106] The Jacobian can further be written as a block matrix with reference to the smaller Jacobians of the individual shift estimates, J_uv. The blocks are represented by the Jacobian of f_uv, as defined in Equation (8), as

J_uv = [ ∂f_uv/∂p_u   ∂f_uv/∂p_v ].    (15)
[0107] As seen in Fig. 10, two optional steps 1020 and 1030 may be applied prior to solving the least squares framework at step 1040 to improve the estimation performance. The above least squares framework may suffer from poor matrix conditioning, where shift estimation errors are amplified. One way of addressing this problem is to apply regularisation at step 1020. In particular, regularisation is necessary for the solution of the projective transform estimation in the case that the patches are vertically and horizontally aligned. In addition, regularisation is useful when robust estimation is used and large numbers of measurements are removed. In this case regularisation will select the transform that is closest to nominal in the degrees of freedom that are not defined by the measurements.
[0108] One form of regularisation that may be used is Tikhonov regularisation, also known as ridge regression. The goal of Tikhonov regularisation is to minimise the sum of squared differences of Equation (13) for the parameter vector estimate p̂, subject to the constraint,

‖ L (p − p_reg) ‖² ≤ c,    (17)

where L is the regularisation matrix and p_reg is the vector of nominal transform parameters of the problem (i.e. the best a-priori estimate of the transform parameters in the absence of measurements). Typical choices for the regularisation matrix L are the identity matrix, in which case the constraint is on the norm of the parameter vector itself, and a finite difference matrix, in which case the constraint is on the smoothness of the parameter vector. In the present case the identity matrix is used as it is desired to find solutions close to the nominal transform parameters.
[0109] It can be shown that solving Equation (13) subject to Equation (17) is equivalent to minimising the following Lagrange multiplier problem,

p̂ = argmin_p { ‖ s − f(p) ‖² + λ ‖ L (p − p_reg) ‖² },    (18)

where λ is the Tikhonov parameter. This can again be solved using the Gauss-Newton formulation, which gives the linearised normal equations,

( J^T J + λ L^T L ) Δp_i = J^T r_i − λ L^T L (p_i − p_reg),    (19)
where p_i is the current iterate of the parameter estimate, p_{i+1} = p_i + Δp_i is the next iterate, and r_i = s − f(p_i) is the current residual. The Gauss-Newton solution is solved iteratively starting from a suitable initial guess for the parameters, p_0, typically given by the nominal solution p_reg. Often it is advantageous to use the Levenberg-Marquardt method or another trust-region method to improve the global convergence of the problem.
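A single regularised Gauss-Newton step of this kind can be sketched as follows. The function and its arguments are illustrative assumptions; the sketch shows how, with L set to the identity, degrees of freedom not constrained by the measurements stay at their nominal values instead of being amplified by poor conditioning.

```python
import numpy as np

def gn_step_tikhonov(J, r, p, p_reg, lam, L=None):
    """One Gauss-Newton step with Tikhonov regularisation: solves
    (J^T J + lam L^T L) dp = J^T r - lam L^T L (p - p_reg).
    With L = I the step is pulled toward the nominal parameters p_reg."""
    L = np.eye(len(p)) if L is None else L
    LtL = L.T @ L
    A = J.T @ J + lam * LtL
    b = J.T @ r - lam * LtL @ (p - p_reg)
    return np.linalg.solve(A, b)

# Demo: a rank-deficient Jacobian -- only the first parameter is measured.
# The unmeasured second parameter stays at its nominal value (zero here)
# instead of making the system singular.
J = np.array([[1.0, 0.0]])
r = np.array([1.0])
dp = gn_step_tikhonov(J, r, np.zeros(2), np.zeros(2), lam=1e-3)
```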
[0110] Although correlation-based shift estimation methods perform very well for most images, even in the presence of noise, large true shifts between patches combined with poor texture patterns lead to failure of the shift estimate. When the shift estimate fails, it can return a very large shift estimate that has no bearing on the true shift; these failures are outliers in the measurements.
[0111] Also optionally, estimation of parameters can be performed in the presence of outliers with robust estimation methods at step 1030. These methods, in general, detect the outliers and either eliminate them from the problem or down-weight them in a weighted least-squares framework.

[0112] M-estimators are a popular class of robust estimation techniques that can easily be calculated using a weighted least-squares framework. The M-estimator method chooses to minimise the objective function,

E(p) = Σ_k ρ(r_k),    (20)
where r_k is the kth element of the vector of residuals of Equation (13), given by

r_k = [ s − f(p) ]_k.    (21)
[0113] Different estimators result from different choices of the function ρ. When ρ(x) = x² the method is equivalent to the least-squares method, and other choices of the function ρ give different characteristics for the handling of outliers. In particular, a choice of ρ that is commonly used to cope with the presence of outliers is that determined from a bi-weight function,

ρ(x) = (η²/6) [1 − (1 − (x/η)²)³]  for |x| ≤ η,   ρ(x) = η²/6  for |x| > η,    (22)

where η is a constant that controls the level of outlier rejection.
[0114] It can be shown that the solution of Equation (20) is equivalent to the minimum of the following iteratively reweighted least-squares problem,

p̂ = argmin_p Σ_k w_k r_k²,    (23)

where the weights are given by,

w(r) = ρ'(r) / r.    (24)
[0115] This allows fast, efficient calculation of the solution using standard weighted least-squares algorithms.

[0116] In the case of the bi-weight function of Equation (22) the weights are given by,

w(r) = (1 − (r/η)²)²  for |r| ≤ η,   w(r) = 0  for |r| > η.    (25)
[0117] Thus the M-estimator down-weights measurements that have a large deviation from the measurements predicted using the current model estimate, being the current estimate of the M-estimator during iteration. The bi-weight function has the property that measurements with residuals above the cut-off value of η are completely removed from the problem. The bi-weight function is not the only function that can be used; the following function, which changes asymptotically from least-squares weighting for low residuals to zero weighting for high residuals, is given by,

w(r) = 1 / (1 + (r/η)²),    (26)

where η can be related to the probability of outliers in the tails of the distribution.
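The two weighting functions can be sketched directly. Note the soft alternative shown here is a Cauchy-like weight assumed for illustration, since the exact form used in Equation (26) is not reproduced in this text.

```python
def biweight_w(r, eta):
    """Tukey bi-weight of Equation (25): full weight near zero residual,
    exactly zero beyond the cut-off eta (outliers removed entirely)."""
    return (1.0 - (r / eta) ** 2) ** 2 if abs(r) < eta else 0.0

def soft_w(r, eta):
    """A soft, Cauchy-like alternative (an assumption, not the document's
    exact function): ~1 for small residuals, decaying asymptotically
    toward zero for large ones rather than cutting off sharply."""
    return 1.0 / (1.0 + (r / eta) ** 2)
```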
[0118] The M-estimator method can be solved by use of a weighted iterative least-squares method, now described. The weighted regularised least-squares problem can be written as

p̂ = argmin_p { (s − f(p))^T W (s − f(p)) + λ ‖ L (p − p_reg) ‖² },    (27)

where W is the weighting matrix. The associated linearised normal equations can be derived,

( J^T W J + λ L^T L ) Δp_i = J^T W r_i − λ L^T L (p_i − p_reg),    (28)
where p_i is the current iterate of the parameter estimate, p_{i+1} = p_i + Δp_i is the next iterate, and r_i = s − f(p_i) is the current residual.
[0119] The problem is solved using two nested iterations at step 1040. First the nonlinear problem of Equation (28) is solved to convergence using a weighting matrix set to the identity matrix, or set using the best guess of the reliability of the measurements, for instance using correlation weights from the shift estimation. This gives an initial model parameter estimate that can be used to calculate the weighting matrix using Equation (25). The next parameter estimate is then calculated by iterating Equation (28), and so on. Further, the nonlinear iterations do not need to be solved to the same precision at each weighting iteration; the initial transform estimates used to update the weighting matrix can be solved to a significantly lower precision.
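The nested-iteration scheme can be illustrated on a deliberately tiny robust-fitting problem: estimating a single scale parameter from samples that include one gross outlier. All names are illustrative; this is a sketch of the outer reweighting loop, not the tile-transform solver.

```python
import numpy as np

def irls_robust_fit(x, s, eta=3.0, outer=10):
    """Outer loop: recompute bi-weight weights from current residuals.
    Inner solve: weighted normal equations (one-shot here, since the toy
    model s = p*x is linear in p)."""
    w = np.ones_like(s)                   # start with identity weighting
    p = 0.0
    for _ in range(outer):
        J = x.reshape(-1, 1)
        A = (J * w[:, None]).T @ J        # J^T W J
        b = (J * w[:, None]).T @ s        # J^T W s
        p = float(np.linalg.solve(A, b)[0])
        r = s - x * p                     # residuals under current model
        # bi-weight: zero weight beyond the cut-off eta
        w = np.where(np.abs(r) < eta, (1.0 - (r / eta) ** 2) ** 2, 0.0)
    return p

x = np.ones(5)
s = np.array([1.0, 1.1, 0.9, 1.0, 10.0])  # last sample is an outlier
p_hat = irls_robust_fit(x, s)
```

A plain least-squares fit of this data would be dragged to 2.8 by the outlier; the reweighted estimate settles near 1.0 because the outlier's weight goes to zero after the first iteration.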
[0120] Step 1050 checks if the criterion for convergence of the least squares problem is reached, in which case the processing of method 1000 ends; otherwise processing returns to step 1040 for at least one further iteration.
[0121] A method 1100, used at step 690 to determine the size of the overlap offset (Δx, Δy), will now be described in further detail below with reference to Fig. 11. At step 1110 an offset direction index d is calculated based on the current layer number m, where m ∈ {1, ..., M}. The offset direction index is given by,

d = ((m − 1) mod N_d) + 1,

where N_d is the total number of offset settings. In the preferred arrangement, N_d is 5 for 5 different offsets.
[0122] At step 1120, the processor 1505 checks if the offset direction index is 1, in which case processing moves to step 1125, otherwise it continues to step 1130. At step 1125, the overlap offset (Δx, Δy) is set to (0, 0) and processing continues at step 1170.
[0123] Step 1130 checks if the offset direction index is 2, in which case processing moves to step 1135, otherwise it continues to step 1140. At step 1135, the overlap offset (Δx, Δy) is set to (x_off, 0) and processing continues at step 1170.
[0124] Step 1140 checks if the offset direction index is 3, in which case processing moves to step 1145, otherwise it continues to step 1150. At step 1145, the overlap offset (Δx, Δy) is set to (0, y_off) and processing continues at step 1170.
[0125] Step 1150 checks if the offset direction index is 4, in which case processing moves to step 1155, otherwise it continues to step 1160. At step 1155, the overlap offset (Δx, Δy) is set to (−x_off, 0) and processing continues at step 1170.

[0126] At step 1160, the overlap offset (Δx, Δy) is set to (0, −y_off) and processing continues at step 1170. At step 1170, the overlap offset (Δx, Δy) is applied to the transverse location x, y as described above in Equation (1). Method 1100 ends after step 1170.
[0127] In the preferred arrangement, x_off is 100 pixels and y_off is also 100 pixels.
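Steps 1110 to 1170 amount to cycling through five offset settings by layer number. A minimal sketch, assuming the direction index simply cycles through the N_d settings with the layer number (the function name and default values are illustrative):

```python
def overlap_offset(m, x_off=100, y_off=100, n_d=5):
    """Overlap offset (dx, dy) for layer m (1-based), cycling through the
    N_d = 5 settings of steps 1120-1160: none, +X, +Y, -X, -Y."""
    d = (m - 1) % n_d + 1              # offset direction index, 1..N_d
    return {1: (0, 0),
            2: (x_off, 0),
            3: (0, y_off),
            4: (-x_off, 0),
            5: (0, -y_off)}[d]
```

Successive layers thus shift the overlap region in rotating directions, so that over N_d layers the union of the overlap areas covers a wider strip of the specimen.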
[0128] Fig. 12 is a schematic flow diagram that illustrates a method 1200 of determining a suitable overlap offset size (x_off or y_off) to be used at step 690. Method 1200 is an offline technique for statistically determining a maximum distance between two regions of a particular type of specimen with sufficient biological structures for image alignment. This maximum distance may be referred to as the feature gap of the specimen, and provides for the predetermination of the overlap offset size.
[0129] A loop structure is used in the method 1200 to analyse the sparseness of the specimen with a moving-window approach, in which the patch candidate count is determined at each window region across an image tile to determine the maximum region with insufficient biological structures for image alignment. The processing of method 1200 begins with a tile of the specimen 102 that is, for example, retrieved from the HDD 1510. Optionally, the tile can be downsampled at step 1210 to a lower resolution to enable faster computation. Typically a downsampling of 4 times is used, which the inventors have found increases the speed of computation without significantly reducing accuracy. At step 1220, a window region of the same size as the overlap 316 is selected at the left border of the tile. The number of patch candidates Pc is determined at step 1230 by counting the number of patch locations with significant alignable features in the window region. The process of determining patch candidates may be implemented by applying a Harris corner detector to the window region to generate a list of corner locations. Corner locations that have a corner strength greater than a predefined threshold are potential patch locations. The list of potential patch locations is then sorted according to corner strength in descending order. The list of patch locations is further filtered by deleting points which are within S (the minimum patch separation distance) pixels, e.g. 100 pixels, of another stronger corner. The remaining patches become the patch candidates, giving the patch count Pc. A one-dimensional (1D) profile of the patch candidate count per window region along the X direction is updated at step 1240. Step 1250 then checks if there are further window regions to process, in which case processing returns to step 1220, otherwise the processing of method 1200 moves to step 1260.
[0130] At step 1220, the window region is moved to the right along the X direction by a predefined amount dw. The size of dw depends on whether the optional step 1210 was applied. In the preferred arrangement with a downsampling of 4 times, dw can be between 1 and 5 pixels.
[0131] At step 1260, the 1D patch candidate count profile is analysed: regions along the 1D profile where the patch candidate count Pc is below the required patch number P are identified. The region with the largest distance without sufficient alignment structure is determined, which represents the maximum feature gap between biological structures within the specimen 102. The overlap offset size x_off may be set according to this feature gap. The same method is applied in step 1260 in the Y direction to determine the overlap offset size y_off. Method 1200 may be applied a number of times to randomly selected tiles of a given type of specimen to improve measurement accuracy.
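The profile analysis of step 1260 reduces to finding the longest run of window positions whose patch candidate count falls below the required number P. A minimal sketch, with illustrative names:

```python
def feature_gap(profile, p_required, dw=1):
    """Length (in pixels) of the longest contiguous run of window
    positions whose patch-candidate count is below p_required, i.e. the
    maximum feature gap.  profile is the 1D patch-count profile and dw
    the window step in pixels."""
    best = run = 0
    for pc in profile:
        run = run + 1 if pc < p_required else 0   # extend or reset the run
        best = max(best, run)
    return best * dw
```

For example, with a window step of 10 pixels and required patch count 3, a profile containing a longest below-threshold run of three windows yields a feature gap of 30 pixels.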
[0132] Alternatively, the determined feature gap may be used to set the offset direction as in method 1100 of Fig. 11. For example, if the overlap is 100 pixels wide, the overlap offset (x_off or y_off) is 100 pixels, and the feature gap is about 350 pixels, then the number of offset settings N_d is increased to 9 with the following sequence of offset settings:
[Table of nine offset settings — image not reproduced]
[0133] In the above sequence, the effective overlap region across the layers is about five times the size of the default overlap (500 pixels versus 100 pixels), which should cover the maximum feature gap of 350 pixels.

[0134] Alternatively, the determined feature gap may be used to set both the offset direction and size as in method 1100 of Fig. 11. For example, if the overlap is 100 pixels wide and the feature gap is about 350 pixels, then the number of offset settings N_d may be set to 4 and the overlap offset (x_off or y_off) set to 350 pixels, with the following sequence of offset settings:
[Table of four offset settings — image not reproduced]
Second Implementation
[0135] In the first implementation, overlap offset capture 530 is achieved using stage movements alone. An alternative implementation is to use not only the microscope stage 110 but also a dedicated sensor arrangement with multiple sensors that can capture multiple offset tiles at the same time. Fig. 13 provides an illustration of a microscope 1300 with a suitable sensor arrangement that may be used at step 530 of the method 500. This microscope 1300 includes a stage 1310, on which a specimen 1320 is placed. Light is transmitted through the specimen 1320, then through one or more lenses 1330, split by beam splitters 1340 and 1345, and focused onto multiple sensors 1350, 1360 and 1390. An illustrative light path 1370 through the centre of the lens 1330 is shown, which is split into two paths (1372 and 1374). The light path 1372 is further split into two (1376 and 1378). In this arrangement the sensors 1350, 1360 and 1390 are arranged to focus on three different depths, or three adjacent layers, of the specimen 1320. Furthermore, the capture field of view of the sensor 1350 is offset by Δx, shown as 1380, in the X direction relative to the field of view of the sensor 1360. As a result, the specific capture locations of the sensors 1350 and 1360 are (x_i + Δx, y_j, z_k) and (x_i, y_j, z_{k+1}), respectively.
[0136] Similarly, the capture field of view of the sensor 1390 is offset by −Δx, shown at 1395, in the X direction relative to the field of view of the sensor 1360. As a result, the actual capture locations of the sensors 1390 and 1360 are (x_i − Δx, y_j, z_{k+2}) and (x_i, y_j, z_{k+1}), respectively.
[0137] With this multi-sensor arrangement, the method 1100 of Fig. 11 may be simplified because the overlap offset in the X direction is built into the sensor arrangement, and thus only stage movements in the Y direction are required. A major benefit of this approach is the improvement in capture speed, with two image tiles being captured at any one time.
Third Implementation
[0138] In another implementation, the sensor arrangement in Fig. 13 can be extended to include an offset in the Y direction such that, at each transverse location of the stage 1310 with the specimen 1320 in the field of view of the sensors, three image tiles are captured at (x_i + Δx, y_j, z_k), (x_i, y_j + Δy, z_{k+1}) and (x_i, y_j, z_{k+2}), respectively. This way both overlap offsets in the X and Y directions are built into the sensor arrangement, thus removing or minimising the need to add small offsets during image capturing. Moreover, with additional sensors capturing speed is further increased. Note that this implementation is not illustrated in Fig. 13 as the Y direction is perpendicular to the representation of the microscope 1300.
[0139] It should be noted that the above descriptions of multi-sensor arrangements serve to illustrate how overlap offsets can be built into a capturing system; however, it will be apparent to those skilled in the art that alternative ways of arranging multiple sensors can be practised without departing from the scope and spirit of the use of multi-sensors for performing overlap offset capturing.
INDUSTRIAL APPLICABILITY
[0140] The arrangements described are applicable to the computer and data processing industries, and particularly for the capture of images in digital microscopy. For example, whilst the arrangements described afford advantages for imaging biological specimens with sparsely separated biological structures, the arrangements are generally applicable to imaging, and particularly image stitching of, microscope images.
[0141 ] The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
[0142] (Australia Only) In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.

CLAIMS:
1. A method of registering a plurality of images of a three dimensional specimen captured by a microscope, said method comprising the steps of:
capturing a first set of images on a first capture plane of the specimen, said first set including two images having a first area of overlap;
capturing a second set of images on a second capture plane of the specimen, the second capture plane being substantially parallel to the first capture plane, said second set including two images having a second area of overlap that is offset from the first area of overlap in a direction along the capture planes so as to include in the second area of overlap at least one first alignable image feature not present in the first area of overlap;
aligning the two images in the second set using the at least one first alignable image feature in the second area of overlap; and
aligning at least the two images in the first set using the alignment of the two images of the second set and second alignable image features present in each of the two images in the first set and correspondingly present in each of the two images in the second set.
2. A method according to claim 1 , wherein the area of overlap is formed along an edge of each of the captured images.
3. A method according to claim 1 wherein a centre point of overlap is shifted between the different capture planes.
4. A method according to claim 1, wherein the first area of overlap does not overlap with the second area of overlap.
5. A method according to claim 1, further comprising aligning a first specific image of the first set in the first capture plane with a second specific image of the second set in the second capture plane by considering at least one third alignable feature not present in the overlap regions of the first and second specific images.
6. A method according to claim 5 wherein the at least one third alignable feature is one of the second alignable features and the alignment of the specific images forms part of the alignment of the two images of the first set.
7. A method according to claim 6, further comprising aligning a third specific image of the first set in the first capture plane with a fourth specific image in the second set in the second capture plane by considering at least one fourth alignable feature not present in the overlap regions of the third and fourth specific images, wherein
the at least one fourth alignable feature is one of the second alignable features, the first and third specific images comprise the two images having the first area of overlap in the first capture plane,
the second and fourth specific images comprise the two images having the second area of overlap in the second capture plane, and
the alignment of the two images of the first set derives from the alignment of the first and second specific images, the alignment of the second and fourth specific images, and the alignment of the third and fourth specific images.
8. A method according to claim 1, wherein the offset of the second area of overlap is determined based on a distribution of patches in one dimension in images along a previous capture plane.
9. A non-transitory computer readable storage medium having a program recorded thereon, the program being executable by a processor to register a plurality of images of a three dimensional specimen captured by a microscope, said program comprising the steps of:
code for capturing a first set of images on a first capture plane of the specimen, said first set including two images having a first area of overlap;
code for capturing a second set of images on a second capture plane of the specimen, the second capture plane being substantially parallel to the first capture plane, said second set including two images having a second area of overlap that is offset from the first area of overlap in a direction along the capture planes so as to include in the second area of overlap at least one first alignable image feature not present in the first area of overlap;
code for aligning the two images in the second set using the at least one first alignable image feature in the second area of overlap; and
code for aligning at least the two images in the first set using the alignment of the two images of the second set and second alignable image features present in each of the two images in the first set and correspondingly present in each of the two images in the second set.
10. A computer readable storage medium according to claim 9, wherein the area of overlap is formed along an edge of each of the captured images.
11. A computer readable storage medium according to claim 9 wherein a centre point of overlap is shifted between the different capture planes.
12. A computer readable storage medium according to claim 9, wherein the first area of overlap does not overlap with the second area of overlap.
13. A computer readable storage medium according to claim 9, further comprising code for aligning a first specific image of the first set in the first capture plane with a second specific image of the second set in the second capture plane by considering at least one third alignable feature not present in the overlap regions of the first and second specific images.
14. A computer readable storage medium according to claim 13 wherein the at least one third alignable feature is one of the second alignable features and the alignment of the specific images forms part of the alignment of the two images of the first set.
15. A computer readable storage medium according to claim 14, further comprising aligning a third specific image of the first set in the first capture plane with a fourth specific image in the second set in the second capture plane by considering at least one fourth alignable feature not present in the overlap regions of the third and fourth specific images, wherein
the at least one fourth aiignable feature is one of the second aiignable features, the first and third specific images comprise the two images having the first area of overlap in the first capture plane,
the second and fourth specific images comprise the two images having the second area of overlap in the second capture plane, and
the alignment of the two images of the first set derives from the alignment of the first and second specific images, the alignment of the second and fourth specific images, and the alignment of the third and fourth specific images.
16. A computer readable storage medium according to claim 9, wherein the offset of the second area of overlap is determined based on a distribution of patches in one dimension in images along a previous capture plane.
17. A microscope image registration system comprising:
a microscope having a controllable stage;
an imaging sensor configured to capture images of a three-dimensional specimen mounted to the stage;
a processor associated with a memory, the processor being coupled to the imaging sensor and the stage; and
a program stored in the memory and executable by the processor to register a plurality of images of the specimen captured by the imaging sensor, said program comprising:
code for capturing a first set of images on a first capture plane of the specimen, said first set including two images having a first area of overlap;
code for capturing a second set of images on a second capture plane of the specimen, the second capture plane being substantially parallel to the first capture plane, said second set including two images having a second area of overlap that is offset from the first area of overlap in a direction along the capture planes so as to include in the second area of overlap at least one first alignable image feature not present in the first area of overlap;
code for aligning the two images in the second set using the at least one first alignable image feature in the second area of overlap; and
code for aligning at least the two images in the first set using the alignment of the two images of the second set and second alignable image features present in each of the two images in the first set and correspondingly present in each of the two images in the second set.
18. The system according to claim 17, wherein the area of overlap is formed along an edge of each of the captured images, a centre point of overlap is shifted between the different capture planes, and wherein the first area of overlap does not overlap with the second area of overlap.
19. The system according to claim 18, the program further comprising:
code for aligning a first specific image of the first set in the first capture plane with a second specific image of the second set in the second capture plane by considering at least one third alignable feature not present in the overlap regions of the first and second specific images, wherein the at least one third alignable feature is one of the second alignable features and the alignment of the specific images forms part of the alignment of the two images of the first set.
20. The system according to claim 19, the program further comprising:
code for aligning a third specific image of the first set in the first capture plane with a fourth specific image in the second set in the second capture plane by considering at least one fourth alignable feature not present in the overlap regions of the third and fourth specific images, wherein
the at least one fourth alignable feature is one of the second alignable features, the first and third specific images comprise the two images having the first area of overlap in the first capture plane,
the second and fourth specific images comprise the two images having the second area of overlap in the second capture plane,
the alignment of the two images of the first set derives from the alignment of the first and second specific images, the alignment of the second and fourth specific images, and the alignment of the third and fourth specific images, and
the offset of the second area of overlap is determined based on a distribution of patches in one dimension in images along a previous capture plane.
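The claims describe deriving the alignment of two overlapping tiles in one capture plane by chaining alignments through a second plane whose overlap region is offset. A minimal sketch of that idea follows, using phase correlation to estimate the translation between tile pairs and simple vector addition to compose the chain (first↔second, second↔fourth, third↔fourth specific images). This is an illustrative reconstruction, not the patent's actual implementation; all variable names and the translational motion model are assumptions.

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the integer (dy, dx) translation between two equally
    sized image patches by phase correlation: the peak of the inverse
    FFT of the normalised cross-power spectrum marks the shift such
    that np.roll(ref, (dy, dx), axis=(0, 1)) ~= mov."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    cross /= np.abs(cross) + 1e-12          # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), float)
    dims = np.array(corr.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    peak[peak > dims / 2] -= dims[peak > dims / 2]
    return peak

# With a translational model, the alignment of tiles A1 and A2 in the
# first plane can be composed through tiles B1 and B2 in the second
# plane (whose overlap is offset), as in claim 20:
shift_a1_b1 = np.array([1.0, 2.0])    # A1 -> B1, features outside overlaps
shift_b1_b2 = np.array([0.0, 50.0])   # B1 -> B2, via the second plane's overlap
shift_b2_a2 = np.array([-1.0, -2.0])  # B2 -> A2, features outside overlaps
shift_a1_a2 = shift_a1_b1 + shift_b1_b2 + shift_b2_a2
```

For real microscope tiles, a subpixel refinement step and a more general transform (e.g. rigid or affine) would replace the pure integer translation assumed here.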
PCT/AU2014/001148 2013-12-23 2014-12-19 Overlapped layers in 3d capture WO2015095912A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2013273832A AU2013273832B2 (en) 2013-12-23 2013-12-23 Overlapped layers in 3D capture
AU2013273832 2013-12-23

Publications (2)

Publication Number Publication Date
WO2015095912A1 WO2015095912A1 (en) 2015-07-02
WO2015095912A9 true WO2015095912A9 (en) 2015-07-30

Family

ID=53477208

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2014/001148 WO2015095912A1 (en) 2013-12-23 2014-12-19 Overlapped layers in 3d capture

Country Status (2)

Country Link
AU (1) AU2013273832B2 (en)
WO (1) WO2015095912A1 (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1428169B1 (en) * 2002-02-22 2017-01-18 Olympus America Inc. Focusable virtual microscopy apparatus and method
US20090091566A1 (en) * 2007-10-05 2009-04-09 Turney Stephen G System and methods for thick specimen imaging using a microscope based tissue sectioning device
JP4558047B2 (en) * 2008-01-23 2010-10-06 オリンパス株式会社 Microscope system, image generation method, and program
EP2715321A4 (en) * 2011-05-25 2014-10-29 Huron Technologies Internat Inc 3d pathology slide scanner

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4375926A1 (en) * 2022-11-28 2024-05-29 Lunaphore Technologies SA Digital image processing system
WO2024115054A1 (en) * 2022-11-28 2024-06-06 Lunaphore Technologies Sa Digital image processing system

Also Published As

Publication number Publication date
AU2013273832B2 (en) 2016-02-04
AU2013273832A1 (en) 2015-07-09
WO2015095912A1 (en) 2015-07-02

Similar Documents

Publication Publication Date Title
US7693348B2 (en) Method of registering and aligning multiple images
US9607384B2 (en) Optimal patch ranking for coordinate transform estimation of microscope images from sparse patch shift estimates
JP6960980B2 (en) Image-based tray alignment and tube slot positioning in visual systems
US9124873B2 (en) System and method for finding correspondence between cameras in a three-dimensional vision system
JP5580164B2 (en) Optical information processing apparatus, optical information processing method, optical information processing system, and optical information processing program
US8600192B2 (en) System and method for finding correspondence between cameras in a three-dimensional vision system
CN108074267B (en) Intersection point detection device and method, camera correction system and method, and recording medium
JP6842039B2 (en) Camera position and orientation estimator, method and program
US20160282598A1 (en) 3D Microscope Calibration
US9253449B2 (en) Mosaic picture generation
KR102608956B1 (en) A method for rectifying a sequence of stereo images and a system thereof
Loing et al. Virtual training for a real application: Accurate object-robot relative localization without calibration
EP3903229A1 (en) System and method for the recognition of geometric shapes
AU2013273832B2 (en) Overlapped layers in 3D capture
JP6906177B2 (en) Intersection detection device, camera calibration system, intersection detection method, camera calibration method, program and recording medium
CN108564626A (en) Method and apparatus for determining the relative attitude angle being installed between the camera of acquisition entity
Zhu et al. Efficient stitching method of tiled scanned microelectronic images
JP2018032144A (en) Image processor, image processing method and program
JP2006003276A (en) Three dimensional geometry measurement system
AU2013273789A1 (en) Thickness estimation for Microscopy
JP4196784B2 (en) Camera position measuring apparatus and method, and camera position control method
US20220366531A1 (en) Method and apparatus with image display
Tramberger Robot-based 3D reconstruction using Structure from Motion-Extending the Inline Computational Imaging System to a Robotic Arm
WO2024115054A1 (en) Digital image processing system
AU2018208713A1 (en) System and method for calibrating a projection system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14874245

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14874245

Country of ref document: EP

Kind code of ref document: A1