GB2412803A - Image segmentation using local and remote pixel value comparison - Google Patents


Info

Publication number
GB2412803A
Authority
GB
United Kingdom
Prior art keywords
image
pixel
intensity
pixels
intensity characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0407121A
Other versions
GB0407121D0 (en)
Inventor
Yuriy Alexandrov
J Cybuch
Bohdan Soltys
Louis Dagenais
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GE Healthcare Niagara Inc
Original Assignee
Amersham Biosciences Niagara Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Amersham Biosciences Niagara Inc filed Critical Amersham Biosciences Niagara Inc
Priority to GB0407121A priority Critical patent/GB2412803A/en
Publication of GB0407121D0 publication Critical patent/GB0407121D0/en
Publication of GB2412803A publication Critical patent/GB2412803A/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/693 Acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10056 Microscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10064 Fluorescence image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30024 Cell structures in vitro; Tissue sections in vitro

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)

Abstract

A method of scanning biological specimens, the method comprising processing an input image (U) to produce an output image (N(U)), the image comprising regions that represent biological structures. The method comprises segmentation of the image to identify such regions therein. The input image comprises at least one first pixel (a), neighbouring pixels (a'), and distant pixels (a''), the neighbouring pixels surrounding the first pixel, and each pixel having an intensity characteristic. The processing comprises categorizing the first pixel according to the intensity characteristic, wherein the first pixel is identified to be within one of the regions if the first pixel has a first intensity characteristic, and the neighbouring pixels also have the first intensity characteristic, and the distant pixels have a second intensity characteristic, which is different to said first intensity characteristic. The predetermined characteristic of the regions may be the size of the region, and the processing of the image may be performed on an area-by-area basis, by applying a morphological transform to each area.

Description

GB 2412803 Method of Scanning Biological Specimens
Field of the Invention
The invention relates to a method of scanning biological specimens, and identifying regions in an image that represent biological structures by image segmentation. The invention further relates to computer software and apparatus adapted to carry out such a method.
Background of the Invention
Fluorescent microscopy is a versatile and widespread technique in molecular and cellular biology. Fluorescence microscopy is based on the property of some atoms and molecules to absorb light of a certain wavelength and to subsequently re-emit the energy at a longer wavelength. Where a feature of interest is not naturally fluorescent, that feature may be labeled with a fluorophore, which will absorb and re-emit light at a known wavelength. The fluorophore may be the gene product of a tissue-, cell- or organelle-specific transgene, or it may be added as an exogenous compound to a cell suspension.
In contrast to other modes of optical microscopy that are based on macroscopic specimen features, such as phase gradients, light absorption, and birefringence, fluorescence microscopy is capable of imaging the distribution of a single molecule based solely on the properties of fluorescence emission.
Using fluorescence microscopy, the precise location of intracellular components labeled with specific fluorophores can be monitored, as well as their associated diffusion coefficients, transport characteristics, and interactions with other biomolecules. In addition, any response in fluorescence to localized environmental variables enables the investigation of pH, viscosity, refractive index, ionic concentrations, membrane potential, and solvent polarity in living cells and tissues. Fluorescence microscopy is therefore a powerful tool in cellular biology.
Image segmentation is a form of image processing that distinguishes objects from their background, which may be used in conjunction with fluorescence microscopy, in order to identify structures of interest. Image segmentation may be performed on the basis of characteristic object size, where the size range is known, or modelled. For example, if the size of the nuclei in a cell sample is within a known range, segmentation may be performed to identify the nuclei. Other cellular objects of characteristic size include organelles such as mitochondria, cytoplasmic granules, and vesicles within endo- and exocytic pathways. Segmentation on the basis of size may also be performed in order to detect whole cells, or structures within cells.
Segmentation may be performed using a number of mathematical methods, which are usually based upon the intensity values of areas of the image. The watershed method applies an algorithm that locates pixels at the midpoints between image features by treating the image as a topographic surface, and the pixel intensity values as topographic elevations. The surface is 'flooded' from its minima (or pre-flooded and drained from its maxima) and 'watersheds' are implemented to prevent the merging of the waters coming from different sources, partitioning the image into two different topographical types: the catchment basins and the watershed lines. The catchment basins correspond to the areas of the image that have a homogeneous intensity, while the watershed lines delimit the borders between areas of differing intensity.
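By way of illustration only (the patent does not prescribe an implementation), the following Python sketch shows the flooding approach just described, using scikit-image; the smoothing sigma, the minimum peak distance and the function name watershed_segment are assumptions made for the example.

```python
# A minimal watershed sketch using scikit-image (illustrative only).
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_segment(image):
    """Partition `image` into catchment basins separated by watershed lines."""
    # Treat intensity as topographic elevation; flood from the maxima of the
    # smoothed image by inverting it so that maxima become minima.
    smoothed = ndi.gaussian_filter(image.astype(float), sigma=2.0)
    peaks = peak_local_max(smoothed, min_distance=5)        # seed coordinates
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)  # one label per source
    # Watershed lines form where floods from different markers would merge.
    return watershed(-smoothed, markers)
```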
However, the watershed method is relatively computationally expensive and difficult to implement; a segmentation method that runs with sufficient speed and accuracy on a desktop PC is preferred for laboratory use. Furthermore, the watershed method is most suitable when the image contains objects of interest that are touching, and is not always suitable for segmentation on the basis of object size.
Top-hat transforms are also used in segmentation. Top-hat transforms are used in order to segment objects of a pre-defined size, which may be touching, and can be implemented more efficiently than watershed algorithms.
The top-hat transform may be defined as the difference between the original, unaltered image and a generalised version of the image, in which the details of interest have been smoothed or suppressed by special processing: TopHat(U) = U - Generalised(U). The image may be generalised, for example, by applying an averaging procedure. The conventional method of averaging uses convolution with normalised, centre-symmetric kernels, such as Gaussian or box-type averaging.
Alternatively, the generalization of the image may be attained by applying morphological smoothing procedures. As an example of such morphological smoothing, one could draw on the elementary grayscale morphological smoothing operations - opening and closing. More complicated morphological smoothing procedures apply these two elementary grayscale operations sequentially with different kernel sizes (known as ASF, Alternating Sequential Filters).
The following equation represents a simple top-hat transform (TH): TH(U; ξ, k) = U - <U>_kξ, where U is the image, ξ is the size of the transform, k is a tuning coefficient, and <U>_kξ is the image U averaged in the kξ-sized vicinity.
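For illustration, a minimal Python sketch of this simple top-hat, assuming a box average for the local mean (the patent equally allows Gaussian averaging); the function name, the use of scipy.ndimage and the rounding of the vicinity size are choices made for the example.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def top_hat(U, xi, k):
    """Simple top-hat TH(U; xi, k) = U - <U>_(k*xi): subtract from each
    pixel the image averaged over a (k*xi)-sized vicinity (box average)."""
    U = U.astype(float)
    vicinity = max(1, int(round(k * xi)))          # size of the kξ vicinity
    return U - uniform_filter(U, size=vicinity)    # U - <U>_(k*xi)
```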
However, the top-hat transform described above is not ideal: it has been shown not to yield the best object-detection results when applied to under- or over-populated images. Furthermore, following the application of a segmentation method to produce a segmented image, the segmented image is often converted into a binary output image by thresholding.
Thresholding involves applying a threshold value to each pixel - if the intensity of the pixel is below the threshold, the pixel is converted into a black pixel, while if the intensity is above the threshold, it is converted into a white pixel.
Segmented objects therefore need to be clearly delimited from the background, in order for segmented detail to be preserved after thresholding. The conventional top-hat method as described above does not always yield an output image which is sufficiently distinctly segmented, and therefore the output image may not be suitable for thresholding.
In addition, it is desirable to use image processing methods that are insensitive to artefacts that affect the intensity of the pixels in an image. All forms of microscopy have characteristic artefacts. Fluorescent microscopy images may suffer from multiplicative shading, wherein the excitation illumination of the image is not uniform due to mechanical or hardware inaccuracies. The illumination therefore varies across the image, affecting the intensity.
It is also desirable to be able to detect objects of several different types in an image. For example, it may be desirable to be able to identify both nuclei and organelles in a segmented output image. It may be necessary to track organelles of differing sizes, such as, for example, vesicles budding off from a parent organelle, moving through the endocytic pathway, and then fusing with other membrane-bound organelles. Also, it may be desirable to be able to detect both cells, and components within cells, simultaneously.
Summary of the Invention
In accordance with one aspect of the present invention, there is provided a method of scanning biological specimens, the method comprising processing an input image (U) to produce an output image (N(U)), the image comprising regions that represent biological structures, the method comprising segmentation of the image to identify such regions therein, the input image comprising at least one first pixel (a), neighbouring pixels (a'), and distant pixels (a''), the neighbouring pixels surrounding the first pixel, each pixel having an intensity characteristic, wherein the processing comprises categorizing pixels of the image using a segmentation algorithm having selected categorizing parameters, wherein the first pixel is identified to be within one of said regions if: the first pixel has a first intensity characteristic, and the neighbouring pixels also have said first intensity characteristic, and the distant pixels have a second intensity characteristic, which is different to said first intensity characteristic.
A method according to an embodiment of the present invention provides an image segmentation method which implements a heuristic. A pixel is identified to be within one of the regions of interest if the first pixel has a first intensity characteristic, and the neighbouring pixels also have said first intensity characteristic, and the distant pixels have a second intensity characteristic, which is different to said first intensity characteristic.
A method according to an embodiment of the present invention provides a computationally efficient and versatile segmentation method. Biological objects of interest are distinguished from the background by analyzing the intensity characteristics of a first pixel, its near neighbours, and pixels which are distant from it. Detection of biological structures of interest, represented by regions in the image, may be provided based on a variety of characteristics of the structures, thereby allowing segmentation of regions on the basis of a variety of different cellular criteria such as size, shape, or degree of fluorescence. The distance between the first pixel and its neighbours, and between the first pixel and the distant pixels, may be varied, in order to provide segmentation of regions of varying size. Furthermore, improvement of the signal to noise ratio is also provided, because isolated pixels that vary in intensity characteristic from their surroundings are not identified as part of a region. Therefore, regions of interest are identified, while background noise is suppressed.
A method according to an embodiment of the invention provides a method of processing images that is insensitive to intensity scaling. Intensity scaling quantifies the range of intensity values that are found in an image.
Images may vary in the range of intensities that may be present, due to the concentration of objects of interest in the image, the background, the fluorophores present, the illumination, and other factors. A method according to an embodiment of the invention is insensitive to the intensity scaling because a first pixel is categorized on the basis of an intensity characteristic relative to its neighbours and distant pixels. The method can be adapted to any range of intensity values. Therefore, similar regions of interest (e.g. nuclei) may be segmented from images which differ in intensity scaling by a method according to an embodiment of the present invention.
Furthermore, segmentation in accordance with a method according to an embodiment of the invention is less affected by spatial variation in intensity, such as multiplicative shading. A method according to the present invention assesses pixel intensity characteristics on a relative, not absolute, basis, within a defined locality. The method therefore adapts in terms of pixel-to-pixel characteristic comparison as it is applied across an image, irrelevant of any global variation in intensity caused by multiplicative shading. Therefore, similar regions of interest may be segmented from an image irrespective of variation in intensity across the image as a whole.
A method according to an embodiment of the invention is capable of mapping intensity values into a dimensionless domain, by comparing the intensity characteristics of a first pixel with neighbouring pixels and distant pixels. Therefore, similar regions of interest (e.g. nuclei, vesicles) in areas of differing background intensity, or even different images, may be described by similar dimensionless intensity characteristics, even if the actual intensity value of regions relative to each other varies due to differences in illumination, artefacts of fluorophore distribution, etc.
In a preferred embodiment of the present invention, the method comprises selecting a set of neighbouring pixels to define a size characteristic of the regions being identified. The set of the neighbouring pixels analysed can define the size of the regions which are segmented. Many cell types, and subcellular structures, can be characterized by their size. Therefore, cells or subcellular structures of a known size, or which characteristically fall into a known size range, can be segmented from an image which may include a variety of other objects of differing size. The biological structures represented by a region may comprise a nucleus, or another cytoplasmic structure.
Processing of the image in a method according to an embodiment of the present invention may be performed on an area-by-area basis, by applying a transform to each area. An area may be a set of neighbouring pixels.
Accordingly, the input image (U) may be processed to produce the output image (N(U)) in accordance with: NTH(U; ξ; k, K) = U·<U>_kξ / [<U>_Kξ]² - 1, where K > k > 1, NTH is the transform, ξ is a parameter related to the size (within a known range) of a region, and K and k are tuning coefficients. It can be seen that intensity values are mapped into a dimensionless domain by dividing U·<U>_kξ by [<U>_Kξ]², providing the advantage of a dimensionless intensity value, as described above.
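By way of illustration only, a minimal Python sketch of this transform, assuming (as in the formula as reconstructed above) that <U>_s is a box average over an s-sized vicinity; the function name, the rounding of the vicinity sizes and the eps guard against division by zero are additions made for the example.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nth_transform(U, xi, k=1.5, K=3.0, eps=1e-12):
    """NTH(U; xi; k, K) = U*<U>_(k*xi) / [<U>_(K*xi)]^2 - 1, with K > k > 1.
    Dividing U*<U>_(k*xi) by the squared wide-vicinity mean cancels the
    intensity units, so the output is dimensionless."""
    U = U.astype(float)
    near = uniform_filter(U, size=max(1, int(round(k * xi))))  # neighbours
    far = uniform_filter(U, size=max(1, int(round(K * xi))))   # distant pixels
    return U * near / np.maximum(far * far, eps) - 1.0
```

A pixel for which the output is positive is, together with its near vicinity, brighter than its wide surroundings, which is the membership condition described above.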
In an embodiment of the present invention, the first intensity characteristic and the second intensity characteristic are represented by different intensity values. A pixel may therefore be identified as being within a region representing a biological structure based on the intensity values of the pixel, its neighbours, and pixels distant from it. In fluorescence microscopy, a region representing a biological structure will have a different intensity value to the surrounding background area, and therefore the region can be detected by the comparison of pixel intensity values according to an embodiment of the invention.
The value representing the first intensity characteristic may be higher than the value representing the second intensity characteristic. Alternatively, the value representing the second intensity characteristic may be higher than the value representing the first intensity characteristic. Therefore, both positive and negative images may be processed.
After processing of an input image (U) to produce an output image (N(U)) according to an embodiment of the invention, the image may be further processed by altering the intensity characteristics of one or more pixels according to the categorization of the first pixel. For example, if the first pixel is identified to be within one of the regions representing a biological structure of interest, that pixel and its neighbours may be increased in intensity, while pixels which are identified to be outside the regions may be suppressed by being decreased in intensity, in order to produce an output image where the regions representing biological structures are immediately apparent to a human user.
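As an illustrative sketch of such pre-output intensity alteration (the gain and suppression factors below are arbitrary values chosen for the example, not taken from the patent):

```python
import numpy as np

def emphasise_regions(U, region_mask, gain=1.5, suppress=0.25):
    """Brighten pixels identified as belonging to regions of interest and
    dim everything else, so segmented structures stand out to the eye."""
    out = U.astype(float).copy()
    out[region_mask] *= gain         # boost identified regions
    out[~region_mask] *= suppress    # suppress background
    return np.clip(out, 0, U.max()).astype(U.dtype)  # stay in original range
```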
In a further embodiment of the invention, the input image (U) may be processed to produce an output image (N(U)) at least twice, wherein the value of at least one categorizing parameter of the transform is varied before each step of processing, such that a plurality of output images (N(U)X) is generated. If the categorizing parameter value that is varied relates to the size of regions that are segmented, then segmentation of regions of differing sizes from the same input image is provided. Segmentation of objects of differing sizes from the same input image is known as multi-scale segmentation. A variety of regions may therefore be identified, so that (for example) two or more different organelles, differing in size, may be segmented from the same input image (U).
Alternatively, similar regions which vary in a characteristic may be identified.
For example, an organelle that varies in size during the cell cycle may be identified by varying a categorizing parameter related to organelle size, within the known size range of the organelle, before each step of processing the image.
Furthermore, a synthesized image may be produced from the plurality of output images N(U)X according to a pixel-by-pixel synthesizing process. The output images are combined to produce a single image, showing both segmented regions simultaneously. For example, in an embodiment where the categorizing parameters relate to the size of an identified region, both nuclei and vesicles, which differ in size, can be segmented and shown simultaneously in one image.
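A sketch of one such pixel-by-pixel synthesis, taking the strongest response at each pixel across the per-scale outputs; the max rule is an assumption made for the example, as the patent does not fix a particular combining rule.

```python
import numpy as np

def synthesise(outputs):
    """Combine a list of per-scale output images N(U)_x into one image by
    taking, pixel by pixel, the strongest response across scales."""
    return np.maximum.reduce([o.astype(float) for o in outputs])
```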
The selection of the value of the categorizing parameter for each step of processing the input image (U) to produce an output image (N(U)) may be performed in accordance with a scaling equation, in order to produce a plurality of output images N(U)X where the size of the regions varies predictably.
Therefore, a known range of categorizing parameter values may be applied to the image, in order to determine which particular parameter value is operable to segment biological structures of interest. This will also allow the detection and segmentation of regions representing structures that may vary continuously in size, e.g. budding vesicles.
Further features and advantages of the invention will become apparent from the following description of preferred embodiments of the invention, given by way of example only, which is made with reference to the accompanying drawings.
Brief Description of the Drawings
Figure 1 is a flow diagram showing a method of scanning a biological specimen according to an embodiment of the invention.
Figure 2 shows a method of image processing according to an embodiment of the invention in more detail.
Figure 3 is a flow diagram showing a method of scanning a biological specimen according to a second embodiment of the invention.
Figure 4 is a schematic view of an embodiment of a fluorescence microscope used to image samples in accordance with the present invention.
Figure 5 is a side view of the two-beam autofocus system.
Figures 6A, 6B and 6C illustrate a rectangular CCD camera and readout register.
Figure 7 shows a schematic illustration of data processing components of a system arranged in accordance with the invention.
Detailed Description of the Invention
The present invention provides a potentially automated, rapid and efficient method for detecting a wide variety of biological structures which are marked with one or more fluorescent markers. Several markers may be used in conjunction. A method according to an embodiment of the present invention can be used in cellular assays conducted on chemical compounds or any molecule of biological interest, including but not limited to drug candidates, such as those found in combinatorial libraries, allowing high-throughput screening of chemical compounds of biological interest. The present invention is useful for identifying a wide variety of biological structures. A method according to the present invention is also particularly useful for the analysis of nerve cells.
A method according to an embodiment of the present invention may make use of any known fluorophore or fluorescent label including but not limited to fluorescein, rhodamine, Texas Red, the Amersham Corp. stains Cy3, Cy5, Cy5.5 and Cy7, DRAQ5™, Hoechst nuclear stains and Coumarin stains. (See Haugland, R.P., Handbook of Fluorescent Probes and Research Chemicals, 6th Ed., 1996, Molecular Probes, Inc., Eugene, Oregon.) Alternatively, in assays in which the same cell population is imaged and analysed a number of times during a time course study, a non-toxic nuclear marker may be used. Such a non-toxic marker may be in the form of an NLS-fluorescent protein fusion. For example, the Clontech™ pHcRed1-Nuc vector, when transfected into a cell line, produces a red fluorescence signal in the nucleus. Alternatively, a green signal may be produced by the transfection of a vector including the GFP (green fluorescent protein) gene.
Figure 1 is a flow diagram showing a method of scanning a biological specimen according to an embodiment of the invention. The input image 1 is a digital image derived from fluorescence microscopy. Image capture will be described in more detail below. The input image 1 is analysed in step 4 to determine the intensity characteristics of the pixels that make up the input image 1, before any processing takes place. Intensity is a measure of the 'brightness' of a pixel; for example, a dark area of an image (for example, an unilluminated background, or a dense, non-fluorescent cytoplasmic region) has a low intensity, while a lighter area (for example, a region marked with a strong fluorophore) has a high intensity.
At step 8, a transform is applied to the image 1. The transform consists of a top-hat transform of the form NTH(U; ξ; k, K) = U·<U>_kξ / [<U>_Kξ]² - 1, where K > k > 1, U is the image, NTH is the transform, ξ is a parameter related to the size (within a known range) of a region, and K and k are tuning coefficients. ξ is the categorizing parameter of the transform related to the 'size' of a region representing a biological structure of interest; the transform is operable to segment regions of a size according to the value of ξ. This transform implements a pixel comparison heuristic such that a pixel is identified as being part of a region representing a structure of interest if it is of a first intensity value, pixels that are close to it are also of the first intensity value, and its far surroundings are of a second intensity value. In a preferred embodiment the first intensity value is higher than the second intensity value, so that pixels representing areas marked with a fluorophore are identified as being in regions representing biological structures of interest.
Application of the transform to the image results in a segmented image 12. The segmented image consists of regions made up of pixels which have been segmented from the background according to the above transform. The segmented image may take the form of a marked-up bitmap of the image, or the pixels which make up the regions representing biological structures may be marked up, or noted by their coordinates in a look-up table. Alternatively, the pixels which make up regions representing biological structures may be stored using standard shape description formats, such as run-length code.
The segmented image is then subject to pre-output processing in step 14.
Pre-output processing may comprise altering the intensity values of pixels that make up the segmented image in order to show the segmented regions of interest to a human user more clearly. Intensity values of pixels which have been identified as being within a region representing a biological structure are increased, while all other pixels are reduced in intensity, in order to highlight the regions which represent the biological structures of interest. Alternatively, if the image is a negative, pixels identified as being part of such a region may be decreased in intensity.
Altering the intensity values of pixels during pre-output processing may further involve averaging pixel intensity values within the regions, or in the background, in order to clearly delineate the regions or suppress noise in the background. Alternatively, noise suppression may take place before application of the transform to the image (between steps 4 and 8). Furthermore, pre-output processing may comprise outlining, which consists of applying a thin, visible border to the image to delineate the segmented regions from the background.
Pre-output processing may further comprise a step of thresholding. As described above, thresholding is a method of converting a colour image to a black and white image by converting each pixel into a black or white pixel based on its intensity value; pixels below a certain intensity threshold are converted into black pixels, while pixels above the threshold are converted into white pixels. A negative image may be produced by reversing the conversion based on the threshold. Once the image has been converted into a black and white image, the black and white image may be scanned to identify the regions representing biological structures. Pixels within a region may be identified by the 4-connected criterion, i.e. that all pixels within the region are in contact either horizontally or vertically. Pre-output processing results in an output image 16.
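For illustration, a Python sketch of thresholding followed by 4-connected region identification, assuming scipy.ndimage is acceptable; the structuring element encodes the horizontal/vertical contact criterion described above.

```python
import numpy as np
from scipy.ndimage import label

def threshold_and_label(segmented, threshold):
    """Binarise the segmented image, then group white pixels into regions
    using 4-connectivity (horizontal/vertical contact only)."""
    binary = segmented > threshold             # white where above threshold
    four_connected = np.array([[0, 1, 0],
                               [1, 1, 1],
                               [0, 1, 0]])     # 4-connected structuring element
    labels, n_regions = label(binary, structure=four_connected)
    return labels, n_regions
```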
The parameters of the transform above may be adjusted for optimal segmentation of regions of interest. It has been shown that the transform above is very effective in segmenting regions of a variety of differing sizes and shapes with the following combination of parameters: k = 1.5, K = 3, where ξ varies according to the size of the biological objects of interest. Furthermore, the transform will process a 1000 x 1000 pixel image and return a segmented image in around 0.2 seconds when run on a desktop PC with a 2 GHz processor. The transform can therefore be implemented, and will process images rapidly, on readily available and inexpensive computing equipment.
A method of image processing according to an embodiment of the invention is shown in more detail in Figure 2. Figure 2 shows a grid of pixels in a portion of a digital image derived from fluorescence microscopy. The image is positive, i.e. biological structures of interest labeled with a fluorophore are higher in intensity than the background. Each pixel has an integer intensity value on a scale of 1 to 10, 1 being the lowest intensity and 10 the highest.
A transform according to the present invention is applied to the portion of the image shown. The transform takes the form of a mask. The size of the transform as applied to the image is 5x5 pixels.
The transform is firstly centred over the pixel a. Pixel a has an intensity value of 9. It is surrounded by immediately neighbouring pixels (the layer comprising pixel a') of a similarly high intensity value. The distant pixels (the layer comprising pixel a'') at the edge of the area covered by the transform are of low intensity values (1 and 2). Pixel a, and, by extension, the neighbouring pixels surrounding it, is therefore identified as being part of a region representing a biological structure. An entry noting the identification is made in a look-up table, locating pixel a according to its co-ordinates in the x and y axes.
In contrast, pixels b and c are not identified as being part of a region representing a biological structure. The intensity of pixel b (2) is similar to that of the pixels immediately surrounding it (for example pixel b'); however, the outer layer of pixels (including pixel b'') also has a similar intensity to pixel b and its immediately surrounding layer of pixels. The intensity of this area is therefore homogeneous. The area centred on pixel b may therefore be categorized as background, as it does not have the distribution of intensity characteristics of a region representing a biological structure.
In contrast, the intensity of pixel c (8) is different to that of its immediately surrounding pixels (including pixel c') and pixels distant from it (including pixel c''). Although pixel c is of relatively high intensity compared to the pixels making up the 5x5 area centred on pixel b, the difference in intensity between pixel c and its nearest neighbours suggests that it is not a region representing a biological structure, but rather a random background effect, such as a stray spike of fluorophore concentration.
It can be seen, therefore, that a transform according to an embodiment of the present invention provides size-tuned detection (according to the transform size parameter ξ) of regions representing biological structures of known size, which distinguishes such regions from background area, and from noise made up of individual 'spikes' of high intensity.
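The behaviour described for pixels a, b and c can be reproduced numerically. The 5x5 patches below are hypothetical values constructed to echo Figure 2 (the figure's actual values are not reproduced here), scored with the centre/near/far comparison of the transform.

```python
import numpy as np

def local_remote_score(patch):
    """Score the centre pixel of a 5x5 patch by comparing it with the mean
    of its near neighbours (inner 3x3) and the mean of the distant pixels
    (outer ring), echoing NTH: centre*near/far^2 - 1."""
    centre = patch[2, 2]
    near = patch[1:4, 1:4].mean()          # centre plus its neighbours
    ring = np.ones((5, 5), dtype=bool)
    ring[1:4, 1:4] = False                 # keep only the outer (distant) ring
    far = patch[ring].mean()
    return centre * near / far**2 - 1.0

# Hypothetical patches echoing pixels a, b and c of Figure 2:
a = np.array([[1, 2, 1, 2, 1],
              [2, 9, 9, 9, 1],
              [1, 9, 9, 9, 2],
              [2, 9, 9, 9, 1],
              [1, 2, 1, 2, 1]], dtype=float)  # bright centre and neighbours, dim ring
b = np.full((5, 5), 2.0)                      # homogeneous background
c = b.copy(); c[2, 2] = 8.0                   # isolated high-intensity spike
for name, p in (("a", a), ("b", b), ("c", c)):
    print(name, round(local_remote_score(p), 1))
# a scores ~38 (region), b scores 0 (background) and c scores only ~4
# (spike), so a threshold on the dimensionless score retains a alone.
```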
A minimum intensity value for a pixel to be identified as being part of a region that represents a biological structure may be set. Alternatively, no minimum intensity value may be set, and each pixel may be compared with its nearest neighbours and distant pixels on purely relative grounds according to an embodiment of the invention, i.e. a low-intensity pixel will still be classified as being part of a region representing a biological structure if it is of similar intensity to its nearest neighbours and differing intensity to pixels distant from it. In the case of a negative image of fluorescently labeled biological material, regions representing biological structures which have been labeled with the fluorophore will have a lower intensity than unlabelled regions.
If it is necessary or desirable to segment biological structures of differing sizes, multi-scale segmentation may be implemented. A second embodiment of the invention which implements multi-scale segmentation is shown in Figure 3.
This second embodiment incorporates the method described with reference to Figure 1, and the same integers are therefore incorporated in Figure 3 by way of reference.
The method is performed as described with reference to Figure 1, up to the production of the segmented image 12. At step 20, the multi-scale segmentation method begins. The size parameter ξ is varied to a second value in step 24, and the transform is re-applied to segmented image 12. If the size parameter ξ is varied, the transform will be operable to analyze areas of a different size from the regions analyzed in the first application of the transform in step 8. Therefore, regions which represent biological structures of a second, differing size will be segmented in the second application of the transform to the image.
The selection of the value of the categorizing parameter for each step of processing the input image (U) to produce an output image (N(U)) may be performed in accordance with a scaling equation, in order to produce a plurality of output images N(U)X where the size of the regions varies predictably. The equation may be arranged to provide a range of values which are uniformly, exponentially or inverse-exponentially distributed. Any number of scales may be employed, limited only by the available processing power. In a preferred embodiment, the range of values of the categorizing parameter may vary exponentially, in order to provide a range of values where most of the values are smaller. Minimum and maximum values of the categorizing parameter, related to the sizes of the biological structures of interest in an image to be processed, are also set. A pixel may be identified as being part of a region of interest if it is identified as such in a segmentation process performed at any scale.
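A sketch of such a scaling equation in Python, using a geometric (exponential) progression between the set minimum and maximum so that most values fall at the small end of the range; the endpoint values and the function name are illustrative only.

```python
import numpy as np

def scale_values(xi_min, xi_max, n):
    """Generate n categorizing-parameter values between xi_min and xi_max,
    exponentially distributed so that most values fall at the small end."""
    return np.geomspace(xi_min, xi_max, n)

print(scale_values(2, 32, 5))  # e.g. [ 2.  4.  8. 16. 32.]
```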
Alternatively, a different segmented image 12 may be derived each time the transform is applied, to create a plurality of segmented images from one input image. The intensity values of pixels in regions identified during multi-scale segmentation may be altered in intensity to differing degrees, to distinguish the regions according to size. Alternatively, the regions may be distinguished by the addition of false colour.
Figure 4 is a schematic view of a first embodiment of a fluorescence microscope used to image samples in accordance with the present invention.
An example of such a microscope is the Nikon™ TE2000 microscope, as incorporated into the Amersham Biosciences IN Cell 1000 Analyzer™ system.
The microscope comprises a source 100 or 110 of electromagnetic radiation, for example in the optical range 350-750 nm, a cylindrical lens 120, a first slit mask 130, a first relay lens 140, a dichroic mirror 150, an objective lens 170, a microtiter plate 180 containing a two-dimensional array of sample wells 182, a tube lens 190, a filter 200, a second slit mask 210 and a detector 220. These elements are arranged along optical axis OA with slit apertures 132, 212 in masks 130, 210 extending perpendicular to the plane of Figure 4. The focal lengths of lenses 140, 170 and 190 and the spacings between these lenses, as well as the spacings between mask 130 and lens 140, between objective lens 170 and microtiter plate 180 and between lens 190 and mask 210, are such as to provide a confocal microscope. However, it will be recognized that a non-confocal microscope can also be used.
In this embodiment, electromagnetic radiation from a lamp 100 or a laser 110 is focused to a line using a cylindrical lens 120. The shape of the line is optimized by a first slit mask 130. The slit mask 130 is depicted in an image plane of the optical system, that is, in a plane conjugate to the object plane. The illumination stripe formed by the aperture 132 in the slit mask 130 is relayed by lens 140, dichroic mirror 150 and objective lens 170 onto a microtiter plate 180 which contains a two-dimensional array of sample wells 182. For convenience of illustration, the optical elements are depicted in cross-section and the well plate in perspective. The projection of the line of illumination onto the well plate is depicted by line 184. As indicated by arrows A and B, well plate 180 may be moved in two dimensions (X, Y) parallel to the dimensions of the array by means not shown.
In an alternative embodiment, the slit mask 130 resides in a Fourier plane of the optical system, that is, in a plane conjugate to the objective back focal plane (BFP) 160. In this case the aperture 132 lies in the plane of the figure, and the lens 140 relays the illumination stripe formed by the aperture 132 onto the back focal plane 160 of the objective 170, which transforms it into a line 184 in the object plane perpendicular to the plane of Figure 4.
In an additional alternative embodiment the slit mask 130 is removed entirely. According to this embodiment, the illumination source is the laser 110, the light from which is focused into the back focal plane 160 of the objective 170. This can be accomplished by the combination of the cylindrical lens 120 and the spherical lens 140 as shown in Figure 4, or the illumination can be focused directly into the plane 160 by the cylindrical lens 120.
An image of the sample area, for example a sample in a sample well 182, is obtained by projecting the line of illumination onto a plane within the sample, imaging the fluorescence emission therefrom onto a detector 220 and moving the plate 180 in a direction perpendicular to the line of illumination, synchronously with the reading of the detector 220. In the embodiment depicted in Figure 4, the fluorescence emission is collected by the objective lens 170, projected through the dichroic beamsplitter 150, and imaged by lens 190 through filters 200 and a second slit mask 210 onto a detector 220, such as is appropriate to a confocal imaging system having an infinity-corrected objective lens 170. The dichroic beamsplitter 150 and filter 200 preferentially block light at the illumination wavelength. The detector 220 illustratively is a camera such as the Roper CoolSnap HQ™, as incorporated into the Amersham Biosciences IN Cell 1000 Analyzer™ system. The detector may be either one-dimensional or two-dimensional. If a one-dimensional detector is used, slit mask 210 is not needed. The illumination, detection and translation procedures are continued until the prescribed area has been imaged. Mechanical motion is simplified if the sample is translated at a continuous rate. Continuous motion is most useful if the camera read-time is small compared to the exposure-time. In a preferred embodiment, the camera is read continuously. The displacement d of the sample during the combined exposure-time and read-time may be greater than or less than the width of the illumination line W, exemplarily 0.5W < d < 5W. All of the wells of a multiwell plate can be imaged in a similar manner.
Alternatively, the microscope can be configured to focus a line of illumination across a number of adjacent wells, limited primarily by the field-of-view of the optical system. Finally, more than one microscope can be used simultaneously.
The size and shape of the illumination stripe 184 is determined by the width and length of the Fourier transform stripe in the objective lens back focal plane 160. For example, the length of the line 184 is determined by the width of the line in 160, and conversely the width in 184 is determined by the length in 160. For diffraction-limited performance, the length of the illumination stripe at 160 is chosen to overfill the objective back aperture. It will be evident to one skilled in the art that the size and shape of the illumination stripe 184 can be controlled by the combination of the focal length of the cylindrical lens 120 and the beam size at 120, that is, by the effective numerical aperture in each dimension, within the restrictions imposed by aberrations in the objective and the objective field of view.
The dimensions of the line of illumination 184 are chosen to optimize the signal-to-noise ratio. Consequently, they are sample dependent. Depending on the assay, the resolution may be varied between diffraction-limited, i.e. less than 0.5 μm, and approximately 5 μm. The beam length is preferably determined by the objective field of view, exemplarily between 0.5 and 1.5 mm.
A Nikon ELWD, 0.6 NA, 10X objective, for example, has a field of view of approximately 0.75 mm. The diffraction-limited resolution for 633 nm radiation with this objective is approximately 0.6 μm, or approximately 1100 resolution elements.
The effective depth resolution is determined principally by the width of aperture 212 in slit mask 210, or the width of the one-dimensional detector, and the image magnification created by the combination of the objective lens 170 and lens 190. The best depth resolution of a confocal microscope approaches 1 μm. In the present application, a depth resolution of 5-10 μm may be sufficient or even advantageous.
For example, when the sample of interest, such as a live cell, contains insufficient fluorophore concentration in a diffraction-limited volume to permit an adequate signal-to-noise image in a sufficiently brief image-acquisition time, it is advantageous to illuminate and collect the emission from a larger than diffraction-limited volume. A similar situation prevails in the case of video-rate kinetics studies of transient events such as ion-channel openings. Practically, this is accomplished by underfilling the back aperture of the objective lens, which is equivalent to increasing the diameter of the illumination aperture. The effective numerical aperture ("NA") of the illumination is less than the NA of the objective. The fluorescence emission is, however, collected with the full NA of the objective lens. The width of aperture 212 must be increased so as to detect emission from the larger illumination volume. At an aperture width a few times larger than the diffraction limit, geometrical optics provides an adequate approximation for the size of the detection-volume element: lateral width x_d = d/M, axial width z_d ≈ 1.4d/(M·tan α), where M is the magnification, d is the width of aperture 212 and α is the half-angle subtended by the objective 170. It is an important part of the present invention that the illumination aperture 132, or its equivalent in the embodiment having no aperture, and the detection aperture 212 be independently controllable.
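A small numeric sketch of this geometrical-optics approximation; the 1.4 prefactor follows the formula as reconstructed above, and the example aperture width, magnification and half-angle are illustrative values only, not taken from the patent.

```python
import math

def detection_volume(d, M, alpha_deg):
    """Approximate detection-volume element: lateral width d/M and axial
    width ~1.4*d/(M*tan(alpha)), where d is the detection-aperture width,
    M the magnification and alpha the objective half-angle."""
    alpha = math.radians(alpha_deg)
    lateral = d / M
    axial = 1.4 * d / (M * math.tan(alpha))
    return lateral, axial

# e.g. a 100-um aperture, 20X magnification, ~37-degree half-angle (0.6 NA):
print(detection_volume(100e-6, 20, 37))  # ~5 um lateral, ~9 um axial
```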
Multi-wavelength fluorescence imaging is preferred for certain types of assays. In an alternative embodiment of the invention, the microscope may employ multiple illumination wavelengths. In this way, image data can be generated for the same area being imaged in each of a plurality of different colour channels simultaneously.
Continuous closed-loop control of the relative position of the sample plane and the object plane is provided in an embodiment of the present invention, depicted in Figure 5. This system utilizes two independent beams of electromagnetic radiation. One, originating from S5, is focused on the continuous surface, exemplarily the bottom of a microtiter plate. The other, originating from S4, is focused on the discontinuous surface, exemplarily the well bottom of a microtiter plate. In one embodiment, the beams originating from S4 and S5 have wavelengths λ4 and λ5, respectively. λ4 is collimated by L4, apertured by iris I4, and focused onto the discontinuous surface by the objective lens OL. λ5 is collimated by L5, apertured by iris I5, and focused onto the continuous surface by the lens CFL in conjunction with the objective lens OL.
The reflected light is focused onto the detectors D4 and D5 by the lenses IL4 and IL5, respectively. The partially transmitting, partially reflecting mirror DM4 is preferentially dichroic, reflecting λ4 and λ5 and transmitting λn, n = 1, 2, 3.
The mirrors M4, M5 and M6 are partially transmitting, partially reflecting. In the case that λ4 and λ5 are distinct, M6 is preferentially dichroic.
According to the embodiment wherein the sample resides in a microtiter plate, λ4 is focused onto the well bottom. The object plane can be offset from the well bottom by a variable distance. This is accomplished by adjusting L4, or alternatively by an offset adjustment in the servo control loop. For convenience of description, it will be assumed that λ4 focuses in the object plane.
The operation of the autofocus system is as follows. If the bottom of the sample well is not in the focal plane of objective lens OL, detector D4 generates an error signal that is supplied through switch SW to the Z control. The Z control controls a motor (not shown) for moving the microtiter plate toward or away from the objective lens. Alternatively, the Z control could move the objective lens. If the bottom PB of the microtiter plate is not at the focal plane of the combination of the lens CFL and the objective lens OL, detector D5 generates an error signal that is applied through switch SW to the Z control. An XY control controls a motor (not shown) for moving the microtiter plate in the object plane OP of lens OL.
As indicated, the entire scan is under computer control. An exemplary scan follows: At the completion of an image in a particular well, the computer operates SW to switch control of the servo mechanism from the error signal generated by D4 to that generated by D5; the computer then directs the XY control to move the plate to the next well, after which the servo is switched back to D4.
The "coarse" focusing mechanism utilizing the signal from the bottom of the plate is used to maintain the position of the sample plane to within the well to-wel1 variations in the thickness of the plate bottom, so that the range over which the "fine" mechanism is required to search is minimized. If, for example, the diameter of the iris Is is 2 mm and IL5 is l 00 mm, then the image size on the detector will be 100 m. Similarly, if the diameter of the iris 14 is 0.5 mm and IL4 is l 00 mm, then the image size on the detector will be 400 m. The latter is chosen to be less sensitive so as to function as a "coarse" focus.
As with the single-beam embodiment described above, the wavelengths λ4 and λ5 are necessarily distinct from the sample fluorescence, and preferentially wavelengths that cannot excite appreciable fluorescence in the sample. Thus, λ4 and λ5 are preferentially in the near infrared, such as 800-1000 nm. In addition, the two wavelengths are preferably distinct, for example λ4 = 830 nm, λ5 = 980 nm.
In an alternative embodiment of two-beam autofocus, λ4 = λ5 and the two beams may originate from the same source. Preferentially, the two beams are polarized perpendicular to one another and M6 is a polarizing beamsplitter.
Pseudo-closed loop control is provided in the preferred embodiment of single-beam autofocus which operates as follows. At the end of a scan the computer operates SW to switch control to a sample-and-hold device which maintains the Z control output at a constant level while the plate is moved on to the next well after which SW is switched back to D4.
A detection device is used having manifold, independent detection elements in a plane conjugate to the object plane. As discussed above, line illumination is advantageous principally in applications requiring rapid imaging.
The potential speed increase inherent in the parallelism of line illumination as compared to point illumination is, however, only realized if the imaging system is capable of detecting the light emitted from each point of the sample along the illumination line, simultaneously.
It is possible to place a charge-coupled device (CCD), or other camera, at the output of the imaging systems described above (White et al., US 5,452,125, and Brakenhoff and Visscher, J. Microscopy 171, 17-26 (1993)). The resulting apparatus has three significant disadvantages. One is the requirement of rescanning the image onto the two-dimensional detector, which adds unnecessary complexity to the apparatus. Another is the requirement of a full two-dimensional detector having sufficient quality over the 1000 pixel x 1000 pixel array that typically constitutes the camera. The third disadvantage is the additional time required to read the full image from the two-dimensional device.
In order to avoid these disadvantages and optimize not only imaging speed, within the constraints of high-sensitivity and low-noise detection, but also throughput, a continuous-read line-camera is used and in a preferred embodiment a rectangular CCD is used as a line-camera. Both embodiments have no dead-time between lines within an image or between images. An additional advantage is that a larger effective field-of-view is achievable in the stage-scanning embodiment, discussed below.
The properties required of the detection device can be further clarified by considering the following preferred embodiment. The resolution limit of the objective lens is < 1 μm, typically ~0.5 μm, and the detector comprises an array of 1000 independent elements. Resolution, field-of-view (FOV) and image acquisition-rate are not independent variables, necessitating compromise among these performance parameters. In general, the magnification of the optical system is set so as to image as large a FOV as possible without sacrificing resolution. For example, a 1 mm field-of-view could be imaged onto a 1000-element array at 1-μm pixelation. If the detection elements are 20-μm square, then the system magnification would be set to 20X. Note that this will not result in 1-μm resolution: pixelation is not equivalent to resolution. If, for example, the inherent resolution limit of the objective lens is 0.5 μm and each 0.5 μm x 0.5 μm region in the object plane is mapped onto a pixel, the true resolution of the resulting digital image is not 0.5 μm. To achieve true 0.5-μm resolution, the pixelation would need to correspond to a region 0.2 μm x 0.2 μm in the object plane. In one preferred embodiment, the magnification of the imaging system is set to achieve the true resolution of the optics.
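The trade-off can be checked with simple arithmetic, using the numbers quoted above (a sketch, not part of the patent):

```python
# Back-of-envelope check of the FOV / pixelation / magnification trade-off.
fov_mm = 1.0         # desired field of view
n_elements = 1000    # detector elements along the line
pixel_um = 20.0      # physical size of one detector element

pixelation_um = fov_mm * 1000 / n_elements   # 1 um per pixel in the object plane
magnification = pixel_um / pixelation_um     # 20X maps 1 um onto a 20-um element
print(pixelation_um, magnification)          # 1.0, 20.0

# True 0.5-um optical resolution needs ~0.2-um pixelation, i.e. a higher
# magnification and a correspondingly smaller field of view.
```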
Presently, the highest detection efficiency, lowest noise detection devices having sufficient read-out speed for the present applications are CCD cameras, such as the Roper CoolSnap HQ™ described above. In Figures 6A, 6B and 6C, a rectangular CCD camera is depicted having an m x n array of detector elements, where m is substantially less than n. The image of the fluorescence emission covers one row that is preferably proximate to the read-out register. This minimizes transfer time and avoids accumulating spurious counts into the signal from the rows between the illuminated row and the read-out register.
In principle, one could set the magnification of the optical system so that the height of the image of the slit on the CCD camera is one pixel. In practice, it is difficult to maintain perfect alignment between the illumination line and the camera row-axis, and even more difficult to maintain alignment among three or more cameras and the illumination in a multi-wavelength embodiment. By binning together a few of the detector elements, exemplarily two to five, in each column of the camera, the alignment condition can be relaxed while suffering a minimal penalty in read-noise or read-time.
An additional advantage of the embodiment having one or more rectangular CCD cameras as detection devices in conjunction with a variable-width detection spatial filter, each disposed in a plane conjugate to the object plane, is elucidated by the following. As discussed above, in one embodiment of the present invention the detection spatial filter is omitted and a line-camera is used as a combined detection spatial filter and detection device. But as was also discussed above, a variable-width detection spatial filter permits the optimization of the detection volume so as to optimize the sample-dependent signal-to-noise ratio. The following preferred embodiment retains the advantage of a line-camera, namely speed, and the flexibility of a variable detection volume. The magnification is set so as to image a diffraction-limited line of height h onto one row of the camera. The width of the detection spatial filter d is preferably variable, h < d < 10h. The detectors in the illuminated columns of the camera are binned, prior to reading, which is an operation that requires a negligible time compared to the exposure- and read-times.
In an embodiment where multiple CCD cameras are employed, the cameras may be Princeton Instruments NTE/CCD-1340/100-EMD. The read rate in a preferred embodiment is 1 MHz at a few electrons of read-noise. The pixel format is 1340x100, and the camera can be wired to shift the majority of the rows (80%) away from the region of interest, making the camera effectively 1340x20.
In addition to the above-mentioned advantage of a continuous-read camera, namely the absence of dead-time between successive acquisitions, an additional advantage is that it permits the acquisition of rectangular images having a length limited only by the extent of the sample. The length is determined by the lesser of the camera width and the extent of the line illumination. In a preferred embodiment the sample is disposed on the bottom of a well in a 96-well microtiter plate, the diameter of which is 7 mm. A strip 1 μm x 1 mm is illuminated and the radiation emitted from the illuminated area is imaged onto the detection device. The optical train is designed such that the field-of-view is 1 mm². According to the present invention, an image of the well-bottom can be generated at 1-μm pixelation over a 1 x 7-mm field.
In an embodiment of the present invention, assays are performed on live cells. Live-cell assays frequently require a reasonable approximation to physiological conditions to run properly. Among the important parameters is temperature. It is desirable to incorporate a means to raise and lower the temperature, in particular, to maintain the temperature of the sample at 37°C. In another embodiment, control over relative humidity, and/or CO2 and/or O2 is necessary to maintain the viability of live cells. In addition, controlling humidity to minimize evaporation is important for small sample volumes.
Three embodiments providing a microtiter plate at an elevated temperature, preferably 37°C, follow.
The imaging system preferably resides within a light-proof enclosure. In a first embodiment, the sample plate is maintained at the desired temperature by maintaining the entire interior of the enclosure at that temperature. At 37°C, however, unless elevated humidity is purposefully maintained, evaporative cooling will reduce the sample volume, limiting the assay duration.
A second embodiment provides a heated cover for the microwell plate which allows the plate to move under the stationary cover. The cover has a single opening above the well aligned with the optical axis of the microscope.
This opening permits dispensing into the active well while maintaining heating and limited circulation to the remainder of the plate. A space between the heated cover plate and microwell plate of approximately 0.5 mm allows free movement of the microwell plate and minimizes evaporation. As the contents of the interrogated well are exposed to ambient conditions through the dispenser opening for at most a few seconds, said contents suffer no significant temperature change during the measurement.
In a third embodiment, a thin, heated sapphire window is used as a plate-bottom enclosure. A pattern of resistive heaters along the well separators maintains the window temperature at the desired level.
In additional embodiments, any of the above-disclosed methods may be variously combined.
In an additional preferred embodiment of the invention, employed in automated screening assays, the imaging system is integrated with plate-handling robots, such as the Zymark Twister™.
Figure 7 shows a schematic illustration of the data processing components of a system arranged in accordance with an embodiment of the invention. The system includes a cell analysis system 400, based on the Amersham Biosciences IN Cell Analyzer™ system. The cell analysis system 400 includes the detectors D1, D2, D3, D4 and D5, each of which may be a microscope as shown in Figure 4. The cell analysis system 400 further comprises a switch SW, a control unit 401, an image data store 402 and an Input/Output (I/O) device 404.
An associated computer terminal 405 includes a central processing unit (CPU) 408, memory 410, a data storage device such as a hard disc drive 412, and I/O devices 406 which facilitate interconnection of the computer with the MDPU and with a display element 432 of a screen 428, the latter via a screen I/O device 430. Operating system programs 414, such as Microsoft Windows XP™, are stored on the hard disc drive 412 and control, in a known manner, low-level operation of the computer terminal 405. Program files and data 420 are also stored on the hard disc drive 412, and control, in a known manner, outputs to an operator via associated devices and output data stored on the hard disc drive. The associated devices include the display 432 as an element of the screen 428, a pointing device (not shown) and a keyboard (not shown), which receive input from, and output information to, the operator via further I/O devices (not shown). Included in the program files 420 stored on the hard disc drive 412 are an image processing and analysis application 416, an assay control application 418, and a database 422 for storing image data received from the cell analysis system 400 and output files produced during data processing.
The image processing and analysis application 416 includes image processing and analysis software packages. A method according to an embodiment of the invention may be implemented as software within the image processing and analysis application 416.
The performance of a scan using the cell analysis system 400 is controlled using the control application 418, and the image data are acquired. In an embodiment, the control application acts in concert with the autofocus system described with reference to Figure 5. After acquisition of image data for at least one well in a microtiter plate by at least one detector D1, D2, D3, the image data are transmitted to the computer 405 and stored in the database 422 on the computer terminal hard disc drive 412, at which point the image data can be processed using the image processing and analysis application 416.
The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged.
Note that, although fluorescence is referred to above, an embodiment of the invention may be used to analyse biological materials exhibiting other types of luminescence, such as chemiluminescence and phosphorescence. If a biological structure of interest is marked with a reporter molecule that produces a fluorescent signal, then either a conventional fluorescence microscope or a confocal-based fluorescence microscope may be used. If the reporter molecule produces luminescent light, then a suitable device such as a luminometer may be used. It will be understood that a method according to the present invention may also be applied to images derived from transmitted-light microscopy.
Transmitted-light images may be pre-processed using techniques such as a texture detection transform, such that a method according to the present invention can then be used to process the images.
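The present invention does not prescribe a particular texture detection transform; as one plausible example, a local-variance transform maps textured (e.g. cellular) regions to high values and flat background to low values, producing an intensity image of the kind the segmentation method expects. The window size in the Python sketch below is an illustrative choice, not a value taken from the patent.

import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(image, size=7):
    """Texture transform: per-pixel variance of intensity in a
    size x size window (variance = E[X^2] - E[X]^2)."""
    img = np.asarray(image, dtype=float)
    mean = uniform_filter(img, size)
    mean_of_sq = uniform_filter(img * img, size)
    return mean_of_sq - mean * mean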
Pixel identification may be based on comparison of the intensity characteristics of a first pixel with some or all of its neighbouring pixels, and with some or all of the pixels distant from it.
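By way of illustration only, the following Python sketch implements one such identification rule. The neighbourhood radius, the inner and outer radii of the ring of distant pixels, and the similarity tolerance are illustrative assumptions, not values taken from the present invention.

import numpy as np

def identify_region_pixels(image, r_near=2, r_far_in=6, r_far_out=10, tol=0.1):
    """Flag a first pixel as within a region when its neighbouring pixels
    have a similar mean intensity and its distant pixels a different one."""
    img = np.asarray(image, dtype=float)
    yy, xx = np.mgrid[-r_far_out:r_far_out + 1, -r_far_out:r_far_out + 1]
    dist = np.hypot(yy, xx)
    near = (dist > 0) & (dist <= r_near)            # neighbouring pixels (a')
    far = (dist >= r_far_in) & (dist <= r_far_out)  # distant pixels (a'')
    padded = np.pad(img, r_far_out, mode='reflect')
    out = np.zeros(img.shape, dtype=bool)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            patch = padded[y:y + 2 * r_far_out + 1, x:x + 2 * r_far_out + 1]
            first = img[y, x]
            scale = tol * max(first, 1.0)
            similar_near = abs(patch[near].mean() - first) <= scale
            different_far = abs(patch[far].mean() - first) > scale
            out[y, x] = similar_near and different_far
    return out

In practice the double loop would be replaced by convolving the image with the two ring masks; the loop form is kept here only to make the correspondence between the code and the pixel categories explicit.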
If a DNA construct contains both a gene encoding a fluorescent product and translocation control elements, and the cells are examined using a microscope, the location of the reporter may also be determined.
In methods according to the present invention, the fluorescence of cells transformed or transfected with the DNA construct may suitably be measured by optical means in, for example, a spectrophotometer, a fluorimeter, a fluorescence microscope, a cooled charge-coupled device (CCD) imager (such as a scanning imager or an area imager), a fluorescence-activated cell sorter, a confocal microscope or a scanning confocal device, where the spectral properties of the cells in culture may be determined as scans of light excitation and emission.
It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments.
Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims (16)

1. A method of scanning biological specimens, the method comprising
    processing an input image (U) to produce an output image (N(U)), the image comprising regions that represent biological structures, the method comprising segmentation of the image to identify such regions therein, the input image comprising at least one first pixel (a), neighbouring pixels (a'), and distant pixels (a''), the neighbouring pixels surrounding the first pixel, each pixel having an intensity characteristic, wherein the processing comprises categorizing pixels of the image using a segmentation algorithm having selected categorizing parameters, wherein the first pixel is identified to be within one of said regions if: the first pixel has a first intensity characteristic; the neighbouring pixels have an intensity characteristic which is similar to said first intensity characteristic; and the distant pixels have a second intensity characteristic, which is different to said first intensity characteristic.
  2. The method of claim 1, the method comprising selecting a set of neighbouring pixels to define a size characteristic of said regions being identified.
  3. The method of claim 1 or claim 2, wherein the processing of the image is performed on an area-by-area basis, and further comprises applying a transform to each area.
  4. The method of claim 3, wherein the input image (U) is processed to produce the output image (N(U)) in accordance with a transform NTH(U; σ; k; K), where K > k > 1, where ⟨U⟩ is the transform, σ is a parameter related to the size within a known range of a region, and K and k are tuning coefficients.
  5. The method of claims 1 to 4, wherein the biological structures comprise a cell nucleus.
  6. The method of claims 1 to 4, wherein the biological structures comprise a cytoplasmic structure.
  7. The method of any preceding claim, wherein the first intensity characteristic and the second intensity characteristic are represented by different intensity values, and wherein the value of the first intensity characteristic is higher than the value of the second intensity characteristic.
  8. The method of claims 1 to 6, wherein the first intensity characteristic and the second intensity characteristic are represented by different intensity values, and wherein the value of the second intensity characteristic is higher than the value of the first intensity characteristic.
  9. The method of any preceding claim, further comprising the step of altering the intensity characteristics of one or more pixels according to the identification of the first pixel.
  10. The method of any preceding claim, further comprising the step of processing the input image (U) to produce an output image (N(U)) at least twice, wherein at least one categorizing parameter is varied in value before each step of processing, such that a plurality of output images (N(U)X) is generated.
  11. The method of claim 10, further comprising the step of producing a synthesized image from the plurality of output images (N(U)X) according to a pixel-by-pixel synthesizing process.
  12. The method of claim 10 or claim 11, wherein the categorizing parameter is related to a size within a known range, such that each of the plurality of output images (N(U)X) comprises identified regions of differing sizes within known ranges.
  13. The method of claim 12, wherein the selection of a categorizing parameter value for each step of processing the input image (U) to produce an output image (N(U)) is performed in accordance with a scaling equation.
  14. Computer software arranged to perform the method of any preceding claim.
  15. A data carrier storing the computer software of claim 14.
  16. Apparatus arranged to perform the method of any of claims 1 to 13.
GB0407121A 2004-03-30 2004-03-30 Image segmentation using local and remote pixel value comparison Withdrawn GB2412803A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0407121A GB2412803A (en) 2004-03-30 2004-03-30 Image segmentation using local and remote pixel value comparison

Publications (2)

Publication Number Publication Date
GB0407121D0 (en) 2004-05-05
GB2412803A (en) 2005-10-05

Family

ID=32247496

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0407121A Withdrawn GB2412803A (en) 2004-03-30 2004-03-30 Image segmentation using local and remote pixel value comparison

Country Status (1)

Country Link
GB (1) GB2412803A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7817841B2 (en) 2005-11-12 2010-10-19 General Electric Company Time-lapse cell cycle analysis of unstained nuclei

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0424912A2 (en) * 1989-10-27 1991-05-02 Hitachi, Ltd. Region extracting method and three-dimensional display method
US5452367A (en) * 1993-11-29 1995-09-19 Arch Development Corporation Automated method and system for the segmentation of medical images
EP0698855A1 (en) * 1989-06-26 1996-02-28 Fuji Photo Film Co., Ltd. Method and apparatus for classifying picture elements in radiation images

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)