WO2015128600A1 - Method for identifying cell features - Google Patents

Method for identifying cell features

Info

Publication number
WO2015128600A1
WO2015128600A1 · PCT/GB2015/050131 · GB2015050131W
Authority
WO
WIPO (PCT)
Prior art keywords
image data
value
function
longest
image
Prior art date
Application number
PCT/GB2015/050131
Other languages
French (fr)
Inventor
James DILKES
Kamran MOGUL
Original Assignee
Perkinelmer Improvision Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perkinelmer Improvision Limited filed Critical Perkinelmer Improvision Limited
Publication of WO2015128600A1 publication Critical patent/WO2015128600A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695Preprocessing, e.g. image segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20008Globally adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro

Definitions

  • This invention relates to identifying cell features, for example cell boundaries, in initial image data representing a sample of organic material.
  • PDE partial differential equation
  • noise/background and image segmentation techniques which are typically used to identify or locate objects and boundaries and so on in images.
  • segmentation can make use of partial differential equation techniques. Both of these steps may make use of partial differential equation based filters.
  • non-linear diffusion equations and the solving of these equations using numerical methods can give rise to powerful image processing techniques for filtering images and segmentation of images in the case of samples of organic material. In particular, such non-linear diffusion filters are known to be useful for identifying features such as cell boundaries in image data representing samples of organic material.
  • image processing techniques for filtering images and segmentation of images in the case of samples of organic material
  • non-linear diffusion filters are known to be useful for identifying features such as cell boundaries in image data representing samples of organic material.
  • non-linear partial differential equation based filters have been proposed to be used in problems such as finding cell boundaries. These include the Perona Malik (PM) filter, the slowed mean curvature flow (SMCF) and the geodesic mean curvature flow (GMCF) filter. These last two are examples of curvature driven non-linear diffusion filtering equations. These filters and their corresponding discrete forms which may be solved by numerical methods such as successive over relaxation (SOR) are described for example in "3D early embryogenesis image filtering by nonlinear partial differential equations" - Z Kriva et al, as mentioned above, in their sections 2 and 3.2.
  • SOR successive over relaxation
  • K is a scalar parameter and s is the Gaussian gradient |∇G_σ * u| of the image data u
  • This edge preserving function can be used during filtering and segmentation and has the effect of ensuring that the filter has maximum effect on those parts of the image which should be altered by the filter i.e. noise/background and a minimum effect on those parts of the image which represent features/edges.
  • the value of g(s) is calculated for a whole set of image data the resulting processed set of data can be termed a conduction image.
  • τ represents a time step defined by total time T/N where N steps are used in the filtering process.
  • A^pqr_ijk = - (τ Q̄^{n-1}_ijk m(e^pqr_ijk) g^{pqr;n-1}_ijk) / (m(V_ijk) h_pqr Q^{pqr;n-1}_ijk)
  • V_ijk represents each finite volume from the finite volume space discretisation, and m(V_ijk) is the volume of V_ijk.
  • the planar sides of V_ijk are denoted by e^pqr_ijk. The edge connecting the centre of V_ijk and the centre of its neighbour V_{i+p,j+q,k+r} has a length h_pqr.
  • ε² is a regularisation parameter, typically ε ≪ 1.
  • This set of equations may be solved using successive over relaxation (SOR) techniques as alluded to above. That is to say the eqn 4 above and its coefficients (eqn 5) represent a set of linear simultaneous equations which may be solved by the standard matrix mathematic technique of SOR.
  • SOR successive over relaxation
  • the edge preserving function g(s) is calculated using u^{n-1}, i.e. the data from the last iteration, such that g(s) needs to be calculated for each iteration.
  • the present invention is aimed at providing methods and apparatus which can make use of the above type of image processing techniques and yield useful results efficiently, in practical periods, for a range of different types of initial image data.
  • a method for identifying cell features in initial image data representing a sample of organic material comprising the steps of:
  • using at least one of the first and second nonlinear filters comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter, and the method comprises the step of optimising the scalar parameter by the steps of:
  • a first sum (y) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is greater than the mean (x);
  • cell features may be cell boundaries.
  • Successive trial values of the parameter may be double the respective previous trial value.
  • the first trial value may be chosen to be 1/2.
  • the edge preserving function g can be considered to have the form g(s;K) where s is an input to the function which is dependent on the image data and K represents said scalar parameter.
  • the edge preserving function g may be as follows: g(s) = 1 / (1 + K s²), K ≥ 0 (eqn 1)
  • the variance (standard deviation², or σ²) of the Gaussian function used in determining the Gaussian gradient may be set in dependence on the physical calibration of the initial image data, that is the size in the sample which each image data element represents.
  • the resulting set of values of the function can be termed a conduction image.
  • the edge preserving function may be chosen so that when used to operate on each image data element in a selected set of input image data, the function generates resulting output values which are within a predetermined range.
  • the range is from 0 to a maximum value.
  • the maximum value may be 1 .
  • ideally image data elements identified as features/edges by the function will have values of 0 or close to 0 whilst ideally image data elements identified as background/noise will have values of the maximum value or close to the maximum value.
  • Using the first nonlinear filter may comprise using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter.
  • the step of filtering the initial image data may comprise iterative application of the first nonlinear diffusion filter. In such a case, the step of optimising the scalar parameter may be carried out for each application of the first nonlinear diffusion filter.
  • the initial image data may be used as the input image data for a first application of the first nonlinear diffusion filter.
  • the selected set of input image data used in the step of optimising the scalar parameter, for use in the first application of the first nonlinear diffusion filter may be the initial image data.
  • the filtered image data resulting from one application of the first nonlinear diffusion filter may be used as the input image data for a next application of the first nonlinear diffusion filter.
  • the selected set of input image data used in the step of optimising the scalar parameter where this is carried out for each application of the first nonlinear diffusion filter may be the input image data which is being used for the respective application of the first nonlinear diffusion filter.
  • the method may comprise the step of determining how many iterations should be performed by monitoring the value of the optimised scalar parameter for each iteration and ceasing further iterations once it is determined that the values of the optimised parameter for successive iterations are equal to one another within a predetermined tolerance.
  • Using the second nonlinear filter may comprise using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter.
  • the step of segmenting the filtered image data may comprise iterative application of the second nonlinear diffusion filter.
  • the step of optimising the scalar parameter will be carried out only once during the step of segmenting the filtered image data. This is because the same values of output of the edge preserving function for each data element may be used in multiple iterations during segmentation.
  • the method may comprise performing a predetermined number of iterative applications of the second nonlinear diffusion filter.
  • the predetermined number of iterations may be set by a user, or factory set.
  • the method may comprise determining the number of iterative applications of the second nonlinear diffusion filter which are used in dependence of characteristics of the input or output data.
  • Both using the first nonlinear filter and using the second nonlinear filter may comprise using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter.
  • the same edge preserving function may be used in using both of the filters.
  • optimising of the scalar parameter is likely to be carried out separately since the input data for the optimisations is liable to be different.
  • At least one of the first and second nonlinear filters may comprise a curvature filter. At least one of the first and second nonlinear filters may comprise one of: a Perona-Malik filter, a slowed mean curvature flow filter and a geodesic mean curvature flow filter.
  • both filters are geodesic mean curvature flow filters.
  • one or both of the first and second nonlinear filters is treated as a discrete filter.
  • the discrete filter may be dependent on a time step size, which is defined by the total time for filtering divided by the number of filtering steps.
  • the time step size may be determined based on the physical calibration of the image, that is the size in the sample which each image data element represents.
  • Successive Over Relaxation (SOR) techniques may be used in application of the first and/or second nonlinear filter.
  • the method may comprise the step of splitting the filtered image data into sections prior to the segmentation step and performing the step of segmentation on each section.
  • the sections may be selected with the aim of each section including a complete cell, and preferably only a single complete cell. Splitting the filtered image into sections tends to reduce the processing time required to complete segmentation.
  • the data may be split into sections on the basis of a point identified within each cell and a maximum cell size parameter.
  • each section may comprise image data within a region defined by said point and a boundary spaced from said point by a distance which is dependent on the maximum cell size parameter.
  • a section might, for example, be a cube, cuboid or a sphere.
  • the section might, for example, be a square, rectangle or circle.
  • the maximum cell size parameter may be received from user input or determined from the data.
  • the point identified within each cell may be received from user input or determined from internal feature image data.
  • the internal feature image data may be provided separately from the main image data.
  • the internal feature image data may be representative of the nuclei of cells, whereas the main image data is representative of cell walls. As is well understood, such separation of image data may be achieved using different stains on the sample.
  • the main image data may be received on one channel of an inspecting microscope and the internal feature image data may be received on another channel of the inspecting microscope.
  • segmenting the filtered image data comprises a step of cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning step comprises the steps of:
  • a method for identifying cell features in image data representing a sample of organic material comprising the steps of:
  • segmenting the filtered image data comprises a step of cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning step comprises the steps of:
  • cell features may be cell boundaries.
  • segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
  • a method for identifying cell boundaries in image data representing a sample of organic material comprising the steps of:
  • segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
  • This technique may be used for 2D image data and 3D image data.
  • the method may comprise determining the selected point and the selected plane by selecting an initial seed point in a cell and scanning in two directions in a first dimension away from the initial seed point the processed image data set to identify respective cell edges, repeating the scanning process for other initial seed points and based on a longest determined spacing between identified cell edges identifying a first dimension axis for the seed surface, then choosing the selected point to be a midpoint of the first dimension axis of the cell and choosing the selected plane to be that plane which is orthogonal to the first dimension axis and contains the selected point, and the step of generating one of an ellipse and an ellipsoid to represent the seed surface comprises generating an ellipsoid using the first dimension axis, the longest major axis and the longest minor axis.
  • the method may comprise the further step of modifying the seed surface using the conduction image
  • Modifying the seed surface may comprise inheriting edge shapes from the conduction image.
  • the seed surface may be multiplied by the conduction image. Where feature areas in the conduction image have the value of zero, this sets to zero portions of the seed surface that extend into feature areas and in effect cuts away these portions of the seed surface.
  • Modifying the seed surface may comprise filling any holes in the seed surface.
  • Modifying the seed surface may comprise discarding any seed surface outside of a respective main seed surface.
  • the step of segmenting the filtered image data may comprise using the seed surface to fill any holes in the conduction image.
  • segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
  • determining a location as a centre of a cell in a first dimension by selecting an initial seed point in a cell and scanning in two directions in the first dimension away from the initial seed point the processed image data set to identify respective cell edges, repeating the scanning process for other initial seed points and selecting a location as a centre of the cell based on the results of the scanning processes and calculating a first dimension axis for the seed surface; having identified a centre of a cell in said first dimension, scanning, in intervals through 360 degrees around said centre in a plane orthogonal to the first dimension, the processed image data set to identify respective cell edges and identify a longest major axis for the seed surface;
  • a computer program comprising code portions which when loaded and run on a computer cause the computer to carry out the steps of any one of the methods defined above.
  • image processing apparatus comprising a processing unit arranged under the control of software to carry out the steps of any one of the methods defined above.
  • image acquiring and processing apparatus comprising a microscope for acquiring image data of a sample, and a processing unit arranged under the control of software to carry out the steps of any one of the methods defined above in respect of the acquired image data.
  • a computer arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
  • using at least one of the first and second nonlinear filters comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter, and the method comprises the step of optimising the scalar parameter by the steps of:
  • a first sum (y) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is greater than the mean (x);
  • image processing apparatus comprising a processing unit arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
  • using at least one of the first and second nonlinear filters comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter, and the method comprises the step of optimising the scalar parameter by the steps of:
  • a first sum (y) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is greater than the mean (x);
  • image processing apparatus arranged for identifying cell features in initial image data representing a sample of organic material, the apparatus comprising :
  • a filtering module for filtering the initial image data using a first nonlinear diffusion filter
  • a segmentation module for segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features; wherein using at least one of the first and second nonlinear filters comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter, and the apparatus comprises an optimising module for optimising the scalar parameter on the basis of:
  • a first sum (y) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is greater than the mean (x);
  • a computer arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
  • segmenting the filtered image data comprises a step of cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning step comprises the steps of:
  • image processing apparatus comprising a processing unit arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
  • segmenting the filtered image data comprises a step of cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning step comprises the steps of:
  • image processing apparatus arranged for identifying cell features in initial image data representing a sample of organic material, the apparatus comprising:
  • a filtering module for filtering the image data using a first nonlinear diffusion filter
  • a segmentation module for segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features
  • segmenting the filtered image data includes cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning comprises: using an edge preserving function which is operable on each image data element in the filtered image and generates resulting output values which are within a predetermined range;
  • a computer arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
  • segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
  • image processing apparatus comprising a processing unit arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
  • segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
  • image processing apparatus arranged for identifying cell features in initial image data representing a sample of organic material, the apparatus comprising:
  • a filtering module for filtering the image data using a first nonlinear diffusion filter
  • a segmentation module for segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features
  • segmenting the filtered image data comprises, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which calculating a seed surface comprises: using an edge preserving function which is operable on each image data element in the filtered image data and generates resulting output values which are within a predetermined range;
  • the computer or apparatus defined in any one of the aspects above may comprise a display for displaying processed images to a user.
  • Figure 1 schematically shows an image acquiring and processing apparatus
  • Figure 2 is a flow chart showing a process for identifying cell features in initial image data
  • FIG. 3 is a flow chart showing more details of a filtering step of the process shown in Figure 2;
  • Figure 4 is a flow chart showing more details of a segmentation step of the process shown in Figure 2;
  • Figure 5 is a flow chart showing a process for optimising a parameter used in the filtering step and/or segmentation step of the process shown in Figure 2;
  • Figure 6 is a flow chart showing more detail of a background subtraction step of the segmentation process shown in Figure 4;
  • Figure 7 is a flow chart showing more detail of a step of creating a seed surface as part of the segmentation process shown in Figure 4;
  • Figure 8 shows an example initial image of a sample of organic material
  • Figure 9 shows the results after filtering the image shown in Figure 8, splitting it into a sub image and performing a background subtraction in accordance with an embodiment of the present invention
  • Figure 10 shows a conduction image corresponding to the image shown in Figure 9;
  • Figure 11 shows the conduction image of Figure 10 and schematically illustrates initial stages of calculating a seed surface for the cell shown in the conduction image;
  • Figure 12 schematically shows an initial seed surface developed as part of an embodiment of the present invention
  • Figure 13 shows a modified seed surface, based on that shown in Figure 12, but after having been multiplied by the conduction image as shown in Figure 10;
  • Figure 14 shows the seed surface shown in Figure 13 after segmentation carried out in accordance with an embodiment of the present invention.
  • Figure 15 shows the initial image as shown in Figure 8 overlaid with a segmented cell edge determined by a process embodying the present invention and corresponding to the segmented surface shown in Figure 14 - this being an example output image at the end of the image processing steps to which the present application is directed.
  • Figure 1 schematically shows image acquiring and processing apparatus which can be used for acquiring image data, processing that image data and displaying the resulting processed image data to a user. It is possible that all of these steps may be performed with one set of apparatus as depicted in Figure 1 . However it will equally be appreciated that it would be possible to acquire the image data separately from any processing activity and this again could be separate from display of the resulting data.
  • the present invention relates to image acquiring and processing apparatus as shown in Figure 1 as well as processing apparatus per se and methods for acquiring, processing and displaying image data, processing image data, and processing and displaying image data.
  • the image acquiring and processing apparatus shown in Figure 1 comprises a microscope 1 for acquiring image data from a sample 2.
  • the microscope 1 may be arranged for acquiring image data on two different channels.
  • the microscope 1 may be a confocal microscope and may be arranged for obtaining membrane stain image data on a first channel A and nucleus stain image data on a second channel B.
  • the microscope 1 is connected to an image processing unit 3 which in turn in this embodiment is connected to a display 4 for displaying images to a user.
  • the processing unit 3 comprises memory, a processor and a storage device (not shown).
  • the image acquiring and processing apparatus may display to the user initial images based on initial image data gathered by the microscope 1 and/or processed images based on processed image data calculated by the image processing unit 3 from the initial image data.
  • the present apparatus and processes are directed at identifying cell features in initial image data and providing output image data which indicates identified cell features.
  • the apparatus and methods are directed at identifying cell boundaries in initial image data and outputting processed image data which identifies the cell boundaries. This can be useful for the user in terms of simply having the cell boundaries identified and indicated or useful for further calculations or analysis.
  • Figures 8-15 show example images which are relevant to the present techniques.
  • Figure 8 shows an example of an initial image which can be seen to be made up of two cells bordering one another
  • Figure 15 shows a final output image in which a cell boundary has been identified and laid over the top of the initial image as a dark solid line.
  • the current apparatus and methods are provided for automatically drawing such cell boundaries onto initial image data.
  • the sample may be a 3D sample extending a number of cells in each direction.
  • the present methods and apparatus can be used to draw 3D (or 2D) cell boundaries for all of the cells (within reason) in a bulk sample.
  • initial image data is made up of a number of pixels (2D) or a number of voxels (3D).
  • the term image data element is used as a more general term which may represent a pixel or a voxel, or perhaps a group of pixels or voxels, in at least some instances. Further, at some places in the description and/or drawings the word pixel is used for convenience where it is clear that the image data element might in fact be a voxel or a pixel.
  • each set of image data can be considered to have a physical calibration which represents the size of a pixel having gone through the optical system.
  • one pixel may be 6.7 μm in width and 6.7 μm in height. If the optics lead to a 100x magnification this means that the size in the sample represented by one pixel may be 0.067 μm x 0.067 μm.
  • a mechanical drive will be used for moving the focus in the third dimension and the step size in this direction might be 0.1 μm, leading to an overall physical calibration of 0.067 μm x 0.067 μm x 0.1 μm.
  • Figure 2 shows, at a general level, the present method for identifying cell features in initial image data and outputting image data which indicates identified cell features.
  • the apparatus of Figure 1 can be programmed to carry out this method.
  • the main inputs to this process in the present embodiment are initial image data, particularly membrane stain image data 5 as may be acquired on channel A by the microscope 1, and data concerning points inside cells 6 which may be provided by the user or, for example, be indicative of nuclei inside the cells and thus may, for example, be obtained on channel B of the microscope 1.
  • the initial image data is filtered in step 201 using a first non-linear diffusion filter.
  • this embodiment uses the geodesic mean curvature flow filter, described above in the introduction.
  • step 202 the filtered image data is split into sub images using the points inside cells data as well as a maximum cell size parameter.
  • the maximum cell size parameter may be determined from the data itself or may be provided by a user. The idea here is to split the filtered image into sub images so that each sub image contains a complete cell but preferably no more than one complete cell. This is to speed up the later stages in the procedure on the basis that the later stages of the image processing procedure need only operate on a smaller image and hopefully only one cell in any one sub image.
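  • By way of an illustrative sketch of this splitting step, a minimal implementation is given below; the helper name, the index ordering and the treatment of the maximum cell size as a length in image data elements are assumptions for illustration rather than details taken from this description.

```python
import numpy as np

def split_into_sub_images(filtered, points_inside_cells, max_cell_size):
    """Crop one sub image per seed point, each large enough to hold a whole cell.

    filtered            : filtered image data as a 2D or 3D numpy array
    points_inside_cells : iterable of index tuples, one point per cell
    max_cell_size       : maximum cell extent, in image data elements (assumed)
    """
    half = int(np.ceil(max_cell_size / 2))
    sub_images = []
    for point in points_inside_cells:
        lo = [max(p - half, 0) for p in point]
        hi = [min(p + half + 1, s) for p, s in zip(point, filtered.shape)]
        region = tuple(slice(a, b) for a, b in zip(lo, hi))
        # keep the crop region so the segmented result can be placed back later
        sub_images.append((region, filtered[region]))
    return sub_images
```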
  • step 203 segmentation of each of the sub images is carried out. This processing may be carried out in parallel with one CPU operating on each sub image. Again when performing the segmentation in step 203 the geodesic mean curvature flow filter as described above in the introduction is used.
  • step 204 the segmented image is output.
  • this consists of providing output image data which comprises both the original image and overlaid on that the identified cell features, in particular the identified cell boundaries.
  • the initial data provided as an input to the filter step 201 might be data corresponding to the image shown in Figure 8 and the final output data might be data corresponding to the image shown in Figure 15. More details of the steps of the process outlined in Figure 2 are given below.
  • Figure 3 shows the filtering process of step 201 of Figure 2 in more detail.
  • the filter processing consists of an application of the geodesic mean curvature flow filter defined above.
  • for the filter processing it is necessary to determine the values to assign for various parameters which form part of the filter. Specifically these are the step size τ, the value of the scalar parameter K in the edge preserving function, and the variance σ² which is to be used in the Gaussian filter.
  • the present embodiment includes an automatic determination of what values should be used for these parameters. This contrasts with a research or academic application of the geodesic mean curvature flow filter where these parameters may be chosen based on specialist knowledge, human intervention or extensive trial and error for a particular sample. In the present embodiment, a determination is made of these parameters in such a way that suitable parameters are chosen based upon characteristics of the sample which are determined by the system as part of the image processing process.
  • the values of m(V_ijk) and m(e^pqr_ijk) used in the coefficients of the filter may be set automatically in dependence on the physical calibration of the system - for example set to directly use the physical calibration values.
  • step 301 the step size τ is calculated in dependence on the physical calibration of the image as defined above.
  • the step size τ is calculated using the 2D XY calibration of a pixel multiplied by a fixed constant, i.e. (constant) x (pixelCalibrationX) x (pixelCalibrationY).
  • the fixed constant can be set to 1024, but alternative good values can also be determined empirically.
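  • As a small illustration, both calibration-derived parameters can be computed as below; the constant 1024 is the value given above, the σ² formula is the one given later in this description, and the function and argument names are purely illustrative.

```python
def calibration_parameters(pixel_calibration_x, pixel_calibration_y, constant=1024):
    """Derive the filter time step and Gaussian variance from the physical calibration.

    pixel_calibration_x/y : physical size in the sample represented by one pixel
    """
    # step size tau = (constant) x (pixelCalibrationX) x (pixelCalibrationY)
    tau = constant * pixel_calibration_x * pixel_calibration_y
    # Gaussian variance sigma^2 = 2 x (pixelCalibrationX) x (pixelCalibrationY)
    sigma2 = 2.0 * pixel_calibration_x * pixel_calibration_y
    return tau, sigma2
```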
  • step 302 the image is padded by mirroring at the boundary and enforcing the appropriate boundary conditions.
  • step 303 an optimum value for K (as found in the edge preserving function defined in equation 2 above) is determined. This optimisation process will be explained in more detail further below.
  • Successive Over Relaxation is used to apply the geodesic mean curvature flow filter to the input data for that iteration.
  • for the first iteration, the input data will be the initial image data as acquired by the microscope.
  • for subsequent iterations, the input data will be the output of the previous iteration.
  • a determination is made as to whether the optimum value of K has stabilised. That is to say it is determined whether the optimum value for K for the current image is equal (within a predetermined tolerance) to the respective optimum value of K for the previous iteration.
  • if it is determined that K has stabilised, the filtering process is complete and at step 306 the filtered image data is returned for subsequent processing. That is to say it is fed into step 202 as shown in Figure 2.
  • if K has not stabilised, then in step 303 the optimum value of K for the current image data is calculated, and in step 304 Successive Over Relaxation is again used to find the filtered result for the next iteration, and another check is made as to whether K has now stabilised in step 305. This process continues until K has stabilised.
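  • The iterative structure just described may be sketched as follows; optimise_k and gmcf_sor_step stand in for the K optimisation of Figure 5 and the Successive Over Relaxation application of the filter, and the mirror padding call, stopping tolerance and helper names are illustrative assumptions only.

```python
import numpy as np

def filter_image(initial, tau, sigma, optimise_k, gmcf_sor_step, tolerance=1e-3):
    """Apply the first nonlinear diffusion filter iteratively until K stabilises."""
    # step 302: pad the image by mirroring at the boundary
    image = np.pad(initial, pad_width=1, mode='symmetric')
    k_previous = None
    while True:
        # step 303: optimise the scalar parameter K for the current input data
        k = optimise_k(image, sigma)
        # step 304: one application of the filter via Successive Over Relaxation
        image = gmcf_sor_step(image, k, tau, sigma)
        # step 305: stop once the optimum K matches the previous iteration's value
        # to within a predetermined tolerance
        if k_previous is not None and abs(k - k_previous) <= tolerance * abs(k_previous):
            break
        k_previous = k
    return image  # step 306: filtered image data returned for subsequent processing
```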
  • the variance σ² of the Gaussian filter used in the processes is set equal to two times the square of the physical calibration of the image as defined above.
  • σ² is set equal to 2 x (pixelCalibrationX) x (pixelCalibrationY).
  • Figure 4 shows the segmentation process of step 203 of Figure 2 in more detail. The segmentation process has some similarities with the filtering process described immediately above. Note, however, that here the image being referred to is the respective filtered sub-image corresponding to (hopefully) one cell.
  • step 401 the step size τ is calculated using the physical calibration of the image, as is the case for filtering. Again, in step 402 the image is padded by mirroring at the boundary, enforcing boundary conditions. In step 403 the optimum value of K is calculated for the respective sub-image. In step 404 background is subtracted from the sub image to remove poor detail and noise.
  • step 405 a seed surface is created to be fed into the geodesic mean curvature flow filter in due course.
  • step 406 a Gauss filtered copy of the sub-image is obtained and in step 407 the conduction coefficients are calculated with the optimised K. In effect this means determining the value of the function g(s) in respect of the initial filtered sub-image data.
  • the value of g(s) i.e. the conduction image, is the same for all iterations of the geodesic mean curvature flow filter.
  • this calculation of the conduction coefficients (step 407) needs to be carried out only once for each sub-image irrespective of the number of iterations later.
  • step 408 the seed surface is used to fill in any holes in the conduction image.
  • step 409 Successive Over Relaxation is used to operate on the seed surface to apply the geodesic mean curvature flow filter to segment the image.
  • this iterative process is carried out for a predetermined number of iterations and in step 410 it is determined whether the number of iterations has been completed. If not, the process returns to step 409 for application of another iteration.
  • the segmented sub-image may be returned in step 411 (that is fed into step 204 of the process of Figure 2) where the complete final image may be output to the user.
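  • The overall segmentation loop of Figure 4 may be sketched as follows; the helper names stand in for the K optimisation, conduction image calculation, hole filling and SOR step described above, and are assumptions for illustration only.

```python
def segment_sub_image(sub_image, seed_surface, tau, sigma,
                      optimise_k, conduction_image, fill_holes_with_seed,
                      gmcf_segmentation_sor_step, n_iterations=10):
    """Segment one filtered sub-image using the second nonlinear diffusion filter."""
    # steps 403, 406 and 407: optimise K and compute the conduction image once;
    # the same conduction values are reused by every iteration of the filter
    k = optimise_k(sub_image, sigma)
    g = conduction_image(sub_image, k, sigma)
    # step 408: use the seed surface to fill any holes in the conduction image
    g = fill_holes_with_seed(g, seed_surface)
    # steps 409 and 410: a predetermined number of SOR applications of the filter
    surface = seed_surface
    for _ in range(n_iterations):
        surface = gmcf_segmentation_sor_step(surface, g, tau)
    return surface  # step 411: segmented sub-image returned
```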
  • both the filtering process in step 201 and the segmentation process in step 203 involve calculating an optimum value of K for use in the edge preserving function g(s), defined above in equation (1 ).
  • the input data into the optimisation process will be the initial data in the first iteration and then the output of the previous iteration in each subsequent iteration.
  • for segmentation, the input will be the filtered data which is reached at the end of the filtering process 201, as split into sub images in step 202.
  • the optimum value of K needs to be calculated for each sub image. The optimisation process will now be described in more detail with reference to Figure 5.
  • step 501 a Gauss filtered copy of the input data is obtained. It will be noted that this constitutes the input data
  • step 502 the conduction image for the current K is calculated. That is to say, the value of the function g(s) for each image data element (voxel/pixel) in the data is calculated.
  • step 503 the mean (x) of the values of the conduction image data is calculated.
  • step 504 the sum (y) of the conduction values which are greater than the conduction image mean (x) is calculated.
  • These elements in the image represent part of the image that should be changed by the filter i.e. noise. Due to the nature of the function g(s) the value y should decrease as K increases.
  • step 505 the sum (z) of conduction values which are smaller than the mean (x) is calculated.
  • the corresponding image data elements represent things in the image which we want to keep (details/edges) and the value z should increase as K increases.
  • a metric (m) is calculated which is equal to z - y (that is, the sum of the conduction values which are smaller than the mean minus the sum of the conduction values which are greater than the mean).
  • the value of m will vary with K and will have a minimum. It has been determined by the applicants that using the value of K which gives this minimum value of m is an optimum value of K to choose in the image processing processes. Thus in step 507 it is determined whether the minimum m value has been found. If so, this completes the optimisation process and the value of K that generated this minimum value of m is output in step 508. As will be appreciated, this optimum value of K can then be used in the filtering step concerned or the segmentation processes as appropriate.
  • if in step 507 it is determined that the minimum value of m has not been reached, then in step 509 the value of K is doubled and the process returns to step 502 where the conduction image for the new value of K is calculated. The process is repeated to find the corresponding metric m for that value of K until the minimum value of m is identified.
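  • A compact sketch of this optimisation is given below, assuming the edge preserving function g(s) = 1/(1 + K s²) with s the Gaussian gradient magnitude; the use of scipy's gaussian_gradient_magnitude (which takes the standard deviation rather than the variance discussed elsewhere in this description) is an implementation choice, not something specified here.

```python
from scipy.ndimage import gaussian_gradient_magnitude

def optimise_k(image, sigma, k_initial=0.5, max_doublings=60):
    """Find the trial K that minimises the metric m = z - y described above."""
    # step 501: Gaussian gradient magnitude s = |grad(G_sigma * u)| of the input
    s = gaussian_gradient_magnitude(image.astype(float), sigma=sigma)
    best_k, best_m, k = None, None, k_initial
    for _ in range(max_doublings):
        g = 1.0 / (1.0 + k * s ** 2)   # step 502: conduction image for this K
        mean = g.mean()                # step 503: mean conduction value (x)
        y = g[g > mean].sum()          # step 504: sum over noise/background elements
        z = g[g < mean].sum()          # step 505: sum over detail/edge elements
        m = z - y                      # step 506: the metric m
        if best_m is None or m < best_m:
            best_k, best_m = k, m      # still descending towards the minimum
        else:
            break                      # step 507: the minimum of m has been passed
        k *= 2.0                       # step 509: double K and recalculate
    return best_k                      # step 508: output the optimum K
```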
  • the range of values that g(s) can take is 0 to 1 where 0 will be the value where a very strong edge/feature is found and 1 will be the value where the data in that region is featureless.
  • This function then operates as an edge preserving function since the remainder of the filter will have no effect where the value of g is 0 and will have greatest effect where the value of g is 1 .
  • different forms of the function g could be used where they have a similar property. That is a property where the value of the function is small and tending towards 0 for prominent features in the image data and tending towards some maximum value where the image data represents noise or background.
  • the segmentation process comprises a step 404 of subtracting background from the image to remove poor detail and noise. This process is described below in more detail with reference to Figure 6. This process starts in step 601 with calculating the conduction image with the optimum value of K determined in step 403. In sub-process 602 each image data element
  • step 603 a value for the feature mean (z/z count) is calculated.
  • step 604 any pixel in the real filtered image data, as can be termed the source, which has a value which is smaller than the feature average (z/z count) is set to zero. Thus only pixels in the real filtered image data which are greater than the average are preserved. This has the effect of suppressing the background.
  • step 605 the cleaned image is returned.
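  • A sketch of this cleaning step follows; because sub-process 602 is only summarised above, it is assumed here that feature pixels are those with small conduction values (close to 0) and that the feature mean is taken over the source values at those pixels - the 0.5 threshold and the helper name are illustrative assumptions.

```python
import numpy as np

def subtract_background(source, conduction, feature_threshold=0.5):
    """Suppress background in a filtered sub-image (Figure 6 cleaning step).

    source     : the real filtered image data for the sub-image
    conduction : conduction image g(s) for the same data, values in the range 0 to 1
    """
    # sub-process 602 / step 603: mean of the source values at feature pixels
    # (assumed to be pixels whose conduction value is close to 0)
    feature_mask = conduction < feature_threshold
    feature_values = source[feature_mask]
    feature_mean = feature_values.mean() if feature_values.size else 0.0
    # step 604: set to zero any source pixel smaller than the feature mean,
    # preserving only pixels greater than the average and suppressing background
    cleaned = np.where(source < feature_mean, 0.0, source)
    return cleaned  # step 605: the cleaned image is returned
```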
  • a seed surface is created for use as an input into the main segmentation step using the geodesic mean curvature flow filter in step 409.
  • the process of creating the seed surface is described with more detail below, with reference to Figure 7.
  • step 701 the conduction image with the optimal value of K is calculated, or read out of storage if previously calculated, as is the case in the present embodiment.
  • step 702 pixels/voxels in the conduction image with a value of less than half are identified as edge features.
  • the range of possible values for each pixel/voxel in the conduction image is 0 to 1 due to the behaviour of the function g.
  • the image data is 3D image data.
  • it might be necessary or appropriate to generate a 2D seed surface. If that is the case, then the next step, described in 703, can be omitted.
  • a centre of the cell being considered in the appropriate sub image is located in z by scanning from a seed point in both directions looking for front and back edges.
  • the initial seed point may be a seed point indicated by a user or indicated by image data acquired on the second channel, as mentioned above.
  • the seed point for example, might be the location of the nucleus in the cell.
  • the initial seed point may be altered in step 703. The objective is to find the maximum length of axis in the z direction between identified front and back edges. Once this axis is located then its mid-point is considered the centre of the cell in the z direction.
  • step 704 the data is scanned in intervals through 360° around the z axis from the z centre to locate a longest major axis in the x-y plane. Again the position of the z axis around which the 360° scanning takes place can be moved in the x-y plane until the best orientation and length of a longest major axis can be found.
  • step 705 a longest minor axis is identified. This is carried out by scanning orthogonal to the z axis and major axis at different points along the major axis again looking for the longest spacing between conduction image values identified as feature edges in step 702.
  • step 706 the orientation and lengths of the z axis, longest major axis, and minor axis are used to draw an ellipsoid representing an initial seed surface
  • step 707 the ellipsoid is multiplied by the conduction image to inherit edge shapes.
  • features in the conduction image are zero and thus this multiplication serves to cut out portions of the seed surface which extend into and beyond the edge features.
  • step 709 the seed surface is returned to the main process.
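  • A condensed sketch of the seed surface construction is given below; for brevity it takes the cell centre and the three semi-axis lengths found by the scanning steps as given and keeps the ellipsoid axis-aligned, which is a simplification of the oblique-axis search described above.

```python
import numpy as np

def seed_surface_from_axes(conduction, centre, semi_axes):
    """Build an ellipsoidal seed surface and cut it back using the conduction image.

    conduction : 3D conduction image with values in the range 0 to 1
    centre     : (z, y, x) centre of the cell found by the scanning steps
    semi_axes  : (az, ay, ax) half-lengths of the z axis, major axis and minor axis
    """
    zz, yy, xx = np.indices(conduction.shape, dtype=float)
    cz, cy, cx = centre
    az, ay, ax = semi_axes
    # draw an ellipsoid from the three axis lengths (initial seed surface)
    inside = (((zz - cz) / az) ** 2 + ((yy - cy) / ay) ** 2
              + ((xx - cx) / ax) ** 2) <= 1.0
    ellipsoid = inside.astype(float)
    # multiply by the conduction image so the surface inherits edge shapes;
    # feature voxels are zero, so parts crossing edge features are cut away
    return ellipsoid * conduction
```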
  • the filtering and segmentation processes as well as the sub-processes of those processes described above may all be used together to particularly good effect.
  • only some of the processes or sub-processes may be used and useful benefits still realised.
  • generation of the seed surface might be carried out without the particular filtering described above and/or without the optimisation of K or vice versa and so on.
  • FIGS 8 to 15 are images showing an example of the application of the above described processes/use of the above described apparatus.
  • Figure 8 shows an initial image which shows two cells joined together at a boundary. Although not clearly visible in the image, this is a two channel image with a membrane stain image showing up most clearly as the light colour boundary of the cells and a dark nucleus stain image which can just about be made out in the centre of the cells.
  • Figure 9 shows the image of Figure 8 after the effect of filtering step 201 of Figure 2 and the splitting into sub-images step 202 of Figure 2 as well as after the background subtraction/cleaning process of step 404 in the segmentation process.
  • Figure 10 shows the same portion of the image shown in Figure 9, but as the corresponding conduction image.
  • FIG. 11 schematically shows part of the scanning processes carried out in the generation of a seed surface in steps 704 and 705 of the process shown in Figure 7.
  • the centre in z has already been determined so we are just looking at the x, y plane and the diamond shows the identified centre in the x, y plane.
  • the lines represent the 360° scanning around the centre diamond to identify the longest major axis and, orthogonal to this, scans at different points along this longest major axis to identify a longest minor axis.
  • Figure 12 shows an ellipsoid calculated using the longest z axis, the longest major axis and the longest minor axis calculated in accordance with steps 704, 705 and 706.
  • Figure 13 shows the result of step 707 where the ellipsoid seed surface shown in Figure 12 has been multiplied by the conduction image. This has a cookie cutter type effect on the surface and cuts out spurious parts of the surface which are not needed.
  • the image shown in Figure 13 also represents the seed surface after any holes in the seed surface have been filled and any seed surface outside the main surface has been discarded, although this is not immediately apparent in the image itself.
  • Figure 14 represents the result of the segmentation process as applied to the seed surface. That is to say, after the seed surface has been operated on by the geodesic mean curvature flow filter in step 409 for the specified number of iterations.
  • Figure 15 shows the final output image which is the original image as shown in Figure 8 and added to this, the segmented image edge as found by the filtering and segmentation process.
  • the cell edge c is shown as a dark solid line overlaid on the original image.
  • the present embodiment of the invention makes use of the geodesic mean curvature flow filter for the filtering process and the segmentation process.
  • similar results could be achieved using other non-linear diffusion filters such as those mentioned in the introduction.
  • the optimisation process described above in relation to Figure 5 could be used in such cases as could the process for determining a seed surface and cleaning the image and so on.
  • the invention may be embodied in a method of processing image data, a computer program comprising code portions arranged to carry out the method of image processing, a computer programmed with such a program or a physical machine readable data carrier carrying such a computer program.
  • an image acquiring and processing apparatus comprising both a microscope or a similar piece of equipment for acquiring image data and a processing unit for processing that image data and also optionally an output device such as a display for displaying the results.
  • the physical machine readable data carrier may, for example, comprise a hard disk, a CD ROM, a DVD, a flash memory drive, or so on.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

A method for identifying cell features, such as cell boundaries, in initial image data representing a sample of organic material. The method comprises the steps of: filtering the initial image data using a first nonlinear diffusion filter; segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features; wherein using at least one of the first and second nonlinear filters comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter. The method may comprise the further step of optimising the scalar parameter. The method may also include particular techniques for cleaning and generating a seed surface for further processing.

Description

METHOD FOR IDENTIFYING CELL FEATURES
This invention relates to identifying cell features, for example cell boundaries, in initial image data representing a sample of organic material.
In particular it relates to methods and apparatus for image processing of image data using partial differential equation (PDE) methods.
In image processing techniques there are often separate processes for filtering data to, for example, enhance edges and features whilst reducing
noise/background and image segmentation techniques which are typically used to identify or locate objects and boundaries and so on in images.
Image processing techniques both for filtering in the above sense and
segmentation can make use of partial differential equation techniques. Both of these steps may make use of partial differential equation based filters.
It has been found that non-linear diffusion equations and the solving of these equations using numerical methods can give rise to powerful image processing techniques for filtering images and segmentation of images in the case of samples of organic material. In particular, such non-linear diffusion filters are known to be useful for identifying features such as cell boundaries in image data representing samples of organic material. Such techniques are, for example, described in the following papers which are incorporated herein by reference:
"4D EMBRYOGENESIS IMAGE ANALYSIS USING PDE METHODS OF IMAGE PROCESSING", Paul Bourgine, Robert Cunderlik, Oiga Drblikova-Stasova, Karol Mikula, Nadine Peyrieras, Mariana Remesikova, Barbara Rizzi, and Aiessandro Sarti, KYBERNET1KA— VOLUME 46 (2010), NUMBER 2, PAGES 226-259 "Segmentation of 3D cell membrane images by PDE methods and its
applications", K. Mikula a, N. Peyrieras b, M. Remesikova a*, O. Stasova3, Computers in Biology and Medicine 41 (2011 ) 326-339
"Scale-space and edge detection using anisotropic diffusion", Pietro Perona and Jitendra Malik "3-D Zebrafish Embryo Image Filtering by Nonlinear Partial Differential Equations", Barbara Rizzi, Matteo Campana, Cecilia Zanella, Camilo Melani, Robert
Cunderlik, Zuzana Kriva, Paul Bourgine, Karol Mikula, Nadine Peyrieras and Alessandro Sarti,
"Subjective Surfaces: A Geometric Model for Boundary Completion", A, SARTI, R. MALLAD1 AND J.A. SETH!AN, International Journal of Computer Vision 46(3), 201 -221 , 2002
"Segmentation and analysis of 3D zebrafish cell image data", Karol Mikula, Mariana Remesikova and Olga Stasova, Nadine Peyrieras
"3D early embryogenesis image filtering by nonlinear partial differential equations", Z, Kriva a, K. Mikula a, N. Peyrieras b, B. Rizzi °, A. Sarti c'*, O, Stasov a
A number of specific non-linear partial differential equation based filters have been proposed to be used in problems such as finding cell boundaries. These include the Perona Malik (PM) filter, the slowed mean curvature flow (SMCF) and the geodesic mean curvature flow (GMCF) filter. These last two are examples of curvature driven non-linear diffusion filtering equations. These filters and their corresponding discrete forms which may be solved by numerical methods such as successive over relaxation (SOR) are described for example in "3D early embryogenesis image filtering by nonlinear partial differential equations" - Z Kriva et al, as mentioned above, in their sections 2 and 3.2.
Each of these filters includes an edge preserving function g defined as follows:

g(s) = 1 / (1 + K s²),   K ≥ 0   (eqn 1)

Where K is a scalar parameter and s is the Gaussian gradient |∇G_σ * u| of the image data u. This edge preserving function can be used during filtering and segmentation and has the effect of ensuring that the filter has maximum effect on those parts of the image which should be altered by the filter, i.e. noise/background, and a minimum effect on those parts of the image which represent features/edges. Where the value of g(s) is calculated for a whole set of image data, the resulting processed set of data can be termed a conduction image. The geodesic mean curvature flow equation can be written as follows,

u_t − |∇u| ∇·( g(|∇G_σ * u|) ∇u / |∇u| ) = 0,   (eqn 2)

where u is the input image data and g is the edge preserving function mentioned above.
In order to solve this equation, numerical methods are used which first involves finding a discrete form of the geodesic mean curvature flow equation given above. This is done first by a semi-implicit time discretisation which yields the equation below
(1 / |∇u^{n-1}|) (u^n − u^{n-1}) / τ = ∇·( g(|∇G_σ * u^{n-1}|) ∇u^n / |∇u^{n-1}| )   (eqn 3)
Where τ represents a time step defined by total time T/N where N steps are used in the filtering process.
This is followed by a finite volume space discretisation which yields the following equation
Σ_{|p|+|q|+|r| ≤ 1} A^pqr_ijk u^n_{i+p,j+q,k+r} = u^{n-1}_ijk,   i = 1, ..., N1, j = 1, ..., N2, k = 1, ..., N3   (eqn 4)

where the discrete geodesic mean curvature flow filter is given by solving this equation for n = 1 to N with coefficients defined as follows

A^pqr_ijk = - (τ Q̄^{n-1}_ijk m(e^pqr_ijk) g^{pqr;n-1}_ijk) / (m(V_ijk) h_pqr Q^{pqr;n-1}_ijk),   |p| + |q| + |r| = 1, x^pqr_ijk ∈ Ω,

A^000_ijk = 1 + Σ_{|p|+|q|+|r| = 1} (τ Q̄^{n-1}_ijk m(e^pqr_ijk) g^{pqr;n-1}_ijk) / (m(V_ijk) h_pqr Q^{pqr;n-1}_ijk),

A^pqr_ijk = 0 otherwise.   (eqn 5)
In this, V_ijk represents each finite volume from the finite volume space discretisation and m(V_ijk) the volume of V_ijk. The planar sides of V_ijk are denoted by e^pqr_ijk. The edge connecting the centre of V_ijk and the centre of its neighbour V_{i+p,j+q,k+r} has a length h_pqr, whilst

Q^{pqr;n-1}_ijk = sqrt(ε² + |∇^pqr u^{n-1}_ijk|²)

is the regularised norm of the gradient of u^{n-1} on the face e^pqr_ijk, Q̄^{n-1}_ijk is a corresponding regularised gradient norm associated with the volume V_ijk, g^{pqr;n-1}_ijk is the value of the edge preserving function g evaluated on the face e^pqr_ijk from u^{n-1}, and ε² is a regularisation parameter, typically ε ≪ 1.
This set of equations may be solved using successive over relaxation (SOR) techniques as alluded to above. That is to say the eqn 4 above and its coefficients (eqn 5) represent a set of linear simultaneous equations which may be solved by the standard matrix mathematic technique of SOR.
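By way of illustration only, the SOR update rule for a generic linear system A u = b is sketched below; a practical implementation would exploit the sparse seven-point structure of eqn 4 rather than storing a dense matrix, and the relaxation factor and tolerance shown are arbitrary example values rather than values taken from this disclosure.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tolerance=1e-6, max_iterations=1000):
    """Solve A u = b by Successive Over Relaxation (dense matrix, for illustration)."""
    n = len(b)
    u = np.zeros(n)
    for _ in range(max_iterations):
        largest_change = 0.0
        for i in range(n):
            # Gauss-Seidel style sweep using the most recently updated values of u
            sigma = A[i, :i] @ u[:i] + A[i, i + 1:] @ u[i + 1:]
            updated = (1.0 - omega) * u[i] + omega * (b[i] - sigma) / A[i, i]
            largest_change = max(largest_change, abs(updated - u[i]))
            u[i] = updated
        if largest_change < tolerance:
            break
    return u
```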
If it is desired to use the slowed mean curvature flow filter, this may be done using the same eqn 4 but with correspondingly modified coefficients (eqn 6). If it is desired to use the discrete Perona Malik filter, then again this may be done by using eqn 4 above, with coefficients A^pqr_ijk defined for |p| + |q| + |r| = 1, x^pqr_ijk ∈ Ω, and A^pqr_ijk = 0 otherwise (eqn 7). Both sets of coefficients are set out in "3D early embryogenesis image filtering by nonlinear partial differential equations" - Z Kriva et al, referenced above.
When the geodesic mean curvature flow filter is used for filtering as shown in eqn 4 and eqn 5 the edge preserving function g(s) is calculated using u^{n-1}, i.e. the data from the last iteration, such that g(s) needs to be calculated for each iteration.
On the other hand, where the geodesic mean curvature flow equation is used for segmentation g(s) is calculated using u^0, i.e. an initial set of image data. Thus a modified form of the equations for segmentation using the geodesic mean curvature flow filter is as follows

u_t − |∇u| ∇·( g(|∇G_σ * u^0|) ∇u / |∇u| ) = 0,

and modified coefficients for use with eqn 4 are obtained from eqn 5 by evaluating the edge preserving function from u^0 rather than from u^{n-1}, i.e. with g^{pqr;0}_ijk in place of g^{pqr;n-1}_ijk.
This means that in geodesic mean curvature flow segmentation, g(s) only needs to be calculated once.
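As an illustration of the edge preserving function and the conduction image discussed above, a minimal sketch is given below; the use of scipy's gaussian_gradient_magnitude, and the treatment of its sigma argument as the standard deviation of the Gaussian, are implementation assumptions rather than anything prescribed by this description.

```python
from scipy.ndimage import gaussian_gradient_magnitude

def conduction_image(u, K, sigma):
    """Evaluate g(s) = 1 / (1 + K s^2) over a whole set of image data.

    u     : image data as a 2D or 3D numpy array
    K     : scalar parameter controlling the strength of the edge preserving function
    sigma : standard deviation of the Gaussian used for the gradient |grad(G_sigma * u)|
    """
    s = gaussian_gradient_magnitude(u.astype(float), sigma=sigma)
    g = 1.0 / (1.0 + K * s ** 2)
    # g is close to 0 at strong edges/features and close to 1 in flat
    # noise/background regions, so the filter acts least where the edges are
    return g
```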
Whilst the above filters have been found to be effective for filtering images and finding objects such as boundaries in those images where the precise nature of the initial images is known and considerable manual intervention is used to set up the equations for the filter, difficulties can arise when trying to use image processing based on the above filters in practical situations.
This is because, to work effectively, the above filters rely on the setting of various parameters, such as the edge preserving function scalar parameter K, the standard deviation σ of the Gaussian filter, and the time step size τ, and can take an extremely long time to run in the case of some sets of input data.

Thus the present invention is aimed at providing methods and apparatus which can make use of the above type of image processing techniques and yield useful results efficiently, within practical time periods, for a range of different types of initial image data.
According to a first aspect of the present invention there is provided a method for identifying cell features in initial image data representing a sample of organic material comprising the steps of:
filtering the initial image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein using at least one of the first and second nonlinear filters comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter, and the method comprises the step of optimising the scalar parameter by the steps of:
selecting a trial value for the parameter;
and with the parameter set to the trial value,
determining the value of the function (x) for each image data element in a selected set of input image data;
determining the mean value of the function (x) across all of the image data elements in said selected set;
determining a first sum (y) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is greater than the mean (x);
determining a second sum (z) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is less than the mean (x); and
determining as a metric the value of the second sum (z) minus the value of the first sum (y);
repeating the above determining steps a plurality of times with respective different trial values of the parameter;
determining as an optimum value of said parameter, the trial value which gives a minimum value for the metric;
and using said optimum value for said parameter when using the respective nonlinear diffusion filter.
The cell features may be cell boundaries.
Successive trial values of the parameter may be double the respective previous trial value. The first trial value may be chosen to be 1/2.
The edge preserving function g can be considered to have the form g(s;K) where s is an input to the function which is dependent on the image data and K represents said scalar parameter. The edge preserving function g may be as follows:

$$g(s) = \frac{1}{1 + K s^2}, \qquad K \ge 0 \qquad \text{(eqn 1)}$$
The Gaussian gradient of the image data $|\nabla G_\sigma * u|$, where u is the selected set of input image data, may be used as the input s into the edge preserving function - i.e. in such a case $s = |\nabla G_\sigma * u|$.
The variance (standard deviation², or σ²) of the Gaussian function used in determining the Gaussian gradient may be set in dependence on the physical calibration of the initial image data, that is the size in the sample which each image data element represents.
Where an edge preserving function is applied to each image data element in a selected set of input image data, such that a value for the function in respect of each image data element is calculated, the resulting set of values of the function can be termed a conduction image.
The edge preserving function may be chosen so that when used to operate on each image data element in a selected set of input image data, the function generates resulting output values which are within a predetermined range.
Preferably the range is from 0 to a maximum value. The maximum value may be 1.
In such a case, ideally image data elements identified as features/edges by the function will have values of 0 or close to 0 whilst ideally image data elements identified as background/noise will have values of the maximum value or close to the maximum value.
This results in the remainder of the respective nonlinear filter acting more strongly on background compared to features/edges.
Whether the values of the function behave ideally depends on the optimisation of the scalar parameter.
Using the first nonlinear filter may comprise using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter. The step of filtering the initial image data may comprise iterative application of the first nonlinear diffusion filter. In such a case, the step of optimising the scalar parameter may be carried out for each application of the first nonlinear diffusion filter.
The initial image data may be used as the input image data for a first application of the first nonlinear diffusion filter. Correspondingly the selected set of input image data used in the step of optimising the scalar parameter, for use in the first application of the first nonlinear diffusion filter, may be the initial image data.
The filtered image data resulting from one application of the first nonlinear diffusion filter may be used as the input image data for a next application of the first nonlinear diffusion filter. Correspondingly the selected set of input image data used in the step of optimising the scalar parameter where this is carried out for each application of the first nonlinear diffusion filter, may be the input image data which is being used for the respective application of the first nonlinear diffusion filter.
Where the step of filtering the initial image data comprises iterative application of the first nonlinear diffusion filter, the method may comprise the step of determining how many iterations should be performed by monitoring the value of the optimised scalar parameter for each iteration and ceasing further iterations once it is determined that the values of the optimised parameter for successive iterations are equal to one another within a predetermined tolerance.
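A minimal sketch of this stopping rule is given below; optimise_K and gmcf_filter_step are hypothetical helper names standing for, respectively, the scalar parameter optimisation described above and a single application of the first nonlinear diffusion filter, and the relative tolerance and iteration cap are illustrative assumptions only:

```python
def filter_until_K_stable(u0, sigma, tau, tolerance=0.05, max_iterations=100):
    """Iteratively apply the first nonlinear diffusion filter, re-optimising the
    scalar parameter K for each iteration, and stop once the optimised K for
    successive iterations is unchanged within the given tolerance."""
    u, previous_K = u0, None
    for _ in range(max_iterations):
        K = optimise_K(u, sigma)                 # optimum K for the current input image
        u = gmcf_filter_step(u, K, sigma, tau)   # one application of the filter
        if previous_K is not None and abs(K - previous_K) <= tolerance * previous_K:
            break                                # K has stabilised; cease iterating
        previous_K = K
    return u
```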
Using the second nonlinear filter may comprise using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter.
The step of segmenting the filtered image data may comprise iterative application of the second nonlinear diffusion filter. However typically the step of optimising the scalar parameter will be carried out only once during the step of segmenting the filtered image data. This is because the same values of output of the edge preserving function for each data element may be used in multiple iterations during segmentation. The method may comprise performing a predetermined number of iterative applications of the second nonlinear diffusion filter.
The predetermined number of iterations may be set by a user, or factory set. Alternatively the method may comprise determining the number of iterative applications of the second nonlinear diffusion filter which are used in dependence of characteristics of the input or output data.
Both using the first nonlinear filter and using the second nonlinear filter may comprise using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter. The same edge preserving function may be used in using both of the filters. However, optimisation of the scalar parameter is likely to be carried out separately since the input data for the optimisations is liable to be different.
At least one of the first and second nonlinear filters may comprise a curvature filter. At least one of the first and second nonlinear filters may comprise one of: a Perona-Malik filter, a slowed mean curvature flow filter and a geodesic mean curvature flow filter.
Preferably both filters are geodesic mean curvature flow filters.
Preferably one or both of the first and second nonlinear filters is treated as a discrete filter.
The discrete filter may be dependent on a time step size, which is defined by the total time for filtering divided by the number of filtering steps. The time step size may be determined based on the physical calibration of the image, that is the size in the sample which each image data element represents.
Successive Over Relaxation (SOR) techniques may be used in application of the first and/or second nonlinear filter.
The method may comprise the step of splitting the filtered image data into sections prior to the segmentation step and performing the step of segmentation on each section.
The sections may be selected with the aim of each section including a complete cell, and preferably only a single complete cell. Splitting the filtered image into sections tends to reduce the processing time required to complete segmentation. The data may be split into sections on the basis of a point identified within each cell and a maximum cell size parameter.
Thus each section may comprise image data within a region defined by said point and a boundary spaced from said point by a distance which is dependent on the maximum cell size parameter. Such a section might, for example, be a cube, cuboid or a sphere. In the case of 2D data the section might, for example, be a square, rectangle or circle.
The maximum cell size parameter may be received from user input or determined from the data.
The point identified within each cell may be received from user input or determined from internal feature image data. The internal feature image data may be provided separately from the main image data.
The internal feature image data may be representative of the nuclei of cells, whereas the main image data is representative of cell walls. As is well understood such separation of image data may be achieved using different stains on the sample.
The main image data may be received on one channel of an inspecting microscope and the internal feature image data may be received on another channel of the inspecting microscope.
In some embodiments segmenting the filtered image data comprises a step of cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image feature rather than background;
calculating the value of the function for each image data element;
calculating a mean value for those image data elements for which the respective value of the function falls within the first part of the predetermined range;
setting to zero in the filtered image data the value of any image data element which is less than said mean value.
According to a second aspect of the present invention there is provided a method for identifying cell features in image data representing a sample of organic material comprising the steps of:
filtering the image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein segmenting the filtered image data comprises a step of cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image feature rather than background;
calculating the value of the function for each image data element;
calculating a mean value for those image data elements for which the respective value of the function falls within the first part of the predetermined range; setting to zero in the filtered image data the value of any image data element which is less than said mean value.
The cell features may be cell boundaries.
In some embodiments segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image data and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image edge feature;
calculating the value of the function for each image data element in the filtered image data and identifying those image data elements representing an image edge feature to generate a processed image data set;
scanning, in intervals through 360 degrees around a selected point in a selected plane, the processed image data set to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest major axis for the seed surface;
having identified the longest major axis, scanning in said plane and orthogonally to the longest major axis at points along its length to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest minor axis for the seed surface;
generating one of an ellipse and an ellipsoid to represent the seed surface using the longest major axis and the longest minor axis.
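The scanning just described can be sketched, for the 2D case, roughly as follows; the boolean edge_map input (True where an image data element has been identified as an edge feature), the angular step and the maximum scan radius are assumptions made only for this illustration, and scanning orthogonally at several points along the major axis is simplified here to a single orthogonal scan through the selected point:

```python
import numpy as np

def seed_axes(edge_map, centre, n_angles=180, max_radius=200):
    """Scan through 360 degrees around 'centre' to find the longest major axis
    (the longest spacing between identified cell edges through the centre),
    then scan orthogonally to it to estimate the longest minor axis."""
    def distance_to_edge(angle):
        dy, dx = np.sin(angle), np.cos(angle)
        for r in range(1, max_radius):
            y = int(round(centre[0] + r * dy))
            x = int(round(centre[1] + r * dx))
            if not (0 <= y < edge_map.shape[0] and 0 <= x < edge_map.shape[1]):
                return r          # left the image without meeting an edge
            if edge_map[y, x]:
                return r          # first cell edge met in this direction
        return max_radius

    # Longest major axis: largest total spacing between opposite edges.
    major_len, major_angle = max(
        (distance_to_edge(a) + distance_to_edge(a + np.pi), a)
        for a in np.linspace(0.0, np.pi, n_angles, endpoint=False))
    # Longest minor axis: spacing orthogonal to the major axis.
    minor_len = (distance_to_edge(major_angle + np.pi / 2)
                 + distance_to_edge(major_angle - np.pi / 2))
    return major_len, minor_len, major_angle
```

An ellipse with these axis lengths, oriented along major_angle and centred on the selected point, could then be rasterised as the seed surface; in the 3D case an ellipsoid would be built analogously from the first dimension axis and the two in-plane axes.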
According to a third aspect of the present invention there is provided a method for identifying cell boundaries in image data representing a sample of organic material comprising the steps of:
filtering the image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell boundaries;
wherein segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image data and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image edge feature;
calculating the value of the function for each image data element in the filtered image data and identifying those image data elements representing an image edge feature to generate a processed image data set;
scanning, in intervals through 360 degrees around a selected point in a selected plane, the processed image data set to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest major axis for the seed surface;
having identified the longest major axis, scanning in said plane and orthogonally to the longest major axis at points along its length to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest minor axis for the seed surface;
generating one of an ellipse and an ellipsoid to represent the seed surface using the longest major axis and the longest minor axis.
This technique may be used for 2D image data and 3D image data.
Where the image data is 3D image data, the method may comprise determining the selected point and the selected plane by selecting an initial seed point in a cell and scanning in two directions in a first dimension away from the initial seed point the processed image data set to identify respective cell edges, repeating the scanning process for other initial seed points and based on a longest determined spacing between identified cell edges identifying a first dimension axis for the seed surface, then choosing the selected point to be a midpoint of the first dimension axis of the cell and choosing the selected plane to be that plane which is orthogonal to the first dimension axis and contains the selected point, and the step of generating one of an ellipse and an ellipsoid to represent the seed surface comprises generating an ellipsoid using the first dimension axis, the longest major axis and the longest minor axis.
The method may comprise the further step of modifying the seed surface using the conduction image.
Modifying the seed surface may comprise inheriting edge shapes from the conduction image. The seed surface may be multiplied by the conduction image. Where feature areas in the conduction image have the value of zero, this sets to zero portions of the seed surface that extend into feature areas and in effect cuts away these portions of the seed surface.
Modifying the seed surface may comprise filling any holes in the seed surface.
Modifying the seed surface may comprise discarding any seed surface outside of a respective main seed surface.
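A minimal sketch of this modification, assuming the seed surface and the conduction image are held as arrays of the same shape and that the feature areas of the conduction image are (close to) zero, is given below; the 0.5 threshold used to binarise the cut seed and the SciPy helpers chosen here are illustrative assumptions only:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes, label

def modify_seed(seed, conduction, threshold=0.5):
    """Multiply the seed surface by the conduction image so that it inherits
    edge shapes (portions extending into feature areas are cut away), fill any
    holes in the seed surface, and discard any seed outside the main seed."""
    cut = seed * conduction                      # feature areas (conduction ~ 0) cut the seed
    mask = binary_fill_holes(cut > threshold)    # fill any holes in the seed surface
    labels, n = label(mask)                      # connected components of the seed
    if n > 1:
        sizes = np.bincount(labels.ravel())[1:]  # size of each component (ignoring background)
        mask = labels == (np.argmax(sizes) + 1)  # keep only the main seed surface
    return mask.astype(float)
```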
The step of segmenting the filtered image data may comprise using the seed surface to fill any holes in the conduction image.
According to yet a further aspect of the present invention there is provided a method for identifying cell boundaries in image data representing a sample of organic material comprising the steps of:
filtering the image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell boundaries;
wherein segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image data and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image edge feature;
calculating the value of the function for each image data element in the filtered image data and identifying those image data elements representing an image edge feature to generate a processed image data set;
determining a location as a centre of a cell in a first dimension by selecting an initial seed point in a cell and scanning in two directions in the first dimension away from the initial seed point the processed image data set to identify respective cell edges, repeating the scanning process for other initial seed points and selecting a location as a centre of the cell based on the results of the scanning processes and calculating a first dimension axis for the seed surface; having identified a centre of a cell in said first dimension, scanning, in intervals through 360 degrees around said centre in a plane orthogonal to the first dimension, the processed image data set to identify respective cell edges and identify a longest major axis for the seed surface;
having identified the longest major axis, scanning in said plane and orthogonally to the longest major axis at points along its length to identify respective cell edges and identify a longest minor axis for the seed surface;
using the first dimension axis, the longest major axis and the longest minor axis to generate an ellipsoid representing the seed surface.
According to a further aspect of the invention there is provided a computer program comprising code portions which when loaded and run on a computer cause the computer to carry out the steps of any one of the methods defined above.
According to a further aspect of the invention there is provided a machine readable data carrier carrying a computer program as defined above.
According to a further aspect of the present invention there is provided a computer arranged under the control of software to carry out the steps of any one of the methods defined above.
According to a further aspect of the present invention there is provided image processing apparatus comprising a processing unit arranged under the control of software to carry out the steps of any one of the methods defined above.
According to a further aspect of the present invention there is provided image acquiring and processing apparatus comprising a microscope for acquiring image data of a sample, and a processing unit arranged under the control of software to carry out the steps of any one of the methods defined above in respect of the acquired image data.
According to a further aspect of the present invention there is provided a computer arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
filtering the initial image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein using at least one of the first and second nonlinear filters comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter, and the method comprises the step of optimising the scalar parameter by the steps of:
selecting a trial value for the parameter;
and with the parameter set to the trial value,
determining the value of the function (x) for each image data element in a selected set of input image data;
determining the mean value of the function (x) across all of the image data elements in said selected set;
determining a first sum (y) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is greater than the mean (x);
determining a second sum (z) which is a sum of the values of the function
(x) for each image data element in the selected set where the value of the function (x) is less than the mean (x); and
determining as a metric the value of the second sum (z) minus the value of the first sum (y);
repeating the above determining steps a plurality of times with respective different trial values of the parameter;
determining as an optimum value of said parameter, the trial value which gives a minimum value for the metric;
and using said optimum value for said parameter when using the respective nonlinear diffusion filter.
According to a further aspect of the present invention there is provided image processing apparatus comprising a processing unit arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
filtering the initial image data using a first nonlinear diffusion filter; segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein using at least one of the first and second nonlinear filters comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter, and the method comprises the step of optimising the scalar parameter by the steps of:
selecting a trial value for the parameter;
and with the parameter set to the trial value,
determining the value of the function (x) for each image data element in a selected set of input image data;
determining the mean value of the function (x) across all of the image data elements in said selected set;
determining a first sum (y) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is greater than the mean (x);
determining a second sum (z) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is less than the mean (x); and
determining as a metric the value of the second sum (z) minus the value of the first sum (y);
repeating the above determining steps a plurality of times with respective different trial values of the parameter;
determining as an optimum value of said parameter, the trial value which gives a minimum value for the metric;
and using said optimum value for said parameter when using the respective nonlinear diffusion filter.
According to a further aspect of the present invention there is provided image processing apparatus arranged for identifying cell features in initial image data representing a sample of organic material, the apparatus comprising:
a filtering module for filtering the initial image data using a first nonlinear diffusion filter;
a segmentation module for segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features; wherein using at least one of the first and second nonlinear filters comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter, and the apparatus comprises an optimising module for optimising the scalar parameter on the basis of:
selecting a trial value for the parameter;
and with the parameter set to the trial value,
determining the value of the function (x) for each image data element in a selected set of input image data;
determining the mean value of the function (x) across all of the image data elements in said selected set;
determining a first sum (y) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is greater than the mean (x);
determining a second sum (z) which is a sum of the values of the function
(x) for each image data element in the selected set where the value of the function (x) is less than the mean (x); and
determining as a metric the value of the second sum (z) minus the value of the first sum (y);
repeating the above determining steps a plurality of times with respective different trial values of the parameter;
determining as an optimum value of said parameter, the trial value which gives a minimum value for the metric;
and using said optimum value for said parameter when using the respective nonlinear diffusion filter.
According to a further aspect of the present invention there is provided a computer arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
filtering the image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein segmenting the filtered image data comprises a step of cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image feature rather than background;
calculating the value of the function for each image data element;
calculating a mean value of the function for those image data elements for which the respective value of the function falls within the first part of the predetermined range;
setting to zero in the filtered image data the value of any image data element for which the value of the function is less than said mean value.
According to a further aspect of the present invention there is provided image processing apparatus comprising a processing unit arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
filtering the image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein segmenting the filtered image data comprises a step of cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image feature rather than background;
calculating the value of the function for each image data element;
calculating a mean value of the function for those image data elements for which the respective value of the function falls within the first part of the predetermined range;
setting to zero in the filtered image data the value of any image data element for which the value of the function is less than said mean value.
According to a further aspect of the present invention there is provided image processing apparatus arranged for identifying cell features in initial image data representing a sample of organic material, the apparatus comprising:
a filtering module for filtering the image data using a first nonlinear diffusion filter; a segmentation module for segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein segmenting the filtered image data includes cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning comprises: using an edge preserving function which is operable on each image data element in the filtered image and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image feature rather than background;
calculating the value of the function for each image data element;
calculating a mean value of the function for those image data elements for which the respective value of the function falls within the first part of the predetermined range;
setting to zero in the filtered image data the value of any image data element for which the value of the function is less than said mean value.
According to a further aspect of the present invention there is provided a computer arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
filtering the image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image data and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image edge feature;
calculating the value of the function for each image data element in the filtered image data and identifying those image data elements representing an image edge feature to generate a processed image data set;
scanning, in intervals through 360 degrees around a selected point in a selected plane, the processed image data set to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest major axis for the seed surface;
having identified the longest major axis, scanning in said plane and orthogonally to the longest major axis at points along its length to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest minor axis for the seed surface;
generating one of an ellipse and an ellipsoid to represent the seed surface using the longest major axis and the longest minor axis.
According to a further aspect of the present invention there is provided image processing apparatus comprising a processing unit arranged under the control of software to identify cell features in initial image data representing a sample of organic material by carrying out the steps of:
filtering the image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image data and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image edge feature;
calculating the value of the function for each image data element in the filtered image data and identifying those image data elements representing an image edge feature to generate a processed image data set;
scanning, in intervals through 360 degrees around a selected point in a selected plane, the processed image data set to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest major axis for the seed surface;
having identified the longest major axis, scanning in said plane and orthogonally to the longest major axis at points along its length to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest minor axis for the seed surface;
generating one of an ellipse and an ellipsoid to represent the seed surface using the longest major axis and the longest minor axis.
According to a further aspect of the present invention there is provided image processing apparatus arranged for identifying cell features in initial image data representing a sample of organic material, the apparatus comprising:
a filtering module for filtering the image data using a first nonlinear diffusion filter; a segmentation module for segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein segmenting the filtered image data comprises, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which calculating a seed surface comprises: using an edge preserving function which is operable on each image data element in the filtered image data and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image edge feature;
calculating the value of the function for each image data element in the filtered image data and identifying those image data elements representing an image edge feature to generate a processed image data set;
scanning, in intervals through 360 degrees around a selected point in a selected plane, the processed image data set to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest major axis for the seed surface;
having identified the longest major axis, scanning in said plane and orthogonally to the longest major axis at points along its length to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest minor axis for the seed surface;
generating one of an ellipse and an ellipsoid to represent the seed surface using the longest major axis and the longest minor axis.
The computer or apparatus defined in any one of the aspects above may comprise a display for displaying processed images to a user.
The optional sub features following the first to third aspects of the invention are equally applicable as optional sub features for the remaining aspects of the invention and may be written with any necessary changes in wording. They are not repeated here in full in the interest of brevity.
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 schematically shows an image acquiring and processing apparatus;
Figure 2 is a flow chart showing a process for identifying cell features in initial image data;
Figure 3 is a flow chart showing more details of a filtering step of the process shown in Figure 2;
Figure 4 is a flow chart showing more details of a segmentation step of the process shown in Figure 2;
Figure 5 is a flow chart showing a process for optimising a parameter used in the filtering step and/or segmentation step of the process shown in Figure 2;
Figure 6 is a flow chart showing more detail of a background subtraction step of the segmentation process shown in Figure 4;
Figure 7 is a flow chart showing more detail of a step of creating a seed surface as part of the segmentation process shown in Figure 4;
Figure 8 shows an example initial image of a sample of organic material;
Figure 9 shows the results after filtering the image shown in Figure 8, splitting it into a sub image and performing a background subtraction in accordance with an embodiment of the present invention;
Figure 10 shows a conduction image corresponding to the image shown in Figure 9;
Figure 11 shows the conduction image of Figure 10 and schematically illustrates initial stages of calculating a seed surface for the cell shown in the conduction image;
Figure 12 schematically shows an initial seed surface developed as part of an embodiment of the present invention;
Figure 13 shows a modified seed surface, based on that shown in Figure 12, but after having been multiplied by the conduction image as shown in Figure 10;
Figure 14 shows the seed surface shown in Figure 13 after segmentation carried out in accordance with an embodiment of the present invention; and
Figure 15 shows the initial image as shown in Figure 8 overlaid with a segmented cell edge determined by a process embodying the present invention and corresponding to the segmented surface shown in Figure 14 - this being an example output image at the end of the image processing steps to which the present application is directed.
Figure 1 schematically shows image acquiring and processing apparatus which can be used for acquiring image data, processing that image data and displaying the resulting processed image data to a user. It is possible that all of these steps may be performed with one set of apparatus as depicted in Figure 1. However it will equally be appreciated that it would be possible to acquire the image data separately from any processing activity and this again could be separate from display of the resulting data. The present invention relates to image acquiring and processing apparatus as shown in Figure 1 as well as processing apparatus per se and methods for acquiring, processing and displaying image data, processing image data, and processing and displaying image data.
The image acquiring and processing apparatus shown in Figure 1 comprises a microscope 1 for acquiring image data from a sample 2. The microscope 1 may be arranged for acquiring image data on two different channels. Thus, for example, the microscope 1 may be a confocal microscope and may be arranged for obtaining membrane stain image data on a first channel A and nucleus stain image data on a second channel B. The microscope 1 is connected to an image processing unit 3 which in turn in this embodiment is connected to a display 4 for displaying images to a user. The processing unit 3 comprises memory, a processor and a storage device (not shown). As a whole, the image acquiring and processing apparatus may display to the user initial images based on initial image data gathered by the microscope 1 and/or processed images based on processed image data calculated by the image processing unit 3 from the initial image data. The present apparatus and processes are directed at identifying cell features in initial image data and providing output image data which indicates identified cell features. In particular in the present embodiment, the apparatus and methods are directed at identifying cell boundaries in initial image data and outputting processed image data which identifies the cell boundaries. This can be useful for the user in terms of simply having the cell boundaries identified and indicated or useful for further calculations or analysis.
As will be described in more detail below, Figures 8-15 show example images which are relevant to the present techniques. At this time it is mentioned simply that Figure 8 shows an example of an initial image which can be seen to be made up of two cells bordering one another, and Figure 15 shows a final output image in which a cell boundary has been identified and laid over the top of the initial image as a dark solid line. At a basic level the current apparatus and methods are provided for automatically drawing such cell boundaries onto initial image data. Of course in practical reality there may be many more cells than two and the sample may be a 3D sample extending a number of cells in each direction. The present methods and apparatus can be used to draw 3D (or 2D) cell boundaries for all of the cells (within reason) in a bulk sample. It will be appreciated that initial image data is made up of a number of pixels (2D) or a number of voxels (3D). In the specification we use the expression "image data element" as a more general term which may represent a pixel or a voxel or perhaps a group of pixels or voxels in at least some instances. Further, at some places in the description and/or drawings the word pixel is used for convenience where it is clear that the image data element might in fact be a voxel or a pixel.
It will be appreciated that when image data is acquired by a microscope 1 (or any similar instrument) there will be a finite size to each pixel (voxel) in the data. This will be determined by the sensor in the microscope or similar and the optical and/or mechanical systems in the microscope or similar. Thus each set of image data can be considered to have a physical calibration which represents the size of a pixel having gone through the optical system. As an example, in the detector one pixel may be 6.7 μm in width and 6.7 μm in height. If the optics lead to a 100x magnification this means that the size in the sample represented by one pixel may be 0.067 μm x 0.067 μm. Typically a mechanical drive will be used for moving the focus in the third dimension and the step size in this direction might be 0.1 μm, leading to an overall physical calibration of 0.067 μm x 0.067 μm x 0.1 μm.
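As a purely illustrative worked example of such a calibration, using the figures just mentioned (the variable names are assumptions made only for this sketch):

```python
detector_pixel_um = 6.7    # detector pixel width/height in micrometres
magnification = 100.0      # optical magnification
z_step_um = 0.1            # mechanical focus step in the third dimension

pixel_calibration_x = detector_pixel_um / magnification  # 0.067 um per pixel in X
pixel_calibration_y = detector_pixel_um / magnification  # 0.067 um per pixel in Y
pixel_calibration_z = z_step_um                           # 0.1 um per voxel in Z
```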
Figure 2 shows, at a general level, the present method for identifying cell features in initial image data and outputting image data which indicates identified cell features. Of course the apparatus of Figure 1 can be programmed to carry out this method.
The main inputs to this process in the present embodiment are initial image data, particularly membrane stain image data 5 as may be acquired on channel A by the microscope 1, and data concerning points inside cells 6 which may be provided by the user or, for example, be indicative of nuclei inside the cells and thus may, for example, be obtained on channel B of the microscope 1.
The initial image data is filtered in step 201 using a first non-linear diffusion filter. In particular, this embodiment uses the geodesic mean curvature flow filter, described above in the introduction.
In step 202 the filtered image data is split into sub images using the points inside cells data as well as a maximum cell size parameter. The maximum cell size parameter may be determined from the data itself or may be provided by a user. The idea here is to split the filtered image into sub images so that each sub image contains a complete cell but preferably no more than one complete cell. This is to speed up the later stages in the procedure on the basis that the later stages of the image processing procedure need only operate on a smaller image and hopefully only one cell in any one sub image.
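A rough sketch of this splitting step is given below; treating each section as a cuboid centred on the supplied point, with its half-width derived from the maximum cell size parameter and the physical calibration, is an assumption made only for this illustration:

```python
def split_into_sub_images(filtered, cell_points, max_cell_size, calibration):
    """Crop one cuboid sub-image per cell around each point identified inside a
    cell.  'calibration' holds the physical size of a voxel along each axis, so
    that the maximum cell size (in the same units) can be converted to voxels."""
    half = [int(round(0.5 * max_cell_size / c)) for c in calibration]
    sections = []
    for point in cell_points:
        slices = tuple(slice(max(p - h, 0), min(p + h + 1, n))
                       for p, h, n in zip(point, half, filtered.shape))
        sections.append(filtered[slices])
    return sections
```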
In step 203 segmentation of each of the sub images is carried out. This processing may be carried out in parallel with one CPU operating on each sub image. Again when performing the segmentation in step 203 the geodesic mean curvature flow filter as described above in the introduction is used.
In step 204 the segmented image is output. In this embodiment this consists of providing output image data which comprises both the original image and overlaid on that the identified cell features, in particular the identified cell boundaries. Thus for example, the initial data provided as an input to the filter step 201 might be data corresponding to the image shown in Figure 8 and the final output data might be data corresponding to the image shown in Figure 15. More details of the steps of the process outlined in Figure 2 are given below.
Figure 3 shows the filtering process of step 201 in more detail. As mentioned above the filter processing consists of an application of the geodesic mean curvature flow filter defined above. However in order to apply this filter it is necessary to determine the values to assign for various parameters which form part of the filter. Specifically these are the step size τ, the value of the scalar parameter K in the edge preserving function, and the variance σ² which is to be used in the Gaussian filter. The present embodiment includes an automatic determination of what values should be used for these parameters. This contrasts with a research or academic application of the geodesic mean curvature flow filter where these parameters may be chosen based on specialist knowledge, human intervention or extensive trial and error for a particular sample. In the present embodiment, a determination is made of these parameters in such a way that suitable parameters are chosen based upon characteristics of the sample which are determined by the system as part of the image processing process.
Further, in the present systems the values of $m(V_{ijk})$ and $m(e^{pqr}_{ijk})$ used in the coefficients of the filter may be set automatically in dependence on the physical calibration of the system - for example set to directly use the physical calibration values.
In step 301 the step size τ is calculated in dependence on the physical calibration of the image as defined above. In the present embodiment the step size τ is calculated using the 2D XY calibration of a pixel multiplied by a fixed constant, i.e. (constant) x (pixelCalibrationX) x (pixelCalibrationY). The fixed constant can be set to 1024, but alternative good values can also be determined empirically.
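A one-line illustration of this calculation (the names are assumptions carried over from the calibration sketch above):

```python
STEP_CONSTANT = 1024  # fixed constant; other good values may be found empirically

def time_step(pixel_calibration_x, pixel_calibration_y):
    """Time step tau of step 301, from the 2D XY calibration of a pixel."""
    return STEP_CONSTANT * pixel_calibration_x * pixel_calibration_y
```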
In step 302 the image is padded by mirroring at the boundary and enforcing the appropriate boundary conditions. In step 303 an optimum value for K (as found in the edge preserving function defined in equation 2 above) is determined. This optimisation process will be explained in more detail further below.
Once the optimum value of K has been determined for the current image, in step 304 Successive Over Relaxation (SOR) is used to apply the geodesic mean curvature flow filter to the input data for that iteration. Note that for the first iteration the input data will be the initial image data as acquired by the microscope. For further iterations the input data will be the output of the previous iteration. In step 305 a determination is made as to whether the optimum value of K has stabilised. That is to say it is determined whether the optimum value for K for the current image is equal (within a predetermined tolerance) to the respective optimum value of K for the previous iteration. Thus it will be seen that at least two iterations must be performed, and typically it will be many more than this.
If it is determined that K has stabilised then the filtering process is complete and at step 306 the filtered image data is returned for subsequent processing. That is to say it is fed into step 202 as shown in Figure 2. On the other hand, if K has not stabilised then the process returns to step 303 where the optimum value of K for the current image data is calculated, and in step 304 Successive Over Relaxation is again used to find the filtered result for the next iteration and another check is made as to whether K has now stabilised in step 305. This process continues until K has stabilised. When carrying out steps 303 and 304 the variance σ² of the Gaussian filter used in the processes is set equal to two times the square of the physical calibration of the image as defined above. In the present embodiment σ² is set equal to 2 x (pixelCalibrationX) x (pixelCalibrationY).

Figure 4 shows the segmentation process of step 203 of Figure 2 in more detail. The segmentation process has some similarities with the filtering process described immediately above. Note, however, that here the image in question is the respective filtered sub-image corresponding to (hopefully) one cell.
In step 401 the step size τ is calculated using the physical calibration of the image, as is the case for filtering. Further again in step 402 the image is padded by mirroring at the boundary, enforcing boundary conditions. In step 403 the optimum value of K is calculated for the respective sub-image. In step 404 background is subtracted from the sub image to remove poor detail and noise.
In step 405 a seed surface is created to be fed into the geodesic mean curvature flow filter in due course.
In step 406 a Gauss filtered copy of the sub-image is obtained and in step 407 the conduction coefficients are calculated with the optimised K. In effect this means determining the value of the function g(s) in respect of the initial filtered sub-image data. Note that as mentioned above in the introduction, when the segmentation process is carried out, the value of g(s), i.e. the conduction image, is the same for all iterations of the geodesic mean curvature flow filter. Thus this calculation of the conduction coefficients (step 407) needs to be carried out only once for each sub-image irrespective of the number of iterations later.
In step 408 the seed surface is used to fill in any holes in the conduction image. In practice this means that if there are, for example, any breaks in the lines in the conduction image which appear to correspond to the cell boundary as indicated by the seed surface, these may be set to a low value in the conduction image and correspondingly if there are any low value points in the conduction image away from the seed surface, for example near the centre of the cell, then these may be set to a high value since they can be determined as spurious features.
In step 409 Successive Over Relaxation is used to operate on the seed surface to apply the geodesic mean curvature flow filter to segment the image. In the present embodiment this iterative process is carried out for a predetermined number of iterations and in step 410 it is determined whether the number of iterations has been completed. If not, the process returns to step 409 for application of another iteration. On the other hand when the specified number of iterations has been completed, the segmented sub-image may be returned in step 411 (that is fed into step 204 of the process of Figure 2) where the complete final image may be output to the user.
Note that in alternatives there may be a variable number of iterations at this stage. That is to say the geodesic mean curvature flow filter may be applied a differing number of times based on a determination made by the system. Thus, for example, the output data from one iteration and the previous iteration may be compared and, once the difference between subsequent iterations falls below some threshold level, iterating may be ceased.

As mentioned above, both the filtering process in step 201 and the segmentation process in step 203 involve calculating an optimum value of K for use in the edge preserving function g(s), defined above in equation (1). In the case of the filtering step this optimisation is carried out for each iteration, as explained in relation to Figure 3, whereas in the case of the segmentation process this optimisation step is carried out once (for each sub-image) for use in all of the iterations in the segmentation process. Having said this, the optimisation process for K is the same in both cases with just the input data varying.
In the case of the filtering process, the input data into the optimisation process will be the initial data in the first iteration and then the output of the previous iteration in each subsequent iteration. In the case of the segmentation process the input will be the filtered data which is reached at the end of the filtering process 201, as split into sub images in step 202. Thus, as alluded to above, in the case of segmentation the optimum value of K needs to be calculated for each sub image. The optimisation process will now be described in more detail with reference to Figure 5.
In step 501 a Gauss filtered copy of the input data is obtained. It will be noted that this constitutes the input data $|\nabla G_\sigma * u|$ for the edge preserving function g, as shown in equation 1 above.
K is initially set to ½ and in step 502 the conduction image for the current K is calculated. That is to say the value of the function g(s) for each image data element (voxel/pixel) in the data is calculated.
Next in step 503 the mean (x) of the values of the conduction image data is calculated.
In step 504 the sum (y) of the conduction values which are greater than the conduction image mean (x) is calculated. These elements in the image represent part of the image that should be changed by the filter i.e. noise. Due to the nature of the function g(s) the value y should decrease as K increases.
In step 505 the sum (z) of conduction values which are smaller than the mean (x) is calculated. The corresponding image data elements represent things in the image which we want to keep (details/edges) and the value z should increase as K increases.
In step 506 a metric (m) is calculated which is equal to z-y (that is the sum of the value of the conduction values which are smaller than the mean minus the sum of the values of the conduction values which are greater than the mean).
Due to the nature of the function g(s), the value of m will vary with K and will have a minimum. It has been determined by the applicants that using the value of K which gives this minimum value of m is an optimum value of K to choose in the image processing processes. Thus in step 507 it is determined whether the minimum m value has been found. If so, this completes the optimisation process and the value of K that generated this minimum value of m is output in step 508. As will be appreciated, this optimum value of K can then be used in the filtering step concerned or the segmentation processes as appropriate.
On the other hand, if in step 507 it is determined that the value of minimum m has not been reached, then in step 509 the value of K is doubled and the process returns to step 502 where the conduction image for the new value of K is calculated. The process is repeated to find the corresponding metric m for that value of K until the minimum value of m is identified.
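The search of Figure 5 can be sketched as follows, re-using the conduction_image sketch given earlier with eqn 1; the cap on the number of doublings is an illustrative safeguard and not part of the described method:

```python
def optimise_K(u, sigma, K0=0.5, max_doublings=60):
    """Find an optimum K by starting at K = 1/2, doubling the trial value each
    time, and returning the K that gives the minimum metric m = z - y."""
    def metric(K):
        g = conduction_image(u, K, sigma)   # step 502: conduction image for this K
        mean = g.mean()                     # step 503: mean conduction value (x)
        y = g[g > mean].sum()               # step 504: sum above the mean (noise/background)
        z = g[g < mean].sum()               # step 505: sum below the mean (details/edges)
        return z - y                        # step 506: the metric m

    best_K, best_m = K0, metric(K0)
    for _ in range(max_doublings):
        trial_K = 2.0 * best_K              # step 509: double K and try again
        trial_m = metric(trial_K)
        if trial_m >= best_m:               # step 507: metric has passed its minimum
            break
        best_K, best_m = trial_K, trial_m
    return best_K
```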
Note that the range of values that g(s) can take is 0 to 1, where 0 will be the value where a very strong edge/feature is found and 1 will be the value where the data in that region is featureless. This function then operates as an edge preserving function since the remainder of the filter will have no effect where the value of g is 0 and will have greatest effect where the value of g is 1. It should also be noted that different forms of the function g could be used where they have a similar property: that is, a property where the value of the function is small and tending towards 0 for prominent features in the image data and tending towards some maximum value where the image data represents noise or background.
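For instance, both of the following candidate functions have the required property of being close to 1 in featureless regions and tending towards 0 at prominent features; they are given purely as illustrations and neither is asserted to be the exact function of equation (1).

import numpy as np

def g_rational(s, K):
    # Close to 1 where the gradient magnitude s is small (featureless data),
    # tending to 0 where s is large (a strong edge/feature).
    return 1.0 / (1.0 + (s / K) ** 2)

def g_exponential(s, K):
    # An alternative form with the same qualitative behaviour.
    return np.exp(-((s / K) ** 2))

s = np.array([0.0, 0.5, 2.0, 10.0])
print(g_rational(s, K=1.0))      # approx [1.00, 0.80, 0.20, 0.01]
print(g_exponential(s, K=1.0))   # approx [1.00, 0.78, 0.02, 0.00]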
Without an optimisation process such as that described above in relation to Figure 5, it would be necessary either to set K to some constant value for processing all images, which will lead to sub-optimal results, or to have human intervention to pick a value of K in each instance.
Note that with an optimised value of K for a given set of data there will tend to be a good spread of values of g(s) for the image data - with edges close to 0 and noise close to 1. If K is not optimised this will tend to be skewed: K might be too small so that all values of g(s) are close to 1, or too large so that all values are close to 0. The optimisation tries to make the spread fill the full range 0-1 so that we get fast changes per iteration.
As mentioned above, the segmentation process comprises a step 404 of subtracting background from the image to remove poor detail and noise. This process is described in more detail below with reference to Figure 6. The process starts in step 601 with calculating the conduction image with the optimum value of K determined in step 403. In sub-process 602 each image data element (pixel/voxel) in the conduction image is considered: if the conduction value is greater than a half, then the value of the corresponding pixel/voxel in the real filtered image is added to a background sum (y) and 1 is added to a y count; on the other hand, if the conduction value is smaller than a half, then the value of the corresponding pixel/voxel in the real filtered image is added to a feature sum (z) and 1 is added to a z count.
Once this process has been completed, in step 603 a value for the feature mean (z/z count) is calculated. In step 604 any pixel in the real filtered image data, which may be termed the source, which has a value smaller than the feature average (z/z count) is set to zero. Thus only pixels in the real filtered image data which are greater than the average are preserved. This has the effect of suppressing the background. Once this process is complete, then in step 605 the cleaned image is returned.
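A sketch of this cleaning step is given below (Python with NumPy), taking as inputs the real filtered image and a conduction image such as that produced by the conduction_image helper sketched earlier. The half-way threshold and the names y and z follow the description above; the guard against an empty feature set is an added illustrative safeguard.

import numpy as np

def subtract_background(filtered, conduction):
    # Sub-process 602: classify every pixel/voxel by its conduction value.
    background_mask = conduction > 0.5     # likely background/noise
    feature_mask = conduction < 0.5        # likely features/edges
    y = filtered[background_mask].sum()    # background sum (not used further here)
    y_count = int(background_mask.sum())
    z = filtered[feature_mask].sum()       # feature sum
    z_count = int(feature_mask.sum())
    # Step 603: feature mean; step 604: zero anything dimmer than it.
    feature_mean = z / max(z_count, 1)
    cleaned = filtered.copy()
    cleaned[cleaned < feature_mean] = 0.0
    return cleaned                         # step 605: cleaned image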
As mentioned above, in step 405 of the segmentation process of Figure 4 a seed surface is created for use as an input into the main segmentation step using the geodesic mean curvature flow filter in step 409. The process of creating the seed surface is described in more detail below, with reference to Figure 7.
In step 701 the conduction image with the optimal value of K is calculated, or read out of storage if previously calculated, as is the case in the present embodiment.
In step 702 pixels/voxels in the conduction image with a value smaller than a half are identified as edge features. As noted above, the range of possible values for each pixel/voxel in the conduction image is 0 to 1 due to the behaviour of the function g, with strong edges giving values close to 0.
In the present embodiment we are considering an example where the image data is 3D image data. Thus it is appropriate to generate a 3D seed surface. However, in alternatives it might be necessary or appropriate to generate a 2D seed surface. If that is the case then the next step, described in relation to step 703, can be omitted.
Here we are going to consider that there are three orthogonal axes x, y and z. Thus in step 703 a centre of the cell being considered in the appropriate sub-image is located in z by scanning from a seed point in both directions looking for front and back edges. The initial seed point may be a seed point indicated by a user or indicated by image data acquired on the second channel, as mentioned above. Thus the seed point, for example, might be the location of the nucleus in the cell. However, the initial seed point may be altered in step 703. The objective is to find the maximum length of the axis in the z direction between identified front and back edges. Once this axis is located, its mid-point is considered the centre of the cell in the z direction.
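A simplified sketch of this step is given below (Python, operating on a NumPy conduction image). It scans a single column of the conduction image through the seed point; the described process may additionally move the seed point to maximise the front-to-back spacing. Treating a conduction value below 0.5 as an edge and the (x, y, z) axis ordering are illustrative assumptions.

def locate_z_centre(conduction, seed, edge_threshold=0.5):
    # Step 703 (sketch): scan along z in both directions from the seed point,
    # stopping at the first edge voxel (low conduction value) on each side,
    # and take the midpoint as the centre of the cell in z.
    x, y, z = seed
    column = conduction[x, y, :]            # assumes (x, y, z) axis ordering
    depth = column.shape[0]
    back = next((k for k in range(z, -1, -1) if column[k] < edge_threshold), 0)
    front = next((k for k in range(z, depth) if column[k] < edge_threshold), depth - 1)
    z_centre = (back + front) // 2
    z_length = front - back                 # spacing between front and back edges
    return z_centre, z_length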
Once this point has been located, then in step 704 the data is scanned in intervals through 360° around the z axis from the z centre to locate a longest major axis in the x-y plane. Again, the position of the z axis around which the 360° scanning takes place can be moved in the x-y plane until the best orientation and length of a longest major axis is found. Next, in step 705, a longest minor axis is identified. This is carried out by scanning orthogonal to the z axis and major axis at different points along the major axis, again looking for the longest spacing between conduction image values identified as feature edges in step 702. In step 706 the orientation and lengths of the z axis, longest major axis and longest minor axis are used to draw an ellipsoid representing an initial seed surface. In step 707 the ellipsoid is multiplied by the conduction image to inherit edge shapes. As noted above, features in the conduction image are zero and thus this multiplication serves to cut out portions of the seed surface which extend into and beyond the edge features.
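The in-plane scan of step 704 may be sketched as follows (Python with NumPy): rays are cast in opposite directions from the centre at regular angular intervals, each ray stopping at the first edge element, and the orientation giving the longest chord is kept. The angular step, the unit step length and the edge test (conduction below 0.5) are illustrative assumptions; the search for the longest minor axis in step 705 proceeds analogously, with rays cast orthogonally to the found major axis at points along its length.

import numpy as np

def longest_major_axis(conduction_slice, centre, n_angles=180, edge_threshold=0.5):
    # Step 704 (sketch): scan through 360 degrees around `centre` in the x-y
    # plane of the conduction image and return the length and orientation of
    # the longest chord bounded by edge features.
    rows, cols = conduction_slice.shape
    cx, cy = centre
    best_len, best_angle = 0.0, 0.0
    for angle in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        dx, dy = np.cos(angle), np.sin(angle)
        chord = 0.0
        for sign in (1.0, -1.0):            # walk outwards in both directions
            r = 0.0
            while True:
                px = int(round(cx + sign * r * dx))
                py = int(round(cy + sign * r * dy))
                if not (0 <= px < rows and 0 <= py < cols):
                    break                   # ran off the sub-image
                if conduction_slice[px, py] < edge_threshold:
                    break                   # reached an edge feature
                r += 1.0
            chord += r
        if chord > best_len:
            best_len, best_angle = chord, angle
    return best_len, best_angle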
We are looking for a single, continuous seed surface and therefore any holes in the seed surface may be filled and any spurious seed surfaces outside the main surface may be identified and discarded in step 708. In step 709 the seed surface is returned to the main process.

It will be noted that the filtering and segmentation processes, as well as the sub-processes of those processes described above, may all be used together to particularly good effect. However, in other embodiments only some of the processes or sub-processes may be used and useful benefits still realised. Thus, for example, generation of the seed surface might be carried out without the particular filtering described above and/or without the optimisation of K, or vice versa, and so on.
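Steps 706 to 709 may be sketched as follows (Python with NumPy/SciPy). For simplicity the ellipsoid is drawn axis-aligned, whereas the described process orients the major axis as found in step 704; the 0.5 cut-off applied after multiplying by the conduction image and the retention of the connected component containing the centre are likewise illustrative assumptions.

import numpy as np
from scipy.ndimage import binary_fill_holes, label

def build_seed_surface(conduction, centre, half_axes):
    # Step 706: rasterise an ellipsoid from the z, major and minor half-axis
    # lengths (axis-aligned here for simplicity; centre is given as integer
    # voxel indices in the assumed (x, y, z) axis ordering).
    xx, yy, zz = np.indices(conduction.shape)
    cx, cy, cz = centre
    ax, ay, az = half_axes
    ellipsoid = (((xx - cx) / ax) ** 2 +
                 ((yy - cy) / ay) ** 2 +
                 ((zz - cz) / az) ** 2) <= 1.0
    # Step 707: multiply by the conduction image so the seed inherits edge
    # shapes - edge features are close to 0, so the seed is cut there.
    masked = ellipsoid * conduction
    seed = masked > 0.5
    # Step 708: fill holes and discard any spurious pieces disconnected from
    # the piece containing the cell centre.
    seed = binary_fill_holes(seed)
    labels, _ = label(seed)
    centre_label = labels[cx, cy, cz]
    if centre_label != 0:
        seed = labels == centre_label
    return seed                             # step 709: the seed surface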
Figures 8 to 15 are images showing an example of the application of the above described processes/use of the above described apparatus.
Figure 8 shows an initial image which shows two cells joined together at a boundary. Although not clearly visible in the image, this is a two channel image with a membrane stain image showing up most clearly as the light coloured boundary of the cells and a dark nucleus stain image which can just about be made out in the centre of the cells.
Figure 9 shows the image of Figure 8 after the effect of filtering step 201 of Figure 2 and the splitting into sub-images step 202 of Figure 2 as well as after the background subtraction/cleaning process of step 404 in the segmentation process.
Figure 10 shows the same portion of the image shown in Figure 9, but represented as a conduction image.

Figure 11 schematically shows part of the scanning processes carried out in the generation of a seed surface in steps 704 and 705 of the process shown in Figure 7. At this stage the centre in z has already been determined, so we are just looking at the x-y plane, and the diamond shows the identified centre in the x-y plane. The lines represent the 360° scanning around the centre diamond to identify the longest major axis and, orthogonal to this, scans at different points along this longest major axis to identify a longest minor axis.
Note, of course, that in this process it is not essential that the longest z axis, minor axis or major axis is determined with 100% accuracy. What we are looking to do is achieve a good approximation of each of these to provide a useful starting point for a seed surface. Thus what is determined is a parameter taken as a longest z axis, a parameter taken as a longest major axis and a parameter taken as a longest minor axis, and these are used for the subsequent calculations. It is not intended that these are necessarily actually the longest of any of these axes; they are just the best identified values, taken to be such for the purposes of generating the seed surface.
Figure 12 shows an ellipsoid calculated using the longest z axis, the longest major axis and the longest minor axis calculated in accordance with steps 704, 705 and 706.
Figure 13, on the other hand, shows the result of step 707 where the ellipsoid seed surface shown in Figure 12 has been multiplied by the conduction image. This has a cookie cutter type effect on the surface and cuts out spurious parts of the surface which are not needed. The image shown in Figure 13 also represents the seed surface after any holes in the seed surface have been filled and any seed surface outside the main surface has been discarded, although this is not immediately apparent in the image itself.
Figure 14 represents the result of the segmentation process as applied to the seed surface. That is to say, after the seed surface has been operated on by the geodesic mean curvature flow filter in step 409 for the specified number of iterations.
Finally, Figure 15 shows the final output image, which is the original image as shown in Figure 8 with the segmented edge found by the filtering and segmentation process added to it. The cell edge c is shown as a dark solid line overlaid on the original image.
As noted above, the present embodiment of the invention makes use of the geodesic mean curvature flow filter for the filtering process and the segmentation process. However, similar results could be achieved using other non-linear diffusion filters such as those mentioned in the introduction. Furthermore, the optimisation process described above in relation to Figure 5 could be used in such cases, as could the process for determining a seed surface and cleaning the image and so on. The invention may be embodied in a method of processing image data, a computer program comprising code portions arranged to carry out the method of image processing, a computer programmed with such a program or a physical machine readable data carrier carrying such a computer program. Furthermore, the invention may be embodied in an image acquiring and processing apparatus comprising both a microscope or a similar piece of equipment for acquiring image data and a processing unit for processing that image data, and also optionally an output device such as a display for displaying the results. The physical machine readable data carrier may, for example, comprise a hard disk, a CD ROM, a DVD, a flash memory drive, and so on.

Claims
1. A method for identifying cell features in initial image data representing a sample of organic material comprising the steps of:
filtering the initial image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein using at least one of the first and second nonlinear filters comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter, and the method comprises the step of optimising the scalar parameter by the steps of:
selecting a trial value for the parameter;
and with the parameter set to the trial value,
determining the value of the function (x) for each image data element in a selected set of input image data;
determining the mean value of the function (x) across all of the image data elements in said selected set;
determining a first sum (y) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is greater than the mean (x);
determining a second sum (z) which is a sum of the values of the function (x) for each image data element in the selected set where the value of the function (x) is less than the mean (x); and
determining as a metric the value of the second sum (z) minus the value of the first sum (y);
repeating the above determining steps a plurality of times with respective different trial values of the parameter;
determining as an optimum value of said parameter, the trial value which gives a minimum value for the metric;
and using said optimum value for said parameter when using the respective nonlinear diffusion filter.
2. A method according to claim 1 in which using the first nonlinear filter comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter.
3. A method according to claim 2 in which the step of filtering the initial image data comprises iterative application of the first nonlinear diffusion filter.
4. A method according to claim 3 in which the step of optimising the scalar parameter is carried out for each application of the first nonlinear diffusion filter.
5. A method according to claim 4 comprising the step of determining how many iterations should be performed by monitoring the value of the optimised scalar parameter for each iteration and ceasing further iterations once it is determined that the values of the optimised parameter for successive iterations are equal to one another within a predetermined tolerance.
6. A method according to any preceding claim in which using the second nonlinear filter comprises using an edge preserving function which is operable on each image data element in a respective set of input image data, the strength of which function is dependent on a scalar parameter.
7. A method according to claim 6 in which the step of segmenting the filtered image data comprises iterative application of the second nonlinear diffusion filter.
8. A method according to claim 7 in which the same values of output of the edge preserving function for respective data elements are used in multiple iterations during segmentation.
9. A method according to any preceding claim in which at least one of the first and second nonlinear filters comprises a geodesic mean curvature flow filter.
10. A method according to any preceding claim in which at least one of the first and second nonlinear filters is treated as a discrete filter which is dependent on a time step size defined by the total time for filtering divided by the number of filtering steps, and the method comprises the step of determining the time step size in dependence on the physical calibration of the image, that is the size in the sample which each image data element represents.
11. A method according to any preceding claim comprising the step of splitting the filtered image data into sections prior to the segmentation step and performing the step of segmentation on each section.
12. A method according to any preceding claim in which the step of segmenting the filtered image data comprises a step of cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image feature rather than background;
calculating the value of the function for each image data element;
calculating a mean value for those image data elements for which the respective value of the function falls within the first part of the predetermined range;
setting to zero in the filtered image data the value of any image data element which is less than said mean value.
13. A method according to any preceding claim in which the step of segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image data and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image edge feature;
calculating the value of the function for each image data element in the filtered image data and identifying those image data elements representing an image edge feature to generate a processed image data set;
scanning, in intervals through 360 degrees around a selected point in a selected plane, the processed image data set to identify respective cell edges and, based on a longest determined spacing between identified cell edges, identify a longest major axis for the seed surface;
having identified the longest major axis, scanning in said plane and orthogonally to the longest major axis at points along its length to identify respective cell edges and, based on a longest determined spacing between identified cell edges, identify a longest minor axis for the seed surface;
generating one of an ellipse and an ellipsoid to represent the seed surface using the longest major axis and the longest minor axis.
14. A method for identifying cell boundaries in image data representing a sample of organic material comprising the steps of:
filtering the image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell boundaries;
wherein segmenting the filtered image data comprises a step of, before using the second nonlinear diffusion filter, calculating a seed surface for a cell for input into the second nonlinear diffusion filter, which step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image data and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image edge feature;
calculating the value of the function for each image data element in the filtered image data and identifying those image data elements representing an image edge feature to generate a processed image data set;
scanning, in intervals through 360 degrees around a selected point in a selected plane, the processed image data set to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest major axis for the seed surface;
having identified the longest major axis, scanning in said plane and orthogonally to the longest major axis at points along its length to identify respective cell edges and based on a longest determined spacing between identified cell edges, identify a longest minor axis for the seed surface;
generating one of an ellipse and an ellipsoid to represent the seed surface using the longest major axis and the longest minor axis.
15. A method according to claim 13 or claim 14 in which the image data is 3D image data, and the method comprises determining the selected point and the selected plane by selecting an initial seed point in a cell and scanning the processed image data set in two directions in a first dimension away from the initial seed point to identify respective cell edges, repeating the scanning process for other initial seed points and, based on a longest determined spacing between identified cell edges, identifying a first dimension axis for the seed surface, then choosing the selected point to be a midpoint of the first dimension axis of the cell and choosing the selected plane to be that plane which is orthogonal to the first dimension axis and contains the selected point, and
the step of generating one of an ellipse and an ellipsoid to represent the seed surface comprises generating an ellipsoid using the first dimension axis, the longest major axis and the longest minor axis.
16. A method according to any one of claims 13, 14 and 15 in which the method comprises the further step of modifying the seed surface using a conduction image which conduction image comprises the set of values resulting from operating on the filtered image data with the edge preserving function.
17. A method according to claim 16 in which modifying the seed surface comprises inheriting edge shapes from the conduction image.
18. A method for identifying cell features in image data representing a sample of organic material comprising the steps of:
filtering the image data using a first nonlinear diffusion filter;
segmenting the filtered image data using a second nonlinear diffusion filter to provide output image data which indicates identified cell features;
wherein segmenting the filtered image data comprises a step of cleaning the filtered image data before using the second nonlinear diffusion filter, which cleaning step comprises the steps of:
using an edge preserving function which is operable on each image data element in the filtered image and generates resulting output values which are within a predetermined range;
defining a first part of the predetermined range as being indicative of the respective image data element representing an image feature rather than background;
calculating the value of the function for each image data element;
calculating a mean value for those image data elements for which the respective value of the function falls within the first part of the predetermined range;
setting to zero in the filtered image data the value of any image data element which is less than said mean value.
19. A computer program comprising code portions which when loaded and run on a computer cause the computer to carry out the steps of any one of the methods claimed in claims 1 to 18.
20. A machine readable data carrier carrying a computer program as claimed in claim 19.
21. A computer arranged under the control of software to carry out the steps of any one of the methods claimed in claims 1 to 18.
22. Image processing apparatus comprising a processing unit arranged under the control of software to carry out the steps of any one of the methods claimed in claims 1 to 18.
23. Image acquiring and processing apparatus comprising a microscope for acquiring image data of a sample, and a processing unit arranged under the control of software to carry out the steps of any one of the methods claimed in claims 1 to 18 in respect of the acquired image data.
PCT/GB2015/050131 2014-02-27 2015-01-21 Method for identifying cell features WO2015128600A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1403494.6 2014-02-27
GBGB1403494.6A GB201403494D0 (en) 2014-02-27 2014-02-27 Identifying cell features

Publications (1)

Publication Number Publication Date
WO2015128600A1 true WO2015128600A1 (en) 2015-09-03

Family

ID=50490510

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2015/050131 WO2015128600A1 (en) 2014-02-27 2015-01-21 Method for identifying cell features

Country Status (2)

Country Link
GB (1) GB201403494D0 (en)
WO (1) WO2015128600A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2140424B1 (en) * 2007-03-22 2011-12-14 Harris Corporation Method and apparatus for processing sar images based on a complex anisotropic diffusion filtering algorithm
EP2447911A1 (en) * 2010-10-28 2012-05-02 Kabushiki Kaisha Toshiba Medical image noise reduction based on weighted anisotropic diffusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MIKULA K ET AL: "Segmentation of 3D cell membrane images by PDE methods and its applications", COMPUTERS IN BIOLOGY AND MEDICINE, NEW YORK, NY, US, vol. 41, no. 6, 24 March 2011 (2011-03-24), pages 326 - 339, XP028219714, ISSN: 0010-4825, [retrieved on 20110401], DOI: 10.1016/J.COMPBIOMED.2011.03.010 *

Also Published As

Publication number Publication date
GB201403494D0 (en) 2014-04-16

Similar Documents

Publication Publication Date Title
JP7022203B2 (en) Removal of artifacts from tissue images
JP7134303B2 (en) Focal Weighted Machine Learning Classifier Error Prediction for Microscopic Slide Images
Stegmaier et al. Real-time three-dimensional cell segmentation in large-scale microscopy data of developing embryos
Sosa et al. Development and application of MIPAR™: a novel software package for two-and three-dimensional microstructural characterization
Pham et al. Domain transformation-based efficient cost aggregation for local stereo matching
US6529612B1 (en) Method for acquiring, storing and analyzing crystal images
US7502499B2 (en) System and method for filtering noise from a medical image
Morales et al. Espina: a tool for the automated segmentation and counting of synapses in large stacks of electron microscopy images
CN111819569A (en) Virtual staining of tissue slice images
Alioscha-Perez et al. A robust actin filaments image analysis framework
Sharma et al. Edge detection using Moore neighborhood
Sadanandan et al. Segmentation and track-analysis in time-lapse imaging of bacteria
Aldea et al. Robust crack detection for unmanned aerial vehicles inspection in an a-contrario decision framework
US20160063721A1 (en) Transformation of 3-D object for object segmentation in 3-D medical image
EP3329458A1 (en) Systems and methods for automated segmentation of individual skeletal bones in 3d anatomical images
CN110288613B (en) Tissue pathology image segmentation method for ultrahigh pixels
Pop et al. Extracting 3D cell parameters from dense tissue environments: application to the development of the mouse heart
WO2018083142A1 (en) Systems and methods for encoding image features of high-resolution digital images of biological specimens
Lee et al. Optimizing image focus for 3D shape recovery through genetic algorithm
Stegmaier New methods to improve large-scale microscopy image analysis with prior knowledge and uncertainty
Knötel et al. Automated segmentation of complex patterns in biological tissues: lessons from stingray tessellated cartilage
WO2015128600A1 (en) Method for identifying cell features
Schubert et al. Efficient computation of greyscale path openings
Singh et al. Non Uniform Background Removal using Morphology based Structuring Element for Particle Analysis
CN105844699B (en) Fluorescence microscope images three-dimensional rebuilding method and system based on compound Regularization Technique

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15701397

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15701397

Country of ref document: EP

Kind code of ref document: A1