WO2008024081A1 - Methods, apparatus and computer-readable media for image segmentation - Google Patents

Info

Publication number
WO2008024081A1
WO2008024081A1 (PCT/SG2007/000273)
Authority
WO
WIPO (PCT)
Prior art keywords
intensity
range
pixels
original image
value
Prior art date
Application number
PCT/SG2007/000273
Other languages
French (fr)
Inventor
Qingmao Hu
Yu Qiao
Guoyu Qian
Original Assignee
Agency For Science, Technology And Research
Priority date
Filing date
Publication date
Application filed by Agency For Science, Technology And Research filed Critical Agency For Science, Technology And Research
Publication of WO2008024081A1 publication Critical patent/WO2008024081A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/143Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding

Definitions

  • Step 202 An original image F is received.
  • the original image F can be received from an image acquisition device via the input 108 or by reading data from one of the memories 104, 106, 112.
  • the original image F can be the result of exposing a subject's body part (e.g., the brain) to penetrating radiation.
  • the original image F comprises a plurality of pixels p(x,y), where x and y refer to the two-dimensional coordinates of a given pixel within a Cartesian grid.
  • Pixel p(x,y) has an intensity value (or grayscale) denoted f(x,y), which can range from r(0) to r(L-1).
  • r(0) may be zero and L may be a power of two.
  • It will be appreciated that the aforesaid values and ranges are merely examples, and that the present invention is not particularly limited to images of any particular size or having any particular range of possible grayscales.
  • Step 204 A threshold T(F) is determined for the original image F received at step 202.
  • the threshold T(F) is determined as a function of the grayscales of the various pixels p(x,y) in the original image F.
  • the threshold represents a grayscale that will be used in step 206 below to segregate pixels having a grayscale below the threshold T(F) from those having a grayscale greater than or equal to the threshold T(F). Further detail regarding possible implementations of step 204 will be given later on.
  • Step 206 The original image F is thresholded using the threshold T(F) determined at step 204. That is to say, the grayscales f(x,y) are transformed into thresholded grayscales f*(x,y). Specifically, each of the pixels p(x,y) is considered, and if the corresponding grayscale f(x,y) is below the threshold T(F), then the corresponding thresholded grayscale f*(x,y) is set to a default value (such as r(0)), whereas it remains unchanged if it is at or above the threshold T(F).
  • The outcome of step 206 is therefore a thresholded image F* composed of the pixels p(x,y) having thresholded grayscales f*(x,y).
  • Step 208 The thresholded image F* is output, e.g., to a display device, a printer, a speaker, a computer-writable medium, etc., where further processing of the thresholded image F* may take place.
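By way of a non-limiting illustration, the thresholding of step 206 can be sketched as follows (a minimal NumPy sketch; the function name, array values and fixed threshold are invented for the example, and in practice the threshold would be supplied by step 204):

```python
import numpy as np

def threshold_image(f, t, default=0):
    """Step 206: pixels whose grayscale is below the threshold T(F) are set
    to a default value (such as r(0) = 0); pixels at or above the threshold
    keep their original grayscale."""
    f_star = f.copy()
    f_star[f < t] = default
    return f_star

# Hypothetical image: dark background (left columns), bright object (right).
f = np.array([[10, 12, 200, 210],
              [11, 14, 205, 220],
              [ 9, 13, 198, 215]])
f_star = threshold_image(f, t=100)   # background zeroed, object kept
```

The resulting array f_star is the thresholded image F* that step 208 would output.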
  • A more detailed description of step 204 (i.e., the process of determining the threshold T(F)) is now provided with reference to the flowchart in Fig. 3, which includes steps 302 through 308.
  • Step 302 The original image F is processed to determine a range of grayscales defined by a lower bound θ0 and an upper bound θ1.
  • This range of grayscales will be used to constrain the eventual choice of the threshold T(F).
  • One approach to determine the aforesaid range of grayscales is through "supervision", which can incorporate the use of training samples or approximation based on prior knowledge or visual assessment. A non-exhaustive list and accompanying description of four (4) supervision techniques is provided below.
  • A first supervision technique is based on the object/background proportion, as described in Hu Q, Hou Z, Nowinski WL, Supervised range-constrained thresholding, IEEE Transactions on Image Processing 2006; 15(1): 228-240, hereby incorporated by reference herein. Specifically, suppose that it is concluded that the background proportion (i.e., the percentage of pixels of the original image F that are deemed to constitute background) is in the range of (H0, H1).
  • Here H(i) represents the cumulative frequency of occurrence of all grayscales r(i), r(i-1), ..., r(0), namely H(i) = h(0) + h(1) + ... + h(i), where h(i) represents the frequency of occurrence of grayscale r(i) in the original image F. Note that H(L-1) is equal to unity. The bounds θ0 and θ1 can then be taken as the grayscales at which H(i) reaches the lower and upper limits of the assumed background-proportion range.
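As a concrete sketch of this first technique, the bounds can be read off the cumulative histogram. The sketch below encodes one plausible reading, in which θ0 and θ1 are taken as the grayscales at which the cumulative frequency H(i) first reaches the assumed lower and upper limits of the background proportion; the function name and example values are invented:

```python
import numpy as np

def range_from_background_proportion(f, h_low, h_high, levels=256):
    """Estimate (theta0, theta1) as the grayscales where the cumulative
    histogram H(i) first reaches the assumed lower / upper bound on the
    background proportion."""
    h, _ = np.histogram(f, bins=levels, range=(0, levels))
    H = np.cumsum(h) / f.size            # H(i), with H(L-1) == 1
    theta0 = int(np.searchsorted(H, h_low))
    theta1 = int(np.searchsorted(H, h_high))
    return theta0, theta1

# Hypothetical image: 75% background at grayscale 0, 25% object at 200.
f = np.concatenate([np.zeros(75, int), np.full(25, 200)])
theta0, theta1 = range_from_background_proportion(f, 0.5, 0.9)
```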
  • A second supervision technique involves estimating the lower bound θ0 and the upper bound θ1 directly from the original image F. Specifically, an observer is deemed to know approximately what is considered "object" and what is considered "background". (It can be assumed without loss of generality that the object is brighter than the background.) The observer can pick out any pixel of the background and set its grayscale to θ0, and similarly pick out any pixel of the object and set its grayscale to θ1. To make the range tighter, the selected background pixel should be as bright as possible and the selected object pixel should be as dark as possible.
  • A third supervision technique involves estimating the lower bound θ0 and the upper bound θ1 through fuzzy c-means clustering (FCM) or other segmentation methods capable of segregating the background from the object.
  • The average grayscale of the background can be set to θ0, and the average grayscale of the object can be set to θ1.
  • the expression "average” is meant to encompass any of the mean, median or mode.
  • A fourth supervision technique involves deriving θ0 and θ1 when sample images with ground truth are available.
  • A plurality of grayscale statistics (such as mean intensity, minimum intensity, maximum intensity, the intensity at a specified percentile, or the average intensity within a specified percentile) are calculated for a plurality of training images.
  • For each training image, the optimum grayscale threshold (i.e., the threshold that yields the smallest segmentation error) is determined.
  • The one particular statistic is found for which the ratio of the value of that statistic to the optimum grayscale threshold varies in the narrowest range, denoted (R0, R1), over the various training images.
  • Denoting by rm the value of that statistic for the image to be segmented, and by δR a small margin, the bounds can be set as θ0 = (R0 - δR) × rm and θ1 = (R1 + δR) × rm.
  • Step 304 A modified image F′ is created from the original image F by changing the grayscale of each pixel whose grayscale lies outside the range determined at step 302 to the nearer of the bounds θ0 and θ1. The outcome of step 304 is therefore a modified image F′ composed of the pixels p(x,y) having modified grayscales f′(x,y).
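The clipping operation that produces the modified image can be sketched in one line with NumPy (the function name is invented; θ0 and θ1 would come from step 302):

```python
import numpy as np

def clip_to_range(f, theta0, theta1):
    """Create the modified image F': grayscales below theta0 are raised to
    theta0, grayscales above theta1 are lowered to theta1, and grayscales
    inside the range are left unchanged."""
    return np.clip(f, theta0, theta1)

f = np.array([[5, 50], [120, 250]])
f_mod = clip_to_range(f, 20, 200)   # [[20, 50], [120, 200]]
```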
  • Step 306 The modified image F′ is processed to identify a subset of the pixels p(x,y) as transition pixels.
  • Transition pixels are pixels belonging to a transition region, i.e., a region geometrically located between the object and the background and composed of pixels having intermediate grayscales between those of the object and of the background.
  • In one approach, a local entropy image LE having values LE(x,y) is computed from the modified image F′, and a threshold T(LE) determines which pixels of the local entropy image LE correspond to transition pixels.
  • T(LE) is determined as T(LE) = a × LEmax, where a is a constant in the range of [0.6, 1) and is empirically taken as 0.7, while LEmax is the maximum local entropy of all the pixels.
  • a may have a different value, or may be constrained to a different range of values, without departing from the scope of the present invention.
  • a may be selected according to any suitable procedure, including automatically or manually.
  • An image of transition pixels ITR having grayscales ITR(x,y) is then determined through thresholding of the modified image F′ based on T(LE): ITR(x,y) = f′(x,y) if LE(x,y) ≥ T(LE), and ITR(x,y) = 0 otherwise.
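A minimal sketch of this local-entropy route is given below (pure NumPy; the 3×3 window size, the edge handling and the function names are choices made for the example rather than taken from the patent):

```python
import numpy as np

def local_entropy_image(f, win=3, levels=256):
    """LE(x,y): Shannon entropy of the grayscale histogram inside a
    win x win neighbourhood; border pixels are handled by edge padding."""
    pad = win // 2
    fp = np.pad(f, pad, mode="edge")
    le = np.zeros(f.shape, dtype=float)
    for y in range(f.shape[0]):
        for x in range(f.shape[1]):
            block = fp[y:y + win, x:x + win]
            p = np.bincount(block.ravel(), minlength=levels) / block.size
            p = p[p > 0]
            le[y, x] = -(p * np.log2(p)).sum()
    return le

def transition_pixels_le(f_mod, a=0.7):
    """ITR(x,y) = f'(x,y) where LE(x,y) >= T(LE) = a * LE_max, else 0."""
    le = local_entropy_image(f_mod)
    t_le = a * le.max()
    return np.where(le >= t_le, f_mod, 0)

# Hypothetical modified image: a vertical step edge between 0 and 100.
f = np.zeros((5, 5), dtype=int)
f[:, 3:] = 100
itr = transition_pixels_le(f)   # non-zero only near the step edge
```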
  • Alternatively, the effective average gradient (EAG) can be defined as the average gradient of all pixels with a non-zero gradient: EAG = TG / TP, where TG = Σx,y g(x,y) is the total gradient and TP = Σx,y q(x,y) counts the contributing pixels, with q(x,y) = 1 when g(x,y) ≠ 0 and q(x,y) = 0 otherwise.
  • g(x,y) is the magnitude of any suitable gradient operator at pixel p(x,y), including but not limited to the Sobel operator.
  • The Sobel operator uses two 3x3 kernels which are convolved with the modified image F′ to calculate approximations of two derivatives, one for horizontal changes and one for vertical changes.
  • the Pythagorean sum of the two derivatives yields the magnitude of the gradient.
  • Other operators that may also be suitable include Roberts Cross, Prewitt, Canny, Marr-Hildreth, as well as techniques based on zero-crossings of the second-order derivative in the gradient direction.
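For concreteness, the Sobel gradient magnitude can be computed as follows (a direct, unoptimized convolution written for clarity; production code would typically use a library routine such as scipy.ndimage.sobel):

```python
import numpy as np

def sobel_magnitude(f):
    """Gradient magnitude g(x,y) via the two 3x3 Sobel kernels: one for
    horizontal changes, one for vertical, combined as the Pythagorean
    sum of the two derivative approximations."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(f.astype(float), 1, mode="edge")
    gx = np.zeros(f.shape)
    gy = np.zeros(f.shape)
    for y in range(f.shape[0]):
        for x in range(f.shape[1]):
            block = pad[y:y + 3, x:x + 3]
            gx[y, x] = (kx * block).sum()
            gy[y, x] = (ky * block).sum()
    return np.hypot(gx, gy)
```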
  • To this end, the clip transformation is introduced, which is a function of both the pixel position (x,y) and the clip grayscale c, as defined below: fc,low(x,y) = min(f′(x,y), c) and fc,high(x,y) = max(f′(x,y), c).
  • The effective average gradients of fc,low(x,y) and fc,high(x,y) are functions of the clip grayscale c and are denoted EAGlow(c) and EAGhigh(c), respectively.
  • Two quantities, denoted θlow and θhigh, can be calculated as the clip grayscales maximizing these functions: θlow = argmaxc EAGlow(c) and θhigh = argmaxc EAGhigh(c).
  • The image of transition pixels ITR having grayscales ITR(x,y) is then determined through thresholding of the modified image F′ based on θlow and θhigh as follows: ITR(x,y) = f′(x,y) if θlow < f′(x,y) < θhigh, and ITR(x,y) = 0 otherwise.
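The EAG-based selection can be sketched as an exhaustive sweep over clip grayscales. The sketch below reflects one plausible formalization of the MEAG idea (np.gradient stands in for the Sobel magnitude, and the min/max clip definitions and argmax selection are assumptions, not necessarily the patent's exact formulas):

```python
import numpy as np

def eag(f):
    """Effective average gradient: mean gradient magnitude over the
    pixels whose gradient is non-zero (zero-gradient pixels excluded)."""
    gy, gx = np.gradient(f.astype(float))
    g = np.hypot(gx, gy)
    nz = g[g > 0]
    return nz.mean() if nz.size else 0.0

def meag_bounds(f_mod):
    """Sweep the clip grayscale c: clip the modified image from above
    (f_low = min(f', c)) and from below (f_high = max(f', c)) and keep
    the c maximizing EAG_low(c) and EAG_high(c), respectively."""
    levels = np.arange(int(f_mod.min()) + 1, int(f_mod.max()))
    eag_low = [eag(np.minimum(f_mod, c)) for c in levels]
    eag_high = [eag(np.maximum(f_mod, c)) for c in levels]
    return levels[int(np.argmax(eag_low))], levels[int(np.argmax(eag_high))]

# Hypothetical modified image with background 10, transition 50, object 200.
f_mod = np.tile(np.array([10, 10, 50, 200, 200]), (5, 1))
theta_low, theta_high = meag_bounds(f_mod)
```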
  • Step 308 The image of transition pixels ITR is processed to determine the threshold T(F) for thresholding of the original image F. For example, one can take the average grayscale of all non-zero pixels of the image of transition pixels ITR as follows: T(F) = (Σx,y ITR(x,y)) / N, where N is the number of pixels for which ITR(x,y) is non-zero.
  • The threshold T(F) can also be calculated by analyzing the intensity histogram of the image of transition pixels ITR, such as by setting the threshold T(F) to be the grayscale that maximizes the between-class grayscale variance of ITR(x,y) over its pixels p(x,y). Specifically, for all non-zero ITR(x,y), find the maximum and minimum grayscales and denote them rmax and rmin, respectively. Let r(k) denote a candidate grayscale with rmin < r(k) ≤ rmax.
  • Each of the pixels of the image of transition pixels ITR is classified as background when its grayscale ITR(x,y) falls within (rmin, r(k)), and as object otherwise.
  • The probabilities of the background and the object, P0 and P1, are the fractions of the non-zero pixels of ITR falling in each class, and the between-class variance is σB²(r(k)) = P0 × P1 × (μ0 - μ1)², where μ0 and μ1 are the average grayscales of the two classes.
  • The threshold is then T(F) = argmaxr(k) σB²(r(k)).
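This between-class-variance rule can be sketched as an exhaustive search over candidate grayscales. The sketch uses the standard decomposition σB² = P0·P1·(μ0 - μ1)², which is assumed here to match the patent's intent; the function name is invented:

```python
import numpy as np

def transition_threshold(itr):
    """Pick T(F) as the grayscale r(k) maximizing the between-class
    variance sigma_B^2 = P0 * P1 * (mu0 - mu1)^2 over the non-zero
    pixels of the transition-pixel image ITR."""
    vals = itr[itr > 0]
    best_t, best_var = int(vals.min()), -1.0
    for t in range(int(vals.min()) + 1, int(vals.max()) + 1):
        low, high = vals[vals < t], vals[vals >= t]
        if low.size == 0 or high.size == 0:
            continue
        p0, p1 = low.size / vals.size, high.size / vals.size
        var = p0 * p1 * (low.mean() - high.mean()) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Hypothetical ITR values: two clusters of transition-pixel grayscales.
itr = np.array([0, 0, 10, 11, 12, 90, 91, 92])
t_f = transition_threshold(itr)   # lands between the two clusters
```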
  • embodiments of the present invention may be used in applications where one encounters the need or desire to extract an object with multiple regions of different grayscale ranges, and/or to suppress variations in the background.
  • The present method may therefore be suitable in pathology detection, such as infarct detection from diffusion weighted imaging (DWI) images for stroke quantification.
  • the present methods may be suitable for processing other types of images, both medical and non-medical, including CT, MRI, X-ray, ultrasound, radar and natural scenes, to name a few non-limiting possibilities.
  • computer-readable medium refers to any medium accessible by a computing apparatus (such as the computer 100), which can be removable or non-removable, volatile or non- volatile, and may be embodied as any magnetic, optical, or solid state storage device or combination of devices, or as any other medium or combination of media which may encode or otherwise store computer-executable instructions and/or data.

Abstract

A method of processing an original image that comprises a plurality of pixels having respective intensity values. The method includes determining a range of intensity values based on processing of the original image; creating a modified image by changing the intensity value of those pixels in the original image having an intensity value outside said range to either an upper or a lower bound for said range; identifying a subset of the pixels in the modified image as transition pixels based on the intensity values of the pixels in the modified image; and determining a threshold for thresholding of the original image based on the intensity values of the transition pixels.

Description

METHODS, APPARATUS AND COMPUTER-READABLE MEDIA FOR
IMAGE SEGMENTATION
CROSS-REFERENCE TO RELATED APPLICATION
This application claims the benefit of U.S. provisional application serial no. 60/839,712 to Hu et al., filed on August 24, 2006, the contents of which are incorporated herein by reference.
FIELD OF THE INVENTION
The present invention relates to methods, apparatus and computer-readable media for the segmentation of images, particularly those images containing multiple regions of pixels in different grayscale ranges.
BACKGROUND
A grayscale image typically exhibits what an observer would consider to be an object against a background. The process of "image segmentation" is typically used to separate the object from the background, thus simplifying and/or changing the representation of the image into something that is more meaningful and easier to analyze.
Some of the practical applications of image segmentation are found in medical imaging, where image segmentation techniques can be used to locate tumors and other pathologies, measure tissue volumes, effect computer-guided surgery, facilitate diagnosis and treatment planning, and assist in the study of anatomical structures. Other fields where image segmentation has practical applications include the location of objects (roads, forests, etc.) in satellite images, face recognition, automatic traffic control systems and machine vision. One type of image segmentation process involves computing a threshold grayscale value (hereinafter referred to as a "grayscale threshold" or simply "threshold") using an algorithm, and then thresholding the image based on the computed threshold. The result is a thresholded image, from which one can evaluate the suitability of the computed threshold by observing how well the object has been separated from the background. In a scenario where the segmentation process is to be at least partly automated, a reliable algorithm is needed for computing the threshold in order to achieve consistently superior results.
Accordingly, certain techniques for computing a threshold from an original grayscale image have been developed and can be referred to as transition-region-based threshold computation algorithms, since they pay special attention to so-called "transition regions" in the image. Transition regions are those regions of the image that are geometrically located between the object and the background, and are composed of pixels which have intermediate grayscales values between those of the object and of the background.
The first step in a transition-region-based threshold computation algorithm is to locate the transition regions themselves. Once the transition regions have been located, further processing of the pixels within those transition regions is performed in order to produce a threshold for thresholding the original image.
In one prior art method, which is described in Zhang Y, Gerbrands JJ, Transition region determination based thresholding, Pattern Recognition Letters 1991; 12: 13-23, transition regions are located by maximizing an effective average gradient (the "MEAG" technique). In another prior art method, which is described in Yan C, Sang N, Zhang T, Local entropy-based transition region extraction and thresholding, Pattern Recognition Letters 2003; 24: 2935-2941, transition regions are located by a local entropy ("LE") technique.
However, there are instances where conventional approaches such as those mentioned above may fail to locate the transition regions with sufficient accuracy. This has been found to be the case particularly when there are two or more regions of the object whose pixels occupy different grayscale ranges, or when the pixels in the background occupy a wide range of grayscale values. The result is a computed threshold that leads to a less-than-satisfactory separation between object and background in the thresholded image. For example, the MEAG technique tends to produce a thresholded image containing only the brightest pixels of the original image, while the LE technique tends to result in too much of the original background being incorporated into the thresholded image.
Thus, there exists a need in the industry for improved methods, apparatus and computer-readable media for the segmentation of images, particularly those images containing multiple regions of pixels in different grayscale ranges.
SUMMARY OF THE INVENTION
A first broad aspect of the present invention seeks to provide a method of processing an original image comprising a plurality of pixels having respective intensity values. The method comprises determining a range of intensity values based on processing of the original image; creating a modified image by changing the intensity value of those pixels in the original image having an intensity value outside said range to either an upper or a lower bound for said range; identifying a subset of the pixels in the modified image as transition pixels based on the intensity values of the pixels in the modified image; and determining, based on the intensity values of the transition pixels, a threshold for thresholding of the original image.
A second broad aspect of the present invention seeks to provide a computer-readable medium comprising computer-readable program code which, when interpreted by a computing apparatus, causes the computing apparatus to execute a method of processing an original image comprising a plurality of pixels having respective intensity values. The computer-readable program code comprises first computer- readable program code for causing the computing apparatus to determine a range of intensity values based on processing of the original image; second computer-readable program code for causing the computing apparatus to create a modified image by changing the intensity value of those pixels in the original image having an intensity value outside said range to either an upper or a lower bound for said range; third computer-readable program code for causing the computing apparatus to identify a subset of the pixels in the modified image as transition pixels based on the intensity values of the pixels in the modified image; and fourth computer-readable program code for causing the computing apparatus to determine, based on the intensity values of the transition pixels, a threshold for thresholding of the original image.
A third broad aspect of the present invention seeks to provide an apparatus for processing an original image comprising a plurality of pixels having respective intensity values. The apparatus comprises means for determining a range of intensity values based on processing of the original image; means for creating a modified image by changing the intensity value of those pixels in the original image having an intensity value outside said range to either an upper or a lower bound for said range; means for identifying a subset of the pixels in the modified image as transition pixels based on the intensity values of the pixels in the modified image; and means for determining, based on the intensity values of the transition pixels, a threshold for thresholding of the original image.
A fourth broad aspect of the present invention seeks to provide a computing apparatus, which comprises an input configured to receive an original image comprising a plurality of pixels having respective intensity values. The computing apparatus further comprises a processing entity configured to determine a range of intensity values based on processing of the original image; create a modified image by changing the intensity value of those pixels in the original image having an intensity value outside said range to either an upper or a lower bound for said range; identify a subset of the pixels in the modified image as transition pixels based on the intensity values of the pixels in the modified image; determine a threshold value based on the intensity values of the transition pixels; and threshold said original image based on said threshold value. The computing apparatus also comprises an output configured to output the thresholded image. These and other aspects and features of the present invention will now become apparent to those of ordinary skill in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying drawings.
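The four operations recited in these aspects can be sketched end to end as follows (a minimal NumPy sketch with simple stand-ins: a gradient-magnitude rule replaces the LE- and EAG-based transition-pixel procedures detailed elsewhere in this document, the mean of the transition pixels serves as the threshold, and all names and values are invented):

```python
import numpy as np

def segment(f, theta0, theta1, a=0.7):
    """Sketch of the four claimed operations:
    (1) the range [theta0, theta1] is assumed given (any of the
        supervision techniques could supply it);
    (2) the modified image clips grayscales to that range;
    (3) transition pixels are taken, for illustration only, as clipped
        pixels whose gradient magnitude is at least a fraction a of the
        maximum;
    (4) the threshold is the mean grayscale of the transition pixels,
        and the original image is thresholded against it."""
    f_mod = np.clip(f, theta0, theta1)                      # step (2)
    gy, gx = np.gradient(f_mod.astype(float))
    g = np.hypot(gx, gy)
    transition = f_mod[g >= a * g.max()] if g.max() > 0 else f_mod.ravel()
    t = transition.mean()                                   # step (4)
    out = f.copy()
    out[f < t] = 0
    return out, t

# Hypothetical image: background 10, object 200, with a step edge.
f = np.full((5, 6), 10.0)
f[:, 3:] = 200.0
out, t = segment(f, 5, 210)
```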
BRIEF DESCRIPTION OF THE DRAWINGS
In the accompanying drawings:
Fig. 1 shows a computer having a processing entity capable of executing computer-readable instructions for performing methods according to various non-limiting embodiments of the present invention;
Fig. 2 is a flowchart illustrating steps of a method for segmenting an image, in accordance with a specific non-limiting example embodiment of the present invention, including a step of determining a threshold to be used for segmentation; and
Fig. 3 is a flowchart illustrating various sub-steps corresponding to the aforementioned step of determining a threshold to be used for segmentation, in accordance with a specific non-limiting example embodiment of the present invention.
It is to be expressly understood that the description and drawings are only for the purpose of illustration of certain embodiments of the invention and are an aid for understanding. They are not intended to be a definition of the limits of the invention.
DETAILED DESCRIPTION OF NON-LIMITING EMBODIMENTS
Non-limiting embodiments of the present invention provide methods of processing an image comprising a plurality of picture elements (pixels). These methods may be performed, at least in part, by a computing apparatus such as a computer 100 shown in Fig. 1. The computer 100 has a processing entity 102 communicatively coupled to a first memory 104, a second memory 106, an input 108 and an output 110. The processing entity 102 may include one or more processors for processing computer-executable instructions and data. It will be understood by those of ordinary skill in the art that the computer 100 may also include other components not shown in Fig. 1. Also, it should be appreciated that the computer 100 may communicate with other apparatuses and systems (not shown) over a network (not shown).
The first memory 104 can be an electronic storage comprising a computer-readable medium for storing computer-executable instructions and/or data. The first memory 104 is readily accessible by the processing entity 102 at runtime and may include a random access memory (RAM) for storing computer-executable instructions and data at runtime. The second memory 106 can be an electronic storage comprising a computer-readable medium for storing computer-executable instructions and/or data. The second memory 106 may include persistent storage memory for storing computer-executable instructions and data permanently, typically in the form of electronic files.
The input 108 may be used to receive input from a user 114. The input 108 may include one or more input devices, examples of which include but are not limited to a keyboard, a mouse, a microphone, an image acquisition apparatus (e.g., a scanner, a camera, an x-ray machine, etc.), a computer-readable medium such as a removable memory 112 as well as any requisite device for accessing such medium. The input devices may be locally or remotely connected to the processing entity 102, either physically or by way of a communication connection.
The output 110 may include one or more output devices, which may include a display device, such as a screen/monitor. Other examples of output devices include, without limitation, a printer, a speaker, as well as a computer-writable medium and any requisite device for writing to such medium. The output devices may be locally or remotely connected to processing entity 102, either physically or by way of a communication connection.
When the processing entity 102 executes computer-executable instructions stored by one or more of the memories 104, 106, 112, the computer 100 can be caused to carry out one or more of the methods described herein. As can be appreciated, the methods described herein may also be carried out using a hardware device having circuits for performing one or more of the calculations or functions described herein. A high-level non-limiting description of an example method of image segmentation that may be performed by the computer 100 when executing computer-readable instructions stored in one or more of the memories 104, 106, 112 is now provided with reference to the flowchart in Fig. 2, which includes steps 202 through 208.
Step 202: An original image F is received. In certain non-limiting embodiments, the original image F can be received from an image acquisition device via the input 108 or by reading data from one of the memories 104, 106, 112. In a non-limiting application, the original image F can be the result of exposing a subject's body part (e.g., the brain) to penetrating radiation.
The original image F comprises a plurality of pixels p(x,y), where x and y refer to the two-dimensional coordinates of a given pixel within a Cartesian grid. In a 256-by-256 pixel image, the domains of x and y are each [0, 255]. Pixel p(x,y) has an intensity value (or grayscale) denoted f(x,y), which can range from r(0) to r(L-1). In some embodiments, r(0) may be zero and L may be a power of two. Of course, it should be understood that the aforesaid values and ranges are merely examples, and that the present invention is not particularly limited to images of any particular size or having any particular range of possible grayscales.
Step 204: A threshold T(F) is determined for the original image F received at step 202. The threshold T(F) is determined as a function of the grayscales of the various pixels p(x,y) in the original image F. The threshold represents a grayscale that will be used in step 206 below to segregate pixels having a grayscale below the threshold T(F) from those having a grayscale greater than or equal to the threshold T(F). Further detail regarding possible implementations of step 204 will be given later on.
Step 206: The original image F is thresholded using the threshold T(F) determined at step 204. That is to say, the grayscales f(x,y) are transformed to thresholded grayscales f*(x,y) as follows:

f*(x,y) = f(x,y) if f(x,y) ≥ T(F), and f*(x,y) = r(0) otherwise.

Specifically, each of the pixels p(x,y) is considered, and if the corresponding grayscale f(x,y) is below the threshold T(F), then the corresponding thresholded grayscale f*(x,y) is set to a default value (such as r(0)), whereas it remains unchanged if it is at or above the threshold T(F). The outcome of step 206 is therefore a thresholded image F* composed of the pixels p(x,y) having thresholded grayscales f*(x,y).
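The step-206 rule maps directly onto a single array operation. The following is a minimal sketch in NumPy (the function name, toy array and threshold value are illustrative choices, not taken from the specification), assuming r(0) = 0:

```python
import numpy as np

def threshold_image(f, t, default=0):
    """Step 206 sketch: grayscales at or above the threshold t are kept,
    all others are set to a default value (here r(0) = 0)."""
    return np.where(np.asarray(f) >= t, f, default)

# Toy 2x3 "image" thresholded at T(F) = 100.
F = np.array([[10, 120, 99],
              [200, 100, 50]])
F_star = threshold_image(F, 100)   # [[0, 120, 0], [200, 100, 0]]
```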
Step 208: The thresholded image F* is output, e.g., to a display device, a printer, a speaker, a computer-writable medium, etc., where further processing of the thresholded image F* may take place.
Further detail regarding possible implementations of step 204, i.e., the process of determining the threshold T(F), is now provided with reference to the flowchart in Fig. 3, which includes steps 302 through 308.
Step 302: The original image F is processed to determine a range of grayscales defined by a lower bound θ0 and an upper bound θ\. This range of grayscales will be used to constrain the eventual choice of the threshold T(F). One approach to determine the aforesaid range of grayscales is through "supervision", which can incorporate the use of training samples or approximation based on prior knowledge or visual assessment. A non-exhaustive list and accompanying description of four (4) supervision techniques is provided below.
A first supervision technique is based on the object/background proportion as described in Hu Q, Hou Z, Nowinski WL, Supervised range-constrained thresholding, IEEE Transactions on Image Processing 2006; 15(1): 228-240, hereby incorporated by reference herein. Specifically, suppose that it is concluded that the background proportion (i.e., the percentage of pixels of the original image F that are deemed to constitute background) is in the range (Hmin, Hmax). This conclusion can be reached by analyzing the original image F using one of at least three techniques (namely, training of samples, approximation based on prior knowledge, and approximation using visual judgement of the background proportion). Then, based on Hmin and Hmax, the lower bound θ0 and the upper bound θ1 are determined in the following way:

θ0 = min{ i | H(i) > Hmin },  θ1 = min{ i | H(i) ≥ Hmax },

where H(i) represents the cumulative frequency of occurrence of all grayscales r(0), r(1), ..., r(i), namely H(i) = Σj=0..i h(j), and h(i) represents the frequency of occurrence of grayscale r(i) in the original image F. One will of course appreciate that H(L-1) is equal to unity.
A second supervision technique involves estimating the lower bound θ0 and the upper bound θ1 directly from the original image F. Specifically, an observer is deemed to know approximately what is considered "object" and what is considered "background". (It can be assumed without loss of generality that the object is brighter than the background.) The observer can pick out any pixel of the background and use its grayscale as θ0, and similarly pick out any pixel of the object and use its grayscale as θ1. To make the range tighter, the selected background pixel should be as bright as possible and the selected object pixel should be as dark as possible.
A third supervision technique involves estimating the lower bound θ0 and the upper bound θ1 through fuzzy c-means clustering (FCM) or other segmentation methods capable of segregating the background from the object. The average grayscale of the background can be set to θ0, and the average grayscale of the object can be set to θ1. The expression "average" is meant to encompass any of the mean, median or mode.
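As a rough illustration of this third technique, the sketch below uses a plain two-class k-means (a hard-clustering stand-in for the fuzzy c-means named in the text, not the method itself) to obtain the two average grayscales; all names are illustrative:

```python
import numpy as np

def two_class_means(f, iters=20):
    """Third supervision technique (rough sketch): split the grayscales
    into two clusters and return (theta0, theta1) as the darker and
    brighter cluster means. Hard k-means is used here as a simple
    stand-in for FCM; it assumes at least two distinct grayscales."""
    v = np.asarray(f, dtype=float).ravel()
    c0, c1 = v.min(), v.max()                    # initial centroids
    for _ in range(iters):
        lab = np.abs(v - c0) > np.abs(v - c1)    # True -> brighter cluster
        c0, c1 = v[~lab].mean(), v[lab].mean()
    return min(c0, c1), max(c0, c1)
```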
For more information regarding FCM, the reader is directed to: http://www.elet.polimi.it/upload/matteucc/Clustering/tutorial_html/cmeans.html.
A fourth supervision technique involves deriving θ0 and θ1 when sample images with ground truth are available. In this case, in training, a plurality of grayscale statistics (such as mean intensity, minimum intensity, maximum intensity, the intensity at a specified percentile, or the average intensity within a specified percentile) are calculated for a plurality of training images. The optimum grayscale threshold (i.e., the threshold that yields the smallest segmentation error) is also calculated during training by performing segmentation of each training image using various grayscale thresholds. Then, the one particular statistic is found for which the ratio of the value of that statistic to the optimum grayscale threshold varies in the narrowest range, for the various training images. Thus, statistically, extracting the particular grayscale statistic from a new image to be studied after training (such as the original image F) is expected to produce a value that comes "close" to the optimum threshold.
For example, suppose that the statistic whose ratio to the optimum grayscale threshold was found to vary in the narrowest range turns out to be the "maximum intensity" (denoted rm), and let this narrowest range be denoted [R0, R1]. Then, for a new image to be studied after training (such as the original image F), the upper and lower bounds for the range of grayscales (θ0 and θ1) can be determined by:

θ0 = R0 × rm,  θ1 = R1 × rm.

However, one can also modify [R0, R1] to allow for further variability within the new image. Let this variation (which is application dependent) be denoted δR. In this case, the aforesaid narrowest range [R0, R1] becomes [R0 - δR, R1 + δR]. Then, for a new image to be studied after training (such as the original image F), the upper and lower bounds for the range of grayscales (θ0 and θ1) can be determined by:

θ0 = (R0 - δR) × rm,  θ1 = (R1 + δR) × rm.
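The fourth technique reduces to scaling the image's own statistic by the learned ratio bounds. A minimal sketch follows, assuming the identified statistic is the maximum intensity and that [R0, R1] has been obtained from training; the numeric values in the test are arbitrary:

```python
import numpy as np

def range_from_statistic_ratio(f, r0, r1, delta_r=0.0):
    """Fourth supervision technique (sketch): training is assumed to have
    shown that the ratio bounds [r0, r1] apply to the maximum intensity
    statistic. The (optionally widened) bounds are scaled by the new
    image's maximum intensity r_m to get [theta0, theta1]."""
    r_m = float(np.max(f))
    theta0 = (r0 - delta_r) * r_m
    theta1 = (r1 + delta_r) * r_m
    return theta0, theta1
```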
Step 304: The original image F is modified using the range determined at step 302. That is to say, the grayscales f(x,y) are transformed to modified grayscales fR(x,y) as follows:

fR(x,y) = θ0 if f(x,y) < θ0;  fR(x,y) = f(x,y) if θ0 ≤ f(x,y) ≤ θ1;  fR(x,y) = θ1 if f(x,y) > θ1.

Specifically, each of the pixels p(x,y) is considered, and if the corresponding grayscale f(x,y) is within the range [θ0, θ1], then the corresponding modified grayscale fR(x,y) is unchanged, i.e., fR(x,y) = f(x,y). However, outside the range [θ0, θ1], a modification is made to the grayscales f(x,y). In particular, if the corresponding grayscale f(x,y) is below θ0, then the corresponding modified grayscale fR(x,y) is set to θ0, while if the corresponding grayscale f(x,y) is above θ1, then the corresponding modified grayscale fR(x,y) is set to θ1. The outcome of step 304 is therefore a modified image FR composed of the pixels p(x,y) having modified grayscales fR(x,y).
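Step 304 is exactly a clipping operation, so it can be sketched with a single NumPy call (array values and bounds below are illustrative):

```python
import numpy as np

def constrain_to_range(f, theta0, theta1):
    """Step 304 sketch: grayscales below theta0 are raised to theta0,
    grayscales above theta1 are lowered to theta1, and values already
    inside [theta0, theta1] are left unchanged."""
    return np.clip(np.asarray(f), theta0, theta1)

F = np.array([[5, 60, 130],
              [90, 250, 100]])
F_R = constrain_to_range(F, 50, 200)   # the modified image
```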
Step 306: The modified image FR is processed to identify a subset of the pixels p(x,y) as transition pixels. Transition pixels are pixels belonging to a transition region, which according to certain definitions can be viewed as a region having the following properties:
- forms the boundary between the object and the background, and circumscribes the object;
- has a nonzero area; and
- exhibits frequent grayscale changes.

Various image processing techniques can be used to identify the transition regions of the modified image FR. These include "local entropy" and "maximized effective average gradient" techniques, both of which are described herein below. Other techniques can of course be used without departing from the scope of the present invention.
A. Local Entropy
Firstly, the "local entropy image" LE of the modified image FR is calculated as:

LE(x,y) = - Σj=0..L-1 Pj log Pj,

where Pj = mj / (Mk × Nk), and mj is the number of pixels with grayscale r(j) in a "neighborhood" of pixel p(x,y) defined by a rectangle centered at pixel p(x,y) and of size Mk times Nk. Although satisfactory results tend to be achieved with rectangle sizes in the range of 7x7 to 15x15, it is within the scope of the present invention to use rectangle sizes outside this range and having square or non-square dimensions.
Secondly, a threshold T(LE) is determined that will select which pixels of the local entropy image LE correspond to transition pixels. Here T(LE) is determined as:

T(LE) = a × LE(x,y)max,

where a is a constant in the range [0.6, 1), empirically taken as 0.7, and LE(x,y)max is the maximum local entropy over all the pixels. Of course, a may have a different value, or may be constrained to a different range of values, without departing from the scope of the present invention. Also, a may be selected according to any suitable procedure, including automatically or manually.
Thirdly, an image of transition pixels ITR having grayscales ITR(x,y) is determined through thresholding of the modified image FR based on T(LE):

ITR(x,y) = fR(x,y) if LE(x,y) ≥ T(LE), and ITR(x,y) = 0 otherwise.
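The three local-entropy steps above can be sketched as follows. This is an illustrative implementation, not the patented one: it assumes integer grayscales, uses a small default window for brevity, and simply truncates neighborhoods at the image border instead of handling them specially:

```python
import numpy as np

def local_entropy_transition(f_r, win=3, a=0.7):
    """Local-entropy sketch: for each pixel, histogram its win x win
    neighborhood, compute LE(x,y) = -sum p_j log p_j, then keep as
    transition pixels those whose local entropy reaches a * max(LE).
    Assumes f_r holds non-negative integer grayscales."""
    f_r = np.asarray(f_r)
    h, w = f_r.shape
    r = win // 2
    le = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = f_r[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            counts = np.bincount(patch.ravel())
            p = counts[counts > 0] / patch.size   # nonzero P_j only
            le[y, x] = -np.sum(p * np.log(p))
    t_le = a * le.max()                           # T(LE)
    return np.where(le >= t_le, f_r, 0)           # image of transition pixels ITR
```

On a toy image that is 0 on the left and 9 on the right, only the two columns straddling the boundary have high entropy, so the surviving transition pixels trace that boundary.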
B. Maximized Effective Average Gradient
The effective average gradient (EAG) can be defined as the average gradient of all pixels with a non-zero gradient:

EAG = TG / TQ,

where TG = Σx,y g(x,y), TQ = Σx,y q(x,y), and q(x,y) = 1 if g(x,y) > 0 while q(x,y) = 0 otherwise. Here, g(x,y) is the magnitude of any suitable gradient operator at pixel p(x,y), including but not limited to the Sobel operator. The Sobel operator uses two 3x3 kernels which are convolved with the modified image FR to calculate approximations of two derivatives, one for horizontal changes and one for vertical changes. The Pythagorean sum of the two derivatives yields the magnitude of the gradient. Other operators that may be suitable include Roberts Cross, Prewitt, Canny, Marr-Hildreth, as well as techniques based on zero-crossings of the second-order derivative in the gradient direction. To reduce the influence of noise, a clip transformation is introduced, which is a function of both the pixel position (x,y) and a clip grayscale c, as defined below:

fc,low(x,y) = fR(x,y) if fR(x,y) ≥ c, and fc,low(x,y) = c otherwise;
fc,high(x,y) = fR(x,y) if fR(x,y) ≤ c, and fc,high(x,y) = c otherwise.
The effective average gradients of fc,low(x,y) and fc,high(x,y) are functions of the clip grayscale c and are denoted EAGlow(c) and EAGhigh(c), respectively. Two quantities, denoted θlow and θhigh, are then taken as the clip grayscales that maximize these functions:

θlow = arg maxc { EAGlow(c) },  θhigh = arg maxc { EAGhigh(c) }.
Finally, the image of transition pixels ITR having grayscales ITR(x,y) is determined through thresholding of the modified image FR based on θlow and θhigh as follows:

ITR(x,y) = fR(x,y) if θlow ≤ fR(x,y) ≤ θhigh, and ITR(x,y) = 0 otherwise.
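The maximized-EAG search can be sketched as below. For brevity this illustration uses `np.gradient` as a simple stand-in for the Sobel operator named in the text, and searches clip grayscales only over the values present in the image; it is a sketch, not the specified implementation:

```python
import numpy as np

def eag(img):
    """Effective average gradient: mean gradient magnitude over pixels
    with non-zero gradient. np.gradient stands in for Sobel here."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy)
    nz = g > 0
    return g[nz].mean() if nz.any() else 0.0

def eag_transition(f_r):
    """Maximized-EAG sketch: clip the image from below (low) and from
    above (high) at each candidate grayscale c, take the c maximizing
    each EAG curve as theta_low / theta_high, and keep as transition
    pixels those with grayscale in [theta_low, theta_high]."""
    f_r = np.asarray(f_r, dtype=float)
    cs = np.unique(f_r)
    theta_low = max(cs, key=lambda c: eag(np.maximum(f_r, c)))
    theta_high = max(cs, key=lambda c: eag(np.minimum(f_r, c)))
    return np.where((f_r >= theta_low) & (f_r <= theta_high), f_r, 0)
```

On a toy image with slightly noisy background (grayscales 0 and 1), a transition grayscale of 5, and a slightly noisy object (9 and 10), the clip search settles on θlow = 1 and θhigh = 9, suppressing the pure background and pure object extremes.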
Step 308: The image of transition pixels ITR is processed to determine the threshold T(F) for thresholding of the original image F. For example, one can take the average grayscale of all non-zero pixels of the image of transition pixels ITR as follows:
T(F) = [ Σ ITR(x,y) ] / [ Σ 1 ],

where both sums are taken over all (x,y) such that ITR(x,y) > 0. One will appreciate that the threshold T(F) can also be calculated by analyzing the intensity histogram of the image of transition pixels ITR, such as setting the threshold T(F) to be the grayscale that maximizes the between-class grayscale variance of ITR(x,y) over its pixels p(x,y). Specifically, for all non-zero ITR(x,y), find the maximum and minimum grayscales and denote them as rmax and rmin, respectively. Let r(k) ∈ (rmin, rmax); then each of the pixels of the image of transition pixels ITR is classified as background when its grayscale ITR(x,y) falls within [rmin, r(k)), and as object otherwise. The probabilities of the background and the object are P0 and P1, calculated below:
P0(r(k)) = Σi=min..k-1 h(i),  P1(r(k)) = 1 - P0(r(k)),

where h(i) denotes the frequency of occurrence of grayscale r(i) among the non-zero pixels of ITR, and the index min corresponds to rmin. The mean grayscales of the background, of the object, and of all the transition pixels are given by:

m0(r(k)) = [ Σi=min..k-1 r(i) × h(i) ] ÷ P0(r(k)),
m1(r(k)) = [ Σi=k..max r(i) × h(i) ] ÷ P1(r(k)),
m = Σi=min..max r(i) × h(i).

The between-class variance σB²(r(k)) is then given by:

σB²(r(k)) = P0(r(k)) × (m0(r(k)) - m)² + P1(r(k)) × (m1(r(k)) - m)².
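Both step-308 variants (average of the transition grayscales, and the Otsu-style between-class variance search) can be sketched together; the function and parameter names are illustrative:

```python
import numpy as np

def threshold_from_transition(itr, method="mean"):
    """Step 308 sketch: derive T(F) from the non-zero transition pixels,
    either as their average grayscale or as the grayscale r(k) that
    maximizes the between-class variance sigma_B^2 (Otsu-style)."""
    v = np.asarray(itr).ravel()
    v = v[v > 0]                         # non-zero transition grayscales only
    if method == "mean":
        return v.mean()
    best_t, best_var = v.min(), -1.0
    for t in np.unique(v)[1:]:           # candidate thresholds r(k) in (rmin, rmax]
        lo, hi = v[v < t], v[v >= t]     # background / object split
        p0, p1 = lo.size / v.size, hi.size / v.size
        m0, m1, m = lo.mean(), hi.mean(), v.mean()
        var = p0 * (m0 - m) ** 2 + p1 * (m1 - m) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```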
The threshold for use in the thresholding operation at step 206 can then be the one which maximizes the between-class variance, in which case it is computed as T(F) = arg max over r(k) of σB²(r(k)).

Based on the above description, it will be noted that, as an effect of having modified the original image F at step 304, the intensity variations of any pixels that do not have a grayscale within the range [θ0, θ1] are effectively smoothed out. This means that the identification of transition pixels at step 306 is not biased by regions of pixels that would have given the semblance of transition regions, were it not for the a priori knowledge that their grayscales needed to be constrained to a certain range in order to qualify as transition regions. As a result, regions where merely the background, or merely the object, tends to vary in grayscale will not qualify as transition regions under step 306, and therefore ultimately will not end up influencing selection of the threshold T(F) at step 308.
Thus, embodiments of the present invention may be used in applications where one encounters the need or desire to extract an object with multiple regions of different grayscale ranges, and/or to suppress variations in the background. The present method may therefore be suitable in pathology detection, such as infarct detection from diffusion-weighted imaging (DWI) images for stroke quantification. Of course, the present methods may be suitable for processing other types of images, both medical and non-medical, including CT, MRI, X-ray, ultrasound, radar and natural scenes, to name a few non-limiting possibilities.
Persons skilled in the art should appreciate that the term "computer-readable medium" as used herein refers to any medium accessible by a computing apparatus (such as the computer 100), which can be removable or non-removable, volatile or non-volatile, and may be embodied as any magnetic, optical, or solid state storage device or combination of devices, or as any other medium or combination of media which may encode or otherwise store computer-executable instructions and/or data.
While specific embodiments of the present invention have been described and illustrated, it will be apparent to those skilled in the art that numerous modifications and variations can be made without departing from the scope of the invention as defined in the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A method of processing an original image comprising a plurality of pixels having respective intensity values, said method comprising:
- determining a range of intensity values based on processing of the original image;
- creating a modified image by changing the intensity value of those pixels in the original image having an intensity value outside said range to either an upper or a lower bound for said range;
- identifying a subset of the pixels in the modified image as transition pixels based on the intensity values of the pixels in the modified image; and
- determining, based on the intensity values of the transition pixels, a threshold for thresholding of the original image.
2. The method defined in claim 1, wherein determining said range of intensity values comprises:
- determining an object/background proportion range for the original image, the object/background proportion range being defined by a lower bound and an upper bound; and
- setting said lower bound of the range of intensity values as the lowest intensity value for which a cumulative frequency of occurrence of all intensity values at and below that intensity value is greater than the lower bound of the object/background proportion range; and
- setting said upper bound of the range of intensity values as the lowest intensity value for which the cumulative frequency of occurrence of all intensity values at and below that intensity value is greater than the upper bound of the object/background proportion range.
3. The method defined in claim 2, wherein determining the object/background proportion range for the original image is achieved through training.
4. The method defined in claim 2, wherein determining the object/background proportion range for the original image is achieved through approximation based on prior knowledge.
5. The method defined in claim 2, wherein determining the object/background proportion range for the original image is achieved through approximation using visual judgement.
6. The method defined in claim 1, wherein determining said range of intensity values comprises:
- selecting an object pixel in the original image, said object pixel having a first intensity;
- selecting a background pixel in the original image, said background pixel having a second intensity different from the first intensity;
- setting said lower bound of the range of intensity values as the lower of the first and second intensities; and
- setting said upper bound of the range of intensity values as the higher of the first and second intensities.
7. The method defined in claim 1, wherein determining said range of intensity values comprises:
- estimating an intensity range for a background portion of the original image;
- estimating an intensity range for an object portion of the original image;
- setting said lower bound of the range of intensity values as an average of the intensity range for the background portion;
- setting said upper bound of the range of intensity values as an average of the intensity range for the object portion.
8. The method defined in claim 7, wherein estimating the intensity range for the background portion and estimating the intensity range for the object portion comprises applying a fuzzy c-means clustering algorithm to the original image.
9. The method defined in claim 8, wherein determining said range of intensity values comprises:
- performing segmentation on a plurality of training images with a plurality of training thresholds;
- determining a particular training threshold that minimizes error during said segmentation of the training images, such training threshold being referred to as an optimum threshold;
- for each of a plurality of intensity statistics, and over the plurality of training images, determining a range to which are confined variations of a ratio of the value of that statistic to the optimum threshold;
- identifying which of the intensity statistics yields the narrowest said range, over the plurality of training images;
- setting said lower bound of the range of intensity values to a function of (I) a lower bound of the narrowest said range and (II) the value of the identified intensity statistic; and
- setting said upper bound of the range of intensity values as a function of (I) an upper bound of the narrowest said range and (II) the value of the identified intensity statistic.
10. The method defined in claim 9, wherein the plurality of intensity statistics includes a mean intensity, a minimum intensity, a maximum intensity, an intensity at at least one specified percentile, and a mean intensity within at least one specified percentile.
11. The method defined in claim 9, wherein said function of (I) the lower bound of the narrowest said range and (II) the value of the identified intensity statistic is the product.
12. The method defined in claim 11, wherein said function of (I) the upper bound of the narrowest said range and (II) the value of the identified intensity statistic is the product.
13. The method defined in claim 9, wherein said function of (I) the lower bound of the narrowest said range and (II) the value of the identified intensity statistic is the product of the value of the identified intensity statistic and a downwardly adjusted value of the lower bound of the narrowest said range.
14. The method defined in claim 13, wherein said function of (I) the upper bound of the narrowest said range and (II) the value of the identified intensity statistic is the product of the value of the identified intensity statistic and an upwardly adjusted value of the upper bound of the narrowest said range.
15. The method defined in claim 1, wherein changing the intensity value of those pixels in the original image having an intensity value outside said range to either said upper or said lower bound for said range comprises changing the intensity value of those pixels in the original image having an intensity value less than said lower bound to said lower bound.
16. The method defined in claim 15, wherein changing the intensity value of those pixels in the original image having an intensity value outside said range to either said upper or said lower bound for said range comprises changing the intensity value of those pixels in the original image having an intensity value greater than said upper bound to said upper bound.
17. The method defined in claim 1, wherein identifying the subset of the pixels in the modified image as transition pixels comprises applying a local entropy algorithm to the modified image.
18. The method defined in claim 1, wherein identifying the subset of the pixels in the modified image as transition pixels comprises applying a maximum effective average gradient algorithm to the modified image.
19. The method defined in claim 1, wherein determining the threshold for thresholding of the original image comprises determining an average intensity value of the transition pixels.
20. The method defined in claim 1, wherein determining the threshold for thresholding of the original image comprises determining a particular intensity value that maximizes between-class variance of the transition pixels.
21. The method defined in claim 1, wherein determining the threshold for thresholding of the original image comprises:
- choosing an intensity value between a minimum intensity value exhibited by the transition pixels and a maximum intensity value exhibited by the transition pixels;
- determining a first likelihood that the intensity value of a given one of the transition pixels is smaller than the chosen intensity value;
- determining a second likelihood complementary to the first likelihood;
- computing a first mean intensity for the transition pixels whose intensity values are below the chosen intensity value;
- computing a second mean intensity for the transition pixels whose intensity values are not below the chosen intensity value; and
- computing an overall mean intensity for all the transition pixels;
- computing a between-class variance for the chosen intensity value based on the first likelihood, the second likelihood, the first mean intensity, the second mean intensity and the overall mean intensity;
- repeating the above steps for other intensity values between the minimum intensity value exhibited by the transition pixels and the maximum intensity value exhibited by the transition pixels; and
- selecting as said threshold the one chosen intensity value that maximizes said between-class variance.
22. The method defined in claim 1, further comprising thresholding the original image based on the threshold.
23. The method defined in claim 22, wherein thresholding the original image comprises changing the intensity value of those pixels in the original image having an intensity value less than said threshold to a default value.
24. The method defined in claim 23, wherein the default value is a minimum intensity value.
25. The method defined in claim 1, wherein the original image is the result of exposing a subject's body part to penetrating radiation.
26. The method defined in claim 25, wherein the body part comprises the brain.
27. A computer-readable medium comprising computer-readable program code which, when interpreted by a computing apparatus, causes the computing apparatus to execute a method of processing an original image comprising a plurality of pixels having respective intensity values, the computer-readable program code comprising:
- first computer-readable program code for causing the computing apparatus to determine a range of intensity values based on processing of the original image;
- second computer-readable program code for causing the computing apparatus to create a modified image by changing the intensity value of those pixels in the original image having an intensity value outside said range to either an upper or a lower bound for said range;
- third computer-readable program code for causing the computing apparatus to identify a subset of the pixels in the modified image as transition pixels based on the intensity values of the pixels in the modified image; and
- fourth computer-readable program code for causing the computing apparatus to determine, based on the intensity values of the transition pixels, a threshold for thresholding of the original image.
28. Apparatus for processing an original image comprising a plurality of pixels having respective intensity values, said apparatus comprising:
- means for determining a range of intensity values based on processing of the original image;
- means for creating a modified image by changing the intensity value of those pixels in the original image having an intensity value outside said range to either an upper or a lower bound for said range;
- means for identifying a subset of the pixels in the modified image as transition pixels based on the intensity values of the pixels in the modified image; and
- means for determining, based on the intensity values of the transition pixels, a threshold for thresholding of the original image.
29. A computing apparatus, comprising:
- an input configured to receive an original image comprising a plurality of pixels having respective intensity values;
- a processing entity configured to:
- determine a range of intensity values based on processing of the original image;
- create a modified image by changing the intensity value of those pixels in the original image having an intensity value outside said range to either an upper or a lower bound for said range;
- identify a subset of the pixels in the modified image as transition pixels based on the intensity values of the pixels in the modified image;
- determine a threshold value based on the intensity values of the transition pixels; and
- threshold said original image based on said threshold value;
- an output configured to output the thresholded image.
PCT/SG2007/000273 2006-08-24 2007-08-24 Methods, apparatus and computer-readable media for image segmentation WO2008024081A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US83971206P 2006-08-24 2006-08-24
US60/839,712 2006-08-24

Publications (1)

Publication Number Publication Date
WO2008024081A1 true WO2008024081A1 (en) 2008-02-28

Family

ID=39107076

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2007/000273 WO2008024081A1 (en) 2006-08-24 2007-08-24 Methods, apparatus and computer-readable media for image segmentation

Country Status (1)

Country Link
WO (1) WO2008024081A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110544009A (en) * 2019-07-26 2019-12-06 中国人民解放军海军航空大学青岛校区 Aviation organic coating aging damage quantitative evaluation method based on digital image processing

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6335980B1 (en) * 1997-07-25 2002-01-01 Arch Development Corporation Method and system for the segmentation of lung regions in lateral chest radiographs
US20020186874A1 (en) * 1994-09-07 2002-12-12 Jeffrey H. Price Method and means for image segmentation in fluorescence scanning cytometry
US20040208367A1 (en) * 2003-04-21 2004-10-21 Yong-Shik Shin Method for finding optimal threshold for image segmentation
WO2005057493A1 (en) * 2003-12-10 2005-06-23 Agency For Science, Technology And Research Methods and apparatus for binarising images
US20060050958A1 (en) * 2004-09-09 2006-03-09 Kazunori Okada System and method for volumetric tumor segmentation using joint space-intensity likelihood ratio test
US20060245649A1 (en) * 2005-05-02 2006-11-02 Pixart Imaging Inc. Method and system for recognizing objects in an image based on characteristics of the objects
US20060245652A1 (en) * 2005-05-02 2006-11-02 Pixart Imaging Inc. Method for recognizing objects in an image without recording the image in its entirety

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110544009A (en) * 2019-07-26 2019-12-06 中国人民解放军海军航空大学青岛校区 Aviation organic coating aging damage quantitative evaluation method based on digital image processing
CN110544009B (en) * 2019-07-26 2022-12-09 中国人民解放军海军航空大学青岛校区 Aviation organic coating aging damage quantitative evaluation method based on digital image processing

Similar Documents

Publication Publication Date Title
Coupé et al. Fast non local means denoising for 3D MR images
Lopez-Molina et al. Multiscale edge detection based on Gaussian smoothing and edge tracking
US6310967B1 (en) Normal and abnormal tissue identification system and method for medical images such as digital mammograms
CN110678903B (en) System and method for analysis of ectopic ossification in 3D images
US7519207B2 (en) Detection and correction method for radiograph orientation
Yang Multimodal medical image fusion through a new DWT based technique
US20090279778A1 (en) Method, a system and a computer program for determining a threshold in an image comprising image values
EP1501048A2 (en) Method of segmenting a radiographic image into diagnostically relevant and diagnostically irrelevant regions
El-Zaart Images thresholding using ISODATA technique with gamma distribution
DE102006017114A1 (en) Refined segmentation of nodes for computer-aided diagnosis
Lecron et al. Descriptive image feature for object detection in medical images
Zhong et al. A semantic no-reference image sharpness metric based on top-down and bottom-up saliency map modeling
Goceri Automatic kidney segmentation using Gaussian mixture model on MRI sequences
Singh et al. Image segmentation for automatic particle identification in electron micrographs based on hidden Markov random field models and expectation maximization
Mukherjee et al. Variability of Cobb angle measurement from digital X-ray image based on different de-noising techniques
US7020343B1 (en) Method and apparatus for enhancing discrete pixel images by analyzing image structure
Qureshi et al. An information based framework for performance evaluation of image enhancement methods
WO2008024081A1 (en) Methods, apparatus and computer-readable media for image segmentation
Sachin et al. Brain tumor detection based on bilateral symmetry information
Sampat et al. Classification of mammographic lesions into BI-RADS shape categories using the beamlet transform
Dharmagunawardhana et al. Quantitative analysis of pulmonary emphysema using isotropic Gaussian Markov random fields
EP2005389B1 (en) Automatic cardiac band detection on breast mri
Brunnström et al. On scale and resolution in active analysis of local image structure
Gering Diagonalized nearest neighbor pattern matching for brain tumor segmentation
Ogul et al. Unsupervised rib delineation in chest radiographs by an integrative approach

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07808906

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 07808906

Country of ref document: EP

Kind code of ref document: A1