AU2008215155A1 - An edge detection method and a particle counting method - Google Patents


Info

Publication number
AU2008215155A1
Authority
AU
Australia
Prior art keywords
edge
pixels
target pixel
array
intensity value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2008215155A
Inventor
Neva Bull
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Newcastle Innovation Ltd
Original Assignee
Newcastle Innovation Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2007900707A external-priority patent/AU2007900707A0/en
Application filed by Newcastle Innovation Ltd filed Critical Newcastle Innovation Ltd
Priority to AU2008215155A priority Critical patent/AU2008215155A1/en
Publication of AU2008215155A1 publication Critical patent/AU2008215155A1/en
Abandoned legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Description

WO 2008/098284 PCT/AU2008/000165

AN EDGE DETECTION METHOD AND A PARTICLE COUNTING METHOD

FIELD OF THE INVENTION

An aspect of the present invention relates to image processing for edge detection. Embodiments of this aspect of the invention find application, though not exclusively, in the decomposition of complex waveforms.

Another aspect of the present invention relates to the counting of particles. Embodiments of this aspect of the invention find application, though not exclusively, in the medical field for use in the counting of particles in signal images, for example the counting of emboli in signal images generated by the use of a transcranial Doppler machine to analyse a patient's blood flow.

BACKGROUND OF THE INVENTION

Various computer implemented algorithms for the automated detection of edges within an image are known. However, some of the prior art edge detection algorithms may be excessively complex for some applications. Other prior art edge detection algorithms may be excessively processor intensive and therefore may run slowly on some computing platforms.

Transcranial Doppler machines (TCD machines) work by transmitting sound above the auditory range into a patient with enough power output to travel through the cranium and into the brain. Signals are reflected within the patient and detected by a probe. It is possible to analyse a specific depth within the patient by assuming a mean speed of sound in tissue and waiting a corresponding time period before listening to echoes reflected back to the probe. As the transmission frequency is known, Doppler effects are used to calculate the velocity, relative to the ultrasound beam, of the object within the patient that reflected the sound. This relative velocity is proportional to the difference between the transmitted and received frequencies.
The actual velocity of the object within the patient that reflected the sound is the relative velocity multiplied by the cosine of the angle of insonation.

Blood is made up of numerous cells, which typically travel with a range of velocities dependent on the turbulence of the blood flow at the point of interest. Therefore, when analysing blood flow using a TCD machine, the reflected signal received by the probe is typically complex. Known fast Fourier transform techniques are typically performed on the received waveform in order to decompose the signal into the set of velocities from which it was formed. These may be displayed visually with velocity on the y-axis and time on the x-axis. The intensity of the signal is typically shown as different colours, usually with white as the most intense signal.

High Intensity Transients (HITS) are short, intense signals that may be generated by particulate material travelling through the Doppler sample gate. In a typical TCD machine output, HITS may be recorded individually, or in large groups that are referred to as "flurries", "showers" or "curtains". Such events may overload the Doppler spectra, producing a "white-out" of signal.

For some medical applications it is advantageous to quantify the number of HITS within the signal generated by TCD machines. The known prior art methods for doing so focus analysis upon the sound signal received at the probe. Using such prior art techniques, it is typically only possible to detect single particles, "flurries", "showers" or "curtains". However, using this prior art system the number of particles that may have been present in any given "flurry", "shower" or "curtain" cannot be determined. This limits the effectiveness of the TCD machine as a prognostic tool.
SUMMARY OF THE INVENTION

It is an object of the present invention to overcome, or substantially ameliorate, one or more of the disadvantages of the prior art, or to provide a useful alternative.

In accordance with a first aspect of the invention there is provided a computer implemented method of processing a target pixel of an image to designate whether the target pixel defines an edge exceeding a threshold intensity value, the target pixel being adjacent eight neighbouring pixels and each of the target and neighbouring pixels having a respective intensity value, the method including the steps of: determining whether the respective intensity values of the target pixel and each of the neighbouring pixels exceed the threshold intensity value; and designating that the target pixel defines an edge if the intensity value of the target pixel and the respective intensity values of no more than six of the neighbouring pixels exceed the threshold intensity value; otherwise, designating that the target pixel does not define an edge.

Preferably the method further includes the step of defining a respective integral value for the target pixel and each of the neighbouring pixels, wherein each of said respective integral values is dependent upon whether the respective intensity value of the respective pixel exceeds the threshold intensity value.

In a preferred embodiment a threshold edge detection value is defined. In this embodiment, the respective integral values are selected such that an aggregate value calculated from a summation of the respective integral values is greater than, or equal to, the threshold edge detection value only if the respective intensity values of the target pixel and no more than six of the neighbouring pixels exceed the threshold intensity value.

In a preferred embodiment at least some steps of the method are performed in accordance with a cellular neural network processing model.
In this preferred embodiment, the cellular neural network processing model defines a receptive field having a two dimensional array of three rows and three columns, the receptive field receiving as input the respective intensity values of the target pixel and the eight neighbouring pixels.

In accordance with a second aspect of the present invention there is provided a computer implemented method of processing an image having a plurality of pixels so as to designate at least one edge exceeding a threshold intensity value, each of the pixels having a respective intensity value, said method including the steps of: applying a method as described above to a target pixel selected from amongst the plurality of pixels and storing a resultant edge designation for the target pixel; and redefining the target pixel from amongst the plurality of pixels and repeating the preceding step until edge designations for substantially all of the pixels have been completed.

Preferably the resultant edge determinations are stored in a two-dimensional Boolean array, which in some embodiments is displayable as an image.
According to a third aspect of the invention there is provided a computer implemented method of inferring a particle count from a signal image having a plurality of pixels, each of the pixels having a respective intensity value, said method including the steps of: processing the signal image using an edge designation algorithm so as to designate edges exceeding a threshold intensity value; formulating an array representative of the edges; processing the array so as to detect peaks defined by said edges; counting the number of peaks; and using a correlation scheme responsive to the number of peaks to infer a particle count.

Preferably the edge designation algorithm is a computer implemented method as described above with reference to the first and/or second aspect of the invention.

In one preferred application of the invention the signal image is derived from an analysis of a blood flow within a patient. In particular, the signal image may be the output from a transcranial Doppler machine.

In one preferred embodiment the array is defined within a plurality of memory addresses and/or data storage addresses. In this preferred embodiment the array is two dimensional and is displayable as an image.

Preferably the correlation scheme defines a one-to-one relationship between peaks and particles.

According to a fourth aspect of the invention there is provided a computer readable medium containing computer executable code for instructing a computer to perform the method according to any one of the preceding aspects.

According to a fifth aspect of the invention there is provided a downloadable or remotely executable file or combination of files containing computer executable code for instructing a computer to perform a method according to any one of the preceding aspects.
According to another aspect of the invention there is provided a computing apparatus having a central processing unit, associated memory and storage devices, and input and output devices, said apparatus being configured to perform a method according to any one of the preceding aspects.

Any discussion of documents, acts, materials, devices, articles or the like which has been included in this specification is solely for the purpose of providing a context for the present invention. It is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present invention as it existed in Australia or elsewhere before the priority date of this application.

Throughout this specification the word "comprise", or variations thereof such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.

The features and advantages of the present invention will become further apparent from the following detailed description of preferred embodiments, provided by way of example only, together with the accompanying drawings.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS

Figure 1 is a schematic depiction of an array of nine pixels;
Figure 2 is a sample of a raw image as produced by a TCD machine;
Figure 3 is a first sample of spectral image data extracted from an image produced by a TCD machine;
Figure 4 is a second sample of spectral image data extracted from an image produced by a TCD machine;
Figure 5 is a sample of the edge designation array image arising from an analysis of the spectral sample of figure 3;
Figure 6 is an example of the edge designation array image arising from an analysis of the spectral sample of figure 4;
Figure 7 is an example of an edge designation array;
Figure 8 is an example of a height array;
Figure 9 is a flow chart showing steps performed in an embodiment of the invention from an end-user's perspective; and
Figure 10 is a flow chart showing general processing steps performed in an embodiment of the invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION

A preferred embodiment of the present invention uses a computer to process a target pixel of an image. The image is formed from a square or rectangular array of pixels, each of which has an intensity value lying within the range of 0 to 255 inclusive. The pixels making up the image typically vary in intensity values from regions having a relatively lower intensity value to regions having a relatively higher intensity value. The preferred embodiment aims to designate whether the target pixel lies on an edge at which the intensity starts to exceed a threshold intensity value. This processing is performed with reference to not only the target pixel, but also the adjacent eight neighbouring pixels. That is, the processing takes place with reference to a 3 by 3 array as shown schematically in figure 1, in which the pixels are labeled for ease of reference as pixels 1 to 9.
The target pixel is pixel 5 and its eight neighbouring pixels are pixels 1 to 4 and pixels 6 to 9.

The processor compares the intensity value of the target pixel to the threshold intensity value. If the intensity value of the target pixel is less than the threshold intensity value, then the pixel is designated as not being an edge pixel. However, if the intensity value of the target pixel is greater than the threshold value, the processor proceeds to compare the respective intensity values of the eight neighbouring pixels to the threshold intensity value. If the respective intensity values of no more than six of the eight neighbouring pixels exceed the threshold intensity value, then the target pixel is designated as an edge pixel. Alternatively, if the respective intensity values of seven or eight of the eight neighbouring pixels exceed the threshold intensity value, then the target pixel is designated as not being an edge pixel. The various possibilities are shown below in table 1.

| Condition | Target pixel designation |
| --- | --- |
| The intensity value of the target pixel is less than the threshold intensity value | Not an edge pixel |
| The intensity value of the target pixel is greater than the threshold intensity value and the intensity values of no more than six of the neighbouring pixels exceed the threshold intensity value | An edge pixel |
| The intensity value of the target pixel is greater than the threshold intensity value and the intensity values of more than six of the neighbouring pixels exceed the threshold intensity value | Not an edge pixel |

Table 1

For the following example, the threshold intensity is arbitrarily set to a value of 150. Some arbitrary example intensity values for each of the pixels 1 to 9 are shown on figure 1.
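The comparison logic of table 1 can be sketched in Python as follows. This is an illustrative sketch only: the 3 by 3 intensity values used below are hypothetical stand-ins arranged to match the description (six neighbours above the threshold of 150), not the actual values shown in figure 1.

```python
import numpy as np

def is_edge_pixel(block: np.ndarray, threshold: int) -> bool:
    """Apply the rule of table 1 to the centre pixel of a 3x3 block.

    The centre (target) pixel must exceed the threshold, and no more
    than six of its eight neighbours may also exceed it.
    """
    if block[1, 1] <= threshold:
        return False  # target at or below threshold: not an edge pixel
    neighbours_over = int((block > threshold).sum()) - 1  # exclude the target
    return neighbours_over <= 6

# Hypothetical intensities: the target is 160 and six neighbours exceed 150.
block = np.array([[200, 180, 100],
                  [155, 160,  90],
                  [170, 190, 210]])
print(is_edge_pixel(block, 150))  # True: target over 150, six neighbours over
```

With seven or eight supra-threshold neighbours the same function returns `False`, matching the third row of table 1.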
With reference to these example intensity values, the processor compares the intensity value of target pixel 5, which is 160, to the threshold intensity value, which is 150, and thereby determines that the intensity value of target pixel 5 exceeds the threshold intensity value. Next the processor compares the intensity values of the eight neighbouring pixels 1 to 4 and 6 to 9 and determines that six out of eight of those values exceed the threshold of 150 (i.e. pixels 1, 2, 4, 7, 8 and 9). On this basis, the processor designates the target pixel 5 of this example as an edge pixel.

Another embodiment of the invention implements a cellular neural network processing model. This model has a receptive field in the form of a two dimensional, 3 by 3 array of cells, which corresponds to the 3 by 3 array of pixels centered on the target pixel. The receptive field receives as input the respective intensity values of the corresponding pixels. Each of the cells has an internal state called a "potential", which, in the preferred embodiment, is pre-set to -60V. Cells can receive and send "information", in the form of a positive or negative potential, to their immediate neighbours. This positive or negative value is added to the target cell's internal potential. The potential available to be sent by each of the cells is determined by the processor based upon the position of the cell within the receptive field and upon the intensity values of the corresponding nine pixels. In the preferred embodiment, the values of the resting potentials, and the amounts by which potentials may be varied, are all either positive or negative integral values.

The central cell starts at the resting potential of -60V; however, if the intensity value of the target pixel is greater than the threshold intensity value, then the centre cell can increase its potential by 160V to 100V.
Each of the outer eight cells has the capacity to send a negative potential of 11V to the central cell. This will only occur for any one of the eight outer cells if the intensity value of the corresponding pixel is greater than the threshold intensity value.

Once the eight outer cells have sent their "information" to the central cell, an aggregate value is determined, which is the resultant potential of the central cell. The processor compares the aggregate value against a threshold edge detection value, which in the preferred embodiment is 30V. This determines whether or not the target pixel should be designated as an edge pixel. If the aggregate value is greater than 30V, then the target pixel is designated as an edge pixel. If the potential of the central cell is less than 30V, then the target pixel is designated as not an edge pixel. It will be appreciated by those skilled in the art that the integral values of -60V, 160V, -11V and 30V, as used in the preferred embodiment, have been selected to ensure that if no more than six surrounding cells send a negative value, and the centre cell sends a positive value, then the aggregate value will be above the threshold edge detection value. On the other hand, the aggregate value will be less than the threshold edge detection value in the following circumstances:

• if the intensity value of the target pixel does not exceed the threshold intensity value; and/or
• if the intensity values of more than six of the neighbouring pixels exceed the threshold intensity value.
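Read literally, the potential arithmetic described above reduces to the following sketch, using the patent's constants of -60, +160, -11 and 30 (plain integers stand in for the voltage values; the 3 by 3 block is a hypothetical example, not the figure 1 data):

```python
def cnn_edge_aggregate(block, threshold):
    """Aggregate potential of the centre cell of a 3x3 receptive field.

    Resting potential -60; +160 if the target pixel exceeds the
    threshold; -11 for each supra-threshold neighbour.  The target is
    designated an edge pixel when the aggregate exceeds the edge
    detection threshold of 30.
    """
    potential = -60
    if block[1][1] > threshold:
        potential += 160
    for r in range(3):
        for c in range(3):
            if (r, c) != (1, 1) and block[r][c] > threshold:
                potential -= 11
    return potential

# Six supra-threshold neighbours: -60 + 160 - (6 * 11) = 34, which exceeds 30.
block = [[200, 180, 100],
         [155, 160,  90],
         [170, 190, 210]]
print(cnn_edge_aggregate(block, 150))  # 34
```

Note that when the target pixel does not exceed the threshold the aggregate can never exceed 30, and with seven or eight supra-threshold neighbours it falls to 23 or 12, matching the two bullet points above.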
Advantageously, this cellular neural network approach makes use of integral mathematics, and it will be appreciated by those skilled in the art that computers are adapted to perform integral mathematics at a far higher rate as compared to the processing of floating point arithmetic.

Applying this cellular neural network model to the example pixel intensity values shown in figure 1, the centre cell's potential is increased by 160V from a resting potential of -60V due to the target pixel having an intensity value in excess of the threshold intensity value. However, the centre cell's potential is decreased by 6 multiplied by 11V, since six of the surrounding eight pixels have an intensity value greater than the threshold intensity value. That is, the value of the centre cell's potential (i.e. the aggregate value) is calculated as follows:

-60V + 160V - (6 * 11V) = 34V

Hence, as the aggregate value is greater than the threshold edge detection value of 30V, the processor designates the target pixel to be an edge pixel.

The preferred embodiment may also be used to extract edges that are less than the threshold intensity. One technique for doing so is to start by inverting the image (i.e. transposing high intensity pixels into low intensity pixels and vice versa). Another technique is to alter the mathematics of the analysis so as to focus upon edges that are less than the threshold.

The method described above to process an edge designation for a single target pixel may be used iteratively to process edge designations for all, or substantially all, of the pixels within an image. Once an edge designation for an initial target pixel selected from amongst the plurality of pixels has been completed, the result is stored in a digitally accessible format, such as in the computer's random access memory, or some other digital storage medium. The target pixel is then redefined and the process is repeated.
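This iterative sweep can be sketched as follows. It is a hypothetical implementation, not the patent's own code; it applies the single-pixel rule to every interior pixel and records the results in a Boolean array, leaving the border rows and columns undesignated.

```python
import numpy as np

def edge_designation_array(image: np.ndarray, threshold: int) -> np.ndarray:
    """Sweep the single-pixel edge rule over a whole image.

    Returns a Boolean array of the same shape; the outer rows and
    columns stay 0, as those pixels lack a full complement of
    eight neighbours.
    """
    h, w = image.shape
    edges = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            block = image[y - 1:y + 2, x - 1:x + 2]
            over = int((block > threshold).sum())
            if block[1, 1] > threshold and over - 1 <= 6:
                edges[y, x] = True
    return edges

# A bright square on a dark background: only its outline is designated,
# since interior pixels have all eight neighbours above the threshold.
img = np.zeros((8, 8), dtype=int)
img[2:6, 2:6] = 200
print(edge_designation_array(img, 150).astype(int))
```

Displaying the resulting array with 0 as white and 1 as black reproduces the kind of edge image described in the following paragraphs.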
In the preferred embodiment the re-definition of the target pixel involves indexing along one of the axes of the image, for example the x-axis, and performing an edge designation for each pixel on that axis. Once edge designations are complete for the first row of the x-axis, the processor indexes to the next row of the x-axis and repeats the edge designation process for that row. This process continues along adjacent rows until edge designations for all, or substantially all, of the pixels within the image have been completed. In the preferred embodiment, no edge designations are processed for the pixels in the far top and bottom rows and the far left and right hand columns, as they are not surrounded by a full complement of eight neighbouring pixels.

Each edge determination can have only one of two possible results: either the target pixel is an edge pixel, or it is not. Hence, the results of all edge determinations for a given image may be stored in a two-dimensional Boolean array having substantially the same dimensions as the original image. If the pixel at position (x,y) of the image is designated as an edge pixel, then a value of 1 is stored in the (x,y) position of the Boolean array. Alternatively, if the pixel at position (x,y) of the image is designated as not an edge pixel, then a value of 0 is stored in the (x,y) position of the Boolean array. It will be appreciated by those skilled in the art that this type of two dimensional Boolean array is displayable as an image simply by depicting a 0 value in one colour (for example white) and a 1 value in another colour (for example black). This allows a user to display an image of the edges extracted from the original image.

With reference to figures 2 to 10, another preferred embodiment of the invention is a computer implemented method of inferring a particle count from a signal image.
The general steps performed by the processor in this method are depicted in figure 10. As mentioned previously, TCD images are derived from an interaction of sound waves with blood flow within a patient. The preferred embodiment is particularly adapted for use in analyzing the signal images that are produced by TCD machines so as to infer a count of the number of emboli in the patient's blood that flowed past the TCD sensor during formation of the TCD image. The images comprise a plurality of pixels, each of which has a respective intensity value that is representative of the intensity of the signal received by the probe of the TCD machine.

The processor commences the method by receiving image data 20 from a TCD machine. An example of such image data 20 is shown in figure 2. In one embodiment the processor is programmed to automatically extract the Doppler spectral data portion of this image based upon a known standard positioning of the spectral data within the overall image data. In other embodiments, the user manually inputs the coordinates of the spectral data, for example by using a mouse to point at the edges of the spectral data sections of the overall image. This step in the processing is shown at step 21 of figure 10. In each case, this allows the following processing to focus upon the spectral data and ignore the extraneous data typically also included in a TCD image. Examples of the spectral data portions that are extracted for further analysis are given in figures 3 and 4.

The processor then defines a two-dimensional image intensity array within the computer's random access memory or other data storage means. The size of the image intensity array corresponds to the size of the extracted spectral image 20 such that each of the dimensions of the image intensity array corresponds to the number of pixels in the x and y axes respectively.
In other words, if the extracted spectral image is 300 pixels by 200 pixels, then an image intensity array also having 300 columns and 200 rows is defined within the computer's memory. The intensity values of each of the pixels are then stored in each of the corresponding memory locations of the image intensity array. Some embodiments implement step 22 as shown in figure 10, wherein parameters for correlating a particular colour with an intensity value are pre-set based on a known range of colours in the TCD machine output. Another embodiment utilizes step 23, in which parameters for correlating a particular colour with an intensity value are dynamically calculated based upon an analysis of the spectral image data.

In another embodiment, the computer dispenses with the need for an image intensity array and instead reads the values of the various pixels' intensities directly from the image during the subsequent processing steps. In yet another embodiment, the computer receives the intensity values of each pixel in the signal image in real time as the signal image is being produced by the TCD machine. In such embodiments the computer typically analyses the received data stream to infer a particle count in real time.

The next step is the decomposition of the image into edges at step 24 of figure 10. This commences with the definition of a threshold intensity value that is to be used in the edge detection analysis. One preferred embodiment of the invention provides the operator with an opportunity to select a pixel from the TCD image. This is done with the use of a suitable input device such as a mouse, with which the user points and clicks on the selected image pixel. The intensity of the selected pixel is ascertained by the processor and the threshold intensity value is defined as equal to the ascertained value.
When selecting a suitable pixel for the definition of the threshold image intensity value, the user should select a pixel having an intensity that is representative of the typical type of High Intensity Transient that the method is ultimately seeking to correlate with a particle count.

In another embodiment, the threshold intensity value is calculated automatically from a mathematical analysis of the intensity values of the pixels of the image. For example, in one such embodiment, the computer calculates an image intensity value that is 7 dB above the background noise and the threshold intensity value is defined as the result.

In yet other embodiments of the invention, the computer firstly determines which type of TCD machine produced the signal image that is to be analysed. This determination may be either automated, based upon an analysis of the image, or it may be responsive to user input. The computer then polls a database in which pre-determined threshold values are stored for each of the known TCD machines that are commercially available. In some embodiments the database is stored locally on the computer. In yet other embodiments, the database is remotely accessible, for example via the internet.

Once the threshold intensity value has been established, the processor applies an edge designation algorithm to the data stored within the image intensity array so as to designate edges exceeding the threshold intensity value. In one preferred embodiment the edge designation algorithm is in accordance with the method described above with reference to figure 1. Alternative embodiments make use of prior art edge designation algorithms.

At step 25 the results of the edge designation algorithm are stored in a two dimensional edge designation array in which the values are representative of the edges. The size of this edge designation array substantially corresponds to the size of the original signal image.
More particularly, this edge designation array has two fewer rows and two fewer columns than the original signal image, since the outer-most rows and columns of the signal image are not processible using the edge designation algorithm described above. The values within the edge designation array are defined within a plurality of memory addresses and/or data storage addresses. The values stored in this edge designation array form a Boolean data set in which the value 0 is associated with a pixel that is not designated as an edge and the value 1 is associated with a pixel that is designated as an edge. The state of this edge designation array may be represented graphically at step 26 by displaying an array of pixels corresponding to the edge designation array, in which edge designation array positions having a 0 value are shown in a light colour and edge designation array positions having a 1 value are shown in a dark colour. Examples of displays arising from such edge designation arrays are shown in figures 5 and 6. The display shown in figure 5 corresponds to the edges designated from the signal image shown in figure 3. Similarly, the display shown in figure 6 corresponds to the edges designated from the signal image shown in figure 4.

At step 27 the edge designation array is processed so as to detect the peaks defined by the edges. As the peaks are detected, they are counted at step 28. To provide a worked example of one embodiment of the peak detection and counting process, a sample edge designation array of six columns by five rows is depicted in figure 7. The processor commences a transformation of the two dimensional edge designation array into a one dimensional height array by sampling the value that the edge designation array stores at the first row of the first column. This position of the edge designation array corresponds to the edge designation for the top left corner of the signal image.
The processor then increments along the y coordinates, keeping x constant, until a value of 1, indicating an edge, is found. In other words, the y coordinate of the highest edge depicted in the edge designation array along the line x = 0 is determined. It will be appreciated that the value of the y coordinate is inversely proportional to the height of the edge. The value of this y coordinate is stored in the first position of the one dimensional height array. The x coordinate is then indexed and the y coordinate for the highest edge depicted along the line x = 1 is determined and stored in the one dimensional height array. This process is continued until the y coordinate for the highest edge designation along every column of the edge designation array is determined and stored in the one dimensional height array. Hence, the example edge designation array shown in figure 7 transforms into the following one dimensional height array, with the values in the array being inversely proportional to the height of the respective edge:

{3, 1, 3, 2, 1, 3}

The one dimensional height array is then processed so as to detect and count the peaks. This commences with the definition of a peak counter variable that is initially set to a zero value. The processor indexes its way along the one dimensional height array and, for each position 'n' within the one dimensional array, checks whether the value of the one dimensional array at position 'n-1' is greater than the value of the one dimensional array at position 'n'. If this relationship tests positive (i.e. if the height at the position to the left of position 'n' is lower than the height at position 'n'), then the processor checks whether the value of the one dimensional array at position 'n+1' is greater than the value of the one dimensional array at position 'n'. If this relationship also tests positive (i.e.
if the height at the position to the right of position 'n' is lower than the height at position 'n'), then a peak is detected and the peak counter variable is incremented by one. Hence, the example one dimensional array of {3,1,3,2,1,3} is found to have two peaks (at the second and fifth positions) and the peak counter variable therefore has a value of two at the conclusion of the processing of the one dimensional height array for this example. The peaks that have been detected may be shown in a display at step 29.

Another embodiment of the invention utilizes a cellular neural network to detect and count the peaks. The receptive field is a one dimensional cellular array that is five cells long and the values from five of the positions of the one dimensional height array are provided as input to the receptive field. Each cell has an initial resting potential of -30V. From this resting potential, the potential of the centre cell is increased by 95V to a value of 65V. The potential of the centre cell is then decreased by 13V for each of the surrounding cells that has a value less than, or equal to, the value of the centre cell. If the resultant potential of the centre cell exceeds a peak detection threshold of 40V, then a peak is detected and the peak counter variable is incremented. If the resultant potential of the centre cell is less than the peak detection threshold, then a peak is not detected and the peak counter variable is not incremented.

The processor indexes along the one dimensional height array, applying this cellular neural network approach to each of the positions of the one dimensional height array. For the example one dimensional height array shown in figure 8, the value at only one of the surrounding four positions exceeds that of the central position.
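The neighbour-comparison peak count described above can be sketched as follows. This is an illustrative sketch only; the function name is an assumption, and the worked example reproduces the {3,1,3,2,1,3} array from the text.

```python
# Hedged sketch of the neighbour-comparison peak count described above.
# Heights are stored as y coordinates, so a *smaller* stored value means a
# *taller* edge; a peak is an interior position whose two neighbours both
# hold larger values.

def count_peaks(height_array):
    """Count positions n where height[n-1] > height[n] and height[n+1] > height[n]."""
    peaks = 0
    for n in range(1, len(height_array) - 1):
        if height_array[n - 1] > height_array[n] and height_array[n + 1] > height_array[n]:
            peaks += 1
    return peaks

# The worked example from the text: peaks at the second and fifth positions.
print(count_peaks([3, 1, 3, 2, 1, 3]))  # -> 2
```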
Hence, once these values are fed into the cellular neural network, the calculation for the potential of the central cell proceeds as follows:

-30V + 95V - (1 * 13V) = 52V

Hence, as 52V exceeds the peak detection threshold of 40V, a peak would be detected at the centre position of the height array shown in figure 8. Once again, this implementation of a cellular neural network advantageously uses integral mathematics to speed processing as compared to processing requiring floating point arithmetic. This cellular neural network approach also advantageously tends to smooth ragged peaks and is preferable to simply taking an average, since outlying values do not skew the mean.

Once the number of peaks has been counted, the result is stored in the database at step 30. The processor then uses a correlation scheme responsive to the number of peaks to infer a particle count. In the preferred embodiment the correlation scheme is a one-to-one relationship between the number of peaks and the number of particles. In other words, the number of particles inferred by the preferred embodiment from its analysis of the signal image is equal to the number of peaks counted. However, in other embodiments the correlation scheme involves multiplying the number of peaks by a factor (which may be greater or less than one) to calculate the number of inferred particles. This factor can be used to compensate for the over or under detection of HITS in the signal image.

Figure 9 illustrates the process flow from a user's perspective. The user enters the patient's details at step 40 and these details are stored in the database 41. At step 42 the user identifies the type of TCD machine which prepared the signal images being analysed. This allows the processor to access the appropriate data required to analyse the image data 20.
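The cellular neural network peak test described earlier can be sketched with integer arithmetic as follows. This sketch applies the rule exactly as stated in the text (resting potential -30, centre boost +95, a -13 decrement per surrounding cell whose value is less than or equal to the centre's, threshold 40); the function name and example window are assumptions, not the patent's figure 8 data.

```python
# Hedged integer-arithmetic sketch of the five-cell cellular neural network
# peak test described above. The window holds five values from the one
# dimensional height array, with window[2] as the centre cell. Constants are
# those given in the text; the example input is an assumption.

RESTING, BOOST, DECREMENT, THRESHOLD = -30, 95, 13, 40

def cnn_peak(window):
    """Apply the peak test to a five-value window; returns True if a peak is detected."""
    centre = window[2]
    neighbours = window[:2] + window[3:]
    potential = RESTING + BOOST  # -30V raised by 95V to 65V
    for value in neighbours:
        if value <= centre:  # rule as stated: decrement for each neighbour <= centre
            potential -= DECREMENT
    return potential > THRESHOLD

# One qualifying neighbour gives -30 + 95 - (1 * 13) = 52, which exceeds 40.
print(cnn_peak([1, 3, 2, 3, 3]))  # -> True
```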
At step 43 the user selects the image data files that are to be analysed and at steps 44 and 45 the processor performs the processing and counting of particles. Once this processing is complete, a pointer to the source image, a pointer to the transformed image and the final particle count are stored in the database 41.

Preferred embodiments of the invention may be implemented on a range of computing platforms. The preferred embodiment utilizes a computing apparatus configured to perform the various processing steps. This computing apparatus has a central processing unit (CPU) capable of executing software that is written in at least one of many known programming languages; associated memory, for example RAM and ROM; storage devices such as hard drives, writable CD-ROMs and flash memory; input devices such as a keyboard and mouse; output devices, for example a printer; a display in the form of a screen; and a communications link in the form of a modem. It will be appreciated that the actual platform upon which the invention is implemented will vary depending upon factors such as the amount of processing power required. In some embodiments the computing apparatus is a stand-alone computer, whilst in other embodiments the computing apparatus is formed from a networked array of interconnected computers.

It will be appreciated by those skilled in the art that the present invention may be embodied in computer software in the form of executable code for instructing a computer to perform the inventive method. The software and its associated data are capable of being stored upon a computer-readable medium in the form of one or more compact disks. Alternative embodiments make use of other forms of digital storage media, such as Digital Versatile Discs (DVDs), hard drives, flash memory, Erasable Programmable Read-Only Memory (EPROM), and the like.
Alternatively, the software and its associated data may be stored as one or more downloadable or remotely executable files that are accessible via a computer communications network such as the internet.

While a number of preferred embodiments have been described, it will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the invention without departing from the spirit or scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.
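The stages described in this specification, from edge designation array through height array and peak counting to the particle-count correlation scheme, can be assembled into a single end-to-end sketch. This is an illustrative sketch only; the function names are assumptions, and the handling of a column containing no edge is an assumption the text does not address.

```python
# Hedged end-to-end sketch (not the patent's reference implementation):
# column-wise transformation of a Boolean edge designation array into a one
# dimensional height array, neighbour-comparison peak counting, and a
# correlation scheme mapping peaks to particles (a factor of 1.0 gives the
# preferred one-to-one case described in the text).

def height_array(edges):
    """For each column, record the y coordinate of the highest edge (first 1)."""
    heights = []
    for x in range(len(edges[0])):
        for y in range(len(edges)):
            if edges[y][x]:
                heights.append(y)
                break
        else:
            # Assumption: a column with no edge is treated as minimum height.
            heights.append(len(edges))
    return heights

def count_peaks(heights):
    """A peak is an interior position whose neighbours both hold larger values."""
    return sum(
        1
        for n in range(1, len(heights) - 1)
        if heights[n - 1] > heights[n] < heights[n + 1]
    )

def infer_particles(edges, factor=1.0):
    """Apply the correlation scheme to the counted peaks."""
    return round(count_peaks(height_array(edges)) * factor)

# A six-column array whose height array is {3,1,3,2,1,3}, as in the text.
edges = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 0],
    [1, 0, 1, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
]
print(height_array(edges))     # -> [3, 1, 3, 2, 1, 3]
print(infer_particles(edges))  # -> 2
```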

Claims (4)

THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:

1. A computer implemented method of processing a target pixel of an image to designate whether the target pixel defines an edge exceeding a threshold intensity value, the target pixel being adjacent eight neighbouring pixels and each of the target and neighbouring pixels having a respective intensity value, the method including the steps of: determining whether the respective intensity values of the target pixel and each of the neighbouring pixels exceed the threshold intensity value; and designating that the target pixel defines an edge if the intensity value of the target pixel and the respective intensity values of no more than six of the neighbouring pixels exceed the threshold intensity value; otherwise, designating that the target pixel does not define an edge.

2. A method according to claim 1 further including the step of defining a respective integral value for the target pixel and each of the neighbouring pixels, wherein each of said respective integral values is dependent upon whether the respective intensity value of the respective pixel exceeds the threshold intensity value.

3. A method according to claim 2 further including defining a threshold edge detection value.

4. A method according to claim 3 wherein the respective integral values are selected such that an aggregate value calculated from a summation of the respective integral values is greater than, or equal to, the threshold edge detection value only if the respective intensity values of the target pixel and no more than six of the neighbouring pixels exceed the threshold intensity value.

5. A method according to any one of the preceding claims wherein at least some steps are performed in accordance with a cellular neural network processing model.

6.
A method according to claim 5 wherein the cellular neural network processing model defines a receptive field having a two dimensional array of three rows and three
columns, the receptive field receiving as input the respective intensity values of the target pixel and the eight neighbouring pixels.

7. A computer implemented method of processing an image having a plurality of pixels so as to designate at least one edge exceeding a threshold intensity value, each of the pixels having a respective intensity value, said method including the steps of: applying a method according to any one of claims 1 to 6 to a target pixel selected from amongst the plurality of pixels and storing a resultant edge designation for the target pixel; redefining the target pixel from amongst the plurality of pixels and repeating the preceding step until edge designations for substantially all of the pixels have been completed.

8. A method according to claim 7 wherein the resultant edge determinations are stored in a two-dimensional boolean array.

9. A method according to claim 8 wherein the two dimensional boolean array is displayable as an image.

10. A computer implemented method of inferring a particle count from a signal image having a plurality of pixels, each of the pixels having a respective intensity value, said method including the steps of: processing the signal image using an edge designation algorithm so as to designate edges exceeding a threshold intensity value; formulating an array representative of the edges; processing the array so as to detect peaks defined by said edges; counting the number of peaks; and using a correlation scheme responsive to the number of peaks to infer a particle count.

11. A method according to claim 10 wherein said edge designation algorithm is a computer implemented method according to any one of claims 1 to 10.
12. A method according to claim 10 or 11 wherein the signal image is derived from an analysis of a blood flow within a patient.

13. A method according to claim 12 wherein the signal image is output from a transcranial Doppler machine.

14. A method according to any one of claims 10 to 13 wherein the array is displayable as an image.

15. A method according to any one of claims 10 to 14 wherein the array is defined within a plurality of memory addresses and/or data storage addresses.

16. A method according to any one of claims 11 to 15 wherein the array is two dimensional.

17. A method according to any one of claims 11 to 16 wherein the correlation scheme defines a one-to-one relationship between peaks and particles.

18. A computer-readable medium containing computer executable code for instructing a computer to perform the method according to any one of claims 1 to 17.

19. A downloadable or remotely executable file or combination of files containing computer executable code for instructing a computer to perform a method according to any one of claims 1 to 17.
20. A computing apparatus having a central processing unit, associated memory and storage devices, and input and output devices, said apparatus being configured to perform a method according to any one of claims 1 to 17.
AU2008215155A 2007-02-14 2008-02-11 An edge detection method and a particle counting method Abandoned AU2008215155A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2008215155A AU2008215155A1 (en) 2007-02-14 2008-02-11 An edge detection method and a particle counting method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
AU2007900707A AU2007900707A0 (en) 2007-02-14 An edge detection method and a particle counting method
AU2007900707 2007-02-14
PCT/AU2008/000165 WO2008098284A1 (en) 2007-02-14 2008-02-11 An edge detection method and a particle counting method
AU2008215155A AU2008215155A1 (en) 2007-02-14 2008-02-11 An edge detection method and a particle counting method

Publications (1)

Publication Number Publication Date
AU2008215155A1 true AU2008215155A1 (en) 2008-08-21

Family

ID=39689536

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2008215155A Abandoned AU2008215155A1 (en) 2007-02-14 2008-02-11 An edge detection method and a particle counting method

Country Status (2)

Country Link
AU (1) AU2008215155A1 (en)
WO (1) WO2008098284A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2009270759B2 (en) 2008-07-17 2015-10-01 Luminex Corporation Methods, storage mediums, and systems for configuring classification regions within a classification matrix of an analysis system and for classifying particles of an assay
JP5428272B2 (en) * 2008-10-06 2014-02-26 東ソー株式会社 Survivin mRNA measurement method
WO2012009617A2 (en) 2010-07-16 2012-01-19 Luminex Corporation Methods, storage mediums, and systems for analyzing particle quantity and distribution within an imaging region of an assay analysis system and for evaluating the performance of a focusing routing performed on an assay analysis system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB8701391D0 (en) * 1987-01-22 1987-07-29 British Aerospace Imaging processing techniques
US7136515B2 (en) * 2001-09-13 2006-11-14 Intel Corporation Method and apparatus for providing a binary fingerprint image
EP1569170A4 (en) * 2002-12-05 2007-03-28 Seiko Epson Corp Characteristic region extraction device, characteristic region extraction method, and characteristic region extraction program

Also Published As

Publication number Publication date
WO2008098284A1 (en) 2008-08-21

Similar Documents

Publication Publication Date Title
Kovesi Image features from phase congruency
Handegard et al. Automated tracking of fish in trawls using the DIDSON (Dual frequency IDentification SONar)
Trzcinska et al. Spectral features of dual-frequency multibeam echosounder data for benthic habitat mapping
Abeyratne et al. On modeling the tissue response from ultrasonic B-scan images
CN101052991A (en) Feature weighted medical object contouring using distance coordinates
DE102016100367A1 (en) Sparse tracking in sound beam intensity impulse imaging
Chainais Infinitely divisible cascades to model the statistics of natural images
Innangi et al. High resolution 3-D shapes of fish schools: A new method to use the water column backscatter from hydrographic MultiBeam Echo Sounders
Conolly Spatial interpolation
AU2008215155A1 (en) An edge detection method and a particle counting method
Adams Contour mapping and differential systematics of geographic variation
Kerut et al. Review of methods for texture analysis of myocardium from echocardiographic images: a means of tissue characterization
US11113851B2 (en) Correction of sharp-edge artifacts in differential phase contrast CT images and its improvement in automatic material identification
Roxborough et al. Tetrahedron based, least squares, progressive volume models with application to freehand ultrasound data
Peña Full customization of color maps for fisheries acoustics: Visualizing every target
Fakiris et al. Quantification of regions of interest in swath sonar backscatter images using grey-level and shape geometry descriptors: The TargAn software
Alhadidi et al. cDNA microarray genome image processing using fixed spot position
Amiri et al. Segmentation of ultrasound images based on scatterer density using U-Net
Vicas et al. Usefulness of textural analysis as a tool for noninvasive liver fibrosis staging
Alipoor et al. A novel logarithmic edge detection algorithm
Bons et al. Image analysis of microstructures in natural and experimental samples
JPH0282947A (en) Method for detecting and analyzing skin surface conformation
Matrecano Porous media characterization by micro-tomographic image processing
Hudaib et al. New methodology for microarray spot segmentation and gene expression analysis
Shai et al. Spatial bias removal in microarray images

Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period