WO2016049681A1 - Ultrasound image processing system and method - Google Patents

Ultrasound image processing system and method

Info

Publication number
WO2016049681A1
WO2016049681A1
Authority
WO
WIPO (PCT)
Prior art keywords
intensity value
classification
pixel
image frame
pixels
Prior art date
Application number
PCT/AU2015/000591
Other languages
English (en)
Inventor
Andrew Medlin
Original Assignee
Signostics Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2014903919A
Application filed by Signostics Limited filed Critical Signostics Limited
Publication of WO2016049681A1

Classifications

    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; edge detection involving thresholding
    • G06T2207/10016 Video; image sequence
    • G06T2207/10132 Ultrasound image
    • G06T2207/20156 Automatic seed setting
    • G06T2207/30004 Biomedical image processing

Definitions

  • the present invention relates to an ultrasound imaging system and method for identifying a seed point within an ultrasound image.
  • an embodiment of the present invention may be used to identify a seed point of an anatomical feature, such as a biological organ, represented in an ultrasound image to permit segmentation of that feature.
  • bladder volume estimation via ultrasound scanning aims to estimate the volume of fluid inside the bladder of the patient.
  • Non-invasive bladder volume measurement techniques with ultrasound sonography have been described in the art. These techniques may rely on one or more cross-sectional area measurements of the bladder. To obtain such cross-sectional imaging, an ultrasound beam is swept electronically or mechanically through a cross-section to be imaged. Return echoes are presented as intensity modulated dots on a display, giving the well-known ultrasound sector scan display.
  • Bladder volume may be calculated based on bladder contours obtained in two orthogonal planes with a geometric assumption of bladder shape. For 3-dimensional or volumetric sonography an ultrasound beam has to be swept through the entire organ. This further increases complexity, acquisition time of the data, and costs of the instrument.
  • Hakenberg et al., "The Estimation of Bladder Volume by Sonocystography", J. Urol., Vol. 130, pp. 249-251, reported a simple method for calculating bladder volume based on measuring the diameters of the bladder from a cross sectional image taken along the midline sagittal bladder plane only. These diameters give the height and depth of the bladder at the scan plane.
  • the bladder volume is estimated as the product of the height and depth multiplied by an empirically derived constant.
  • the above approach led to a method of performing one or more two-dimensional diagnostic ultrasound 'B' scans to produce images of one or more cross sections through the structure whose volume is of interest, such as the bladder, and then to make several standard reference measurements of that imaged structure which are then inserted into a formula to estimate the cross sectional area or volume as required.
  • a transverse and a longitudinal (sagittal) scan are recorded, and the height and width of the transverse image and the depth of the longitudinal one are manually measured, then multiplied together to produce a measure of the volume.
  • a scaling constant is usually also included within the calculation which then crudely models the volume of an oblate ellipsoid.
  • This crude model may have large inaccuracies since a bladder varies greatly in shape.
  • a single individual's bladder shape will vary according to the degree of filling, most closely approximating the model when significantly full. Between individuals, the shape will vary depending on a number of factors, which may change the actual bladder shape or the apparent shape as shown by an ultrasound scan. The presence or absence of a uterus will also change the shape, as will the prostate.
  • pathology of the bladder including haematoma, or of the surrounding organs, which may distort the bladder, will also affect the bladder shape.
  • Another method and apparatus for determining the volume of an organ is described in WO 2010/066007.
  • the method disclosed in WO 2010/066007 may be applied to a single, static, greyscale B-mode ultrasound image after scanning has been completed for segmentation of a representation of a bladder in an ultrasound image.
  • the segmentation could be initiated by the user selecting to insert a bladder tag, then manually tapping or clicking on the image to nominate a "seed" point about which the segmentation would be centred.
  • the manual selection of the seed point is thus essential to the operation of the segmentation algorithm.
  • WO 2010/066007 uses a series of radial lines originating at the seed point, sampling the source image along such radial lines, processing this data in radial coordinates to enhance edge information, and then using a Kalman Filter to track the perceived bladder edge location around a circumference until a closed loop is formed representing the bladder outline. Since the segmentation algorithm requires a seed point nominated manually by a knowledgeable user, the accurate selection of a seed point is an essential pre-requisite for segmentation. Moreover, the method of WO 2010/066007 cannot function in real-time during scanning.
  • An aspect of the present invention provides an ultrasound imaging processing system and method for processing image information to detect a seed point for a segmentation process.
  • a method of processing ultrasound images to select a seed point is provided, including:
  • a seed point is selected from the one or more candidate seed points depending on selection criteria.
  • the received image frame is a downsampled source image frame.
  • adjusting the intensity value for each pixel depending on a classification of the respective intensity value includes: processing the received image frame to determine plural classification bands;
  • Processing the received image frame to determine plural classification bands may include:
  • assigning an adjusted intensity value to each pixel according to its classification includes transforming the intensity value of each pixel value according to a predefined function based on the respective pixel's classification, wherein each classification band has an associated predefined function for transforming the pixel intensity value.
  • an image processing system for processing an ultrasound image frame including an active area including a plurality of pixels, each pixel having a respective intensity value, the system including:
  • a system for processing ultrasound images including:
  • a real time image frame generator for receiving ultrasound data from an ultrasound probe and processing the received ultrasound data to generate a sequence of single image frames, each image frame including an active area including a plurality of pixels, each pixel having a respective intensity value;
  • an image frame processor for processing each single image frame received from the real time image frame generator to identify a seed point, said processing including adjusting the intensity value for each pixel depending on a classification of the respective intensity value to generate a coded image frame, and generating a 2D scalar potential field model of the coded image frame based on distributions of the adjusted intensity values, said 2D scalar potential field model for analysis to locate one or more candidate seed points for selection.
  • Figure 1 is a block diagram of an ultrasound scan system according to an embodiment of the present invention.
  • Figure 2A is a block diagram of a probe unit 102 suitable for use with an embodiment of the present invention;
  • Figure 2B is a functional block diagram of a processing unit suitable for use with an embodiment of the present invention.
  • Figure 3 is a flow diagram of a processing sequence for processing image frames to identify a seed point according to an embodiment
  • Figure 4 is a flow diagram of a method of identifying a seed point in an image frame according to an embodiment
  • Figure 5 is a schematic diagram of an image frame of a raster image including an active scan area
  • Figure 6 shows an example pixel indexing convention applied to a section of an active scan area shown in Figure 5;
  • Figure 7 shows an example application of a discretization stencil at a specific location i, j overlaid with a section of the active cell region of Figure 6;
  • Figure 8 shows an example matrix structure 800 for use with an embodiment, with an area of the matrix shown in a magnified view
  • Figure 9 shows an example visualization showing a scalar potential superimposed over an original ultrasound image frame.
  • FIG. 1 there is shown a block diagram of an ultrasound scan system 100 including an ultrasonic probe unit 102, a processing system 104 in data communication with the probe unit 102, and a display unit 106.
  • the probe unit 102 includes a transducer head 108 and associated electronics adapted to transmit pulsed ultrasonic signals into a target body and to receive returned echoes from the target body.
  • the ultrasound scan system 100 transmits an ultrasound signal into the target body through the probe unit 102, and receives return signals or "echoes" reflected from the target body. Return signals are received by the probe unit 102 and processed by the processing system 104 to produce ultrasound data for generating image frames of an ultrasound image for display on the display unit 106 as a real-time two dimensional (2D) ultrasound image.
  • the ultrasound scan system 100 may generate an ultrasound image with respect to a region of interest (ROI) included in the target body, and display the generated ultrasound image with respect to the ROI.
  • the ultrasound scan system 100 may generate an ultrasound image including a representation of an anatomical feature, such as an organ, within the ROI, thereby enabling a user to ascertain properties of the organ.
  • the following description relates to a method for determining the "seed" point of a bladder as may be required, for example, for a segmentation process which determines the bladder volume.
  • the method may be used for determining the "seed" point of any organ or body structure which will show a reasonably distinct perimeter in an ultrasound scan. This may include the abdominal aorta, the prostate or other organs.
  • the probe unit 102 may include a hand held ultrasonic probe unit.
  • the processing system 104 and/or the display unit may be located within the probe unit 102, or located separately.
  • the display unit 106 may include, for example, a touch screen allowing a user to control the functionality of the display unit 106 and the probe unit 102.
  • User controls 105 (ref. Fig. 2A) may be provided on the display unit 106, in the form of push buttons and a scroll-wheel. However, it is not essential that such user controls be provided.
  • the ultrasonic transducer head 108 shown here includes a transducer arrangement 201 including one or more transducer elements which are controlled to transmit pulsed ultrasonic signals into a medium to be imaged and to receive returned echoes from the medium.
  • the transducer arrangement 201 includes eight transducer elements arranged in an annular array, although other arrangements are possible. It is also possible that a different number of transducer elements may be used.
  • In use, the probe unit is held against the body of a patient adjacent to the internal part of the body which is to be imaged, with the transducer head 108 in contact with the patient's skin. Electronics 107 located in the probe unit 102 stimulate the emission of an ultrasound beam from the transducer elements of the transducer arrangement 201. This beam is reflected back to the transducer as echoes from the features to be imaged. The one or more transducer elements of the transducer arrangement 201 receive these echoes, which are amplified and converted to digital scanline data. In use, the transducer arrangement 201 may be moved by an operator or by a motor so that it covers all of a selected planar area within the patient's body. The scanline data is then processed and assembled into an image frame for processing by the processing system 104.
  • the probe unit 102 includes probe unit electronics 107 in communication with transducer arrangement 201 via interface and control electronics 204.
  • the probe unit electronics 107 includes transmit pulser 202, low noise amplifiers 203, time gain amplifier 204, filters 205, 224, Analog to Digital converter 206, Digital Signal Processing device 208, Field Programmable Gate Array 207, HV supply 218, HV monitor 220, and Digital to Analog (DAC) converter 222.
  • the transmit pulser 202 generates a short electrical pulse to create an oscillation in the one or more transducers elements of the transducer arrangement 201.
  • Each transducer element then generates an ultrasonic pressure pulse which is transmitted into the medium to be imaged.
  • the eight transducer elements then receive any reflected ultrasonic pressure pulses and convert the received pressure pulses into received electrical signals.
  • Low noise amplifiers 203 then amplify the received electrical signals for further signal conditioning, which in the present case involves applying time gain amplification (TGA) 204, and filtering the output of the time gain amplifier 204 using a bandpass or low pass filter 205 to provide an analog output signal.
  • the analog output signal is then converted to a digital output via the A/D converter 206.
  • digital output values of the A/D converter 206 are input to a field programmable gate array (FPGA) 207 in a low voltage serial format to reduce the number of printed circuit board traces.
  • the input digital values are deserialised by the FPGA 207, preferentially delayed, to provide receive focussing, buffered and transferred to the digital signal processing (DSP) device 208 as raw scanline data.
  • the digital signal processing device 208 may process each individually acquired scanline by applying a digital filter to the scanline data, detecting the envelope of the scan line data, downsampling the enveloped data, compressing the raw input data, which is preferably 12 bits, into a lower number of bits, and storing the scanline for scan conversion by a scan converter.
  • the FPGA 207 awaits the appropriate time to transmit the next pulse and repeat the process. The timing of the next transmission of a pulse is thus controlled by the FPGA 207. Having acquired a set of scanline acquisitions covering an image area, the acquired scanlines are packaged and transmitted to the processing system 104 for seed point processing and display. In this respect, it is possible that the digital signal processing (DSP) device 208 may provide the below described functionality of the processing system 104.
  • FIG. 2B there is shown a functional block diagram for a processing system 104 suitable for use with an ultrasound scan system 100 according to an embodiment of the present invention.
  • the processing system 104 may be implemented using hardware, software or a combination thereof and may be implemented in one or more computer systems or processing systems.
  • the functionality of the processing unit 104 may be provided by one or more computer systems capable of carrying out the desired functionality.
  • the processing system 104 includes a real-time (RT) image frame generator 230 and image frame processor 232.
  • the real-time image frame generator 230 receives acquired scanlines from the probe unit 102 as ultrasound data, and processes those scanlines to output a series of single image frames to image frame processor 232.
  • the image frame processor 232 includes a real-time seed point solver 234 and, optionally, an image segmenter 236.
  • the seed point solver 234 preferably implements a real-time seed point algorithm operating in a multithreaded computer processing environment to allow concurrent asynchronous operations such as receiving the incoming image frames, delivering image frames to a current exam, displaying each frame, and performing the real-time seed point algorithm and segmentation algorithm.
  • the segmentation algorithm relies on a nominated seed point in order to perform analysis and segmentation of the image.
  • the nominated seed point is computed algorithmically, rather than relying on the user to manually nominate a seed point.
  • Embodiments of the present invention may thus optimally and reliably identify a seed point in real time.
  • identifying the location of a potential bladder in an ultrasound image, and identifying a specific point indicative of its centre is difficult in ultrasound images due to the noisy nature of the images, and the presence of non-anatomical artefacts.
  • artefacts include, for example, reverberations, shadowing, speckle, and the like.
  • side-walls of the bladder can be quite weakly resolved. Seed point detection on realistic ultrasound images of the bladder should preferably cope with all these complicating factors.
  • a nominated seed point should be in a region which is a "dark region" (relative to the rest of the image frame);
  • a nominated seed point should be located around the centre of such a "dark region", not at its periphery;
  • a nominated seed point should be located at a plausible location in a scan area.
  • the seed point solver 234 will attempt to identify at step 302 an optimal seed point in a current single image frame 300, and output at most one seed point 306.
  • the seed point 306 is representative of the centre of a bladder (if present).
  • the image frame processor 232 (ref. Fig. 2B) awaits the arrival of the next image frame from the RT image frame generator 230 at step 304.
  • if a seed point is identified, it may be output to the image segmenter 236 as the seed point 306 for performing segmentation on the current image frame 300, if required.
  • the image segmenter 236 will then preferably output an overlay "segmentation" polygon for an image frame to produce a segmented image 308, which, in the case of a representation of a bladder, represents an estimated bladder outline which may be used to compute, for example, an estimated bladder volume.
  • a segmented image 308 could be out of date, that is, not based on the latest image frame 300 to be received from the RT image frame generator 230.
  • the output segmentation polygon for a respective image frame 300 may be determined to be out of date and discarded at step 310, followed by a return to waiting at step 304 for a new image frame 300. Otherwise, the segmentation polygon is output to the display unit 106 and overlaid with the corresponding source image frame 300, potentially with an estimated bladder volume as computed from the segmentation polygon using conventional techniques.
  • the seed point solver 234 preferably operates on a single image frame 300 (that is, the input image frame 300) at a time, as received during real-time scanning, and outputs either none or a single seed point 306.
  • the real-time seed point algorithm implemented by the seed point solver 234 preferably includes an image pixel processing function 400, a field solver function 402, and a seed point processing function 401.
  • the image pixel processing operation 400 preferably processes pixels of a single image frame 300 to provide a processed image frame including a reduced set of topological features.
  • the image pixel processing operation 400 reduces the complexity of the single image frame 300 by classifying pixels of the single image frame 300 by intensity to provide, as an output, the processed image in the form of a coded image 412.
  • the field solver function 402 preferably receives the pre-processed pixels of the coded image 412 and applies them to a differential equation designed to have a smooth solution including local minima (or local maxima) at the centre of distributions of dark pixels (that is, "dark blobs") of the coded image 412.
  • the field solver function 402 then produces, as an output, a set (which may be empty) of candidate seed points based on the local minima (or local maxima) in the solution to the differential equation.
  • the seed point processing function 401 processes and filters the candidate seed points to either reject them all or nominate a single 'best' seed point from the set.
  • a current single image frame 300 is received and preferably downsampled at step 404 to produce a smaller image having the same aspect ratio and comparable topological and spectral information as the single image frame 300.
  • downsampling the image frame 300 reduces the number of active pixels in the scan area without sacrificing any more pixel resolution than necessary.
  • references to "active pixels" throughout this specification are to be understood to denote pixels in an active scan area of the single image frame 300.
  • active pixels are those pixels in the single image frame 300 whose values are derived from actual captured acoustic B-mode scan data.
  • the active pixels are a subset of all the pixels in a given image frame.
  • the downsampling step 404 is conducted without prior low pass filtering of the image.
  • while this approach has the potential to introduce aliasing artifacts or high frequency noise into the downsampled image, in embodiments of the present invention this is acceptable because:
  • the downsampling step 404 preferably applies an integer downsampling factor.
  • a downsampling factor of four is used, although it will be appreciated that other integers may be used. Suitable downsampling techniques would be well understood by a skilled person.
  • the computational complexity of a method according to embodiments of the present invention may be reduced, which is preferable for real-time operation where available processing power may be constrained.
  • while downsampling each single image frame is preferable, it is not essential.
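The decimation described above can be sketched as follows. This is a minimal illustration only: the factor of four matches the embodiment, and the list-of-lists image layout is an assumption, not the patent's implementation.

```python
def downsample(frame, factor=4):
    # Integer-factor decimation with no prior low-pass filtering, as in
    # step 404: keep every `factor`-th pixel in each direction, which
    # preserves the aspect ratio of the source image frame.
    return [row[::factor] for row in frame[::factor]]

# An 8x8 frame shrinks to 2x2 with a downsampling factor of four.
frame = [[10 * r + c for c in range(8)] for r in range(8)]
small = downsample(frame, 4)
```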
  • the downsampled image is filtered at step 406 using a suitable image filter to attenuate fine detail in the image while preserving edges and overall pixel intensity in bulk regions of the image frame.
  • a non-linear morphological image filter is applied to the downsampled image (or the source image if downsampling is not applied) to clean up fine details in the image, and produce a filtered image.
  • a morphological operation of image opening has been found to be effective for this purpose, although other similar operations may also be beneficial such as closing, erosion and dilation morphological operations.
  • a morphological opening operation has been found to remove small clusters of image signal and small scale speckle while preserving bulk image features and edges.
  • the image opening morphological operation is performed with a 3x3 mask.
  • any similar operation and mask size could be used which has the desired effect.
  • although a non-linear morphological image filter is preferred, it will be appreciated that any suitable filter may be used. In this respect, whilst a variety of such filters could be used, it is expected that a simple linear low pass filter (smoothing filter) would not be preferred, as it may be too slow to remove fine detail and does not preserve edges.
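The 3x3 morphological opening described above can be sketched as erosion followed by dilation. This is a generic grayscale opening with border clipping, not the patent's specific implementation:

```python
def _filter3x3(img, reduce_fn):
    # Apply a 3x3 neighbourhood minimum (erosion) or maximum (dilation),
    # clipping the neighbourhood at the image border.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = reduce_fn(
                img[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2)))
    return out

def opening(img):
    # Morphological opening = erosion then dilation with a 3x3 mask:
    # removes small bright clusters and speckle while preserving bulk
    # image features and edges.
    return _filter3x3(_filter3x3(img, min), max)

# A lone bright pixel (speckle) is removed, while a 3x3 block survives.
speck = [[0] * 5 for _ in range(5)]
speck[2][2] = 255
cleaned = opening(speck)

block = [[0] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        block[y][x] = 255
kept = opening(block)
```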
  • at step 408, statistical analysis of the frequency of pixels of different intensity bands is performed to partition active pixels of the filtered image into different classification bands of pixel intensity.
  • a frequency histogram is used to determine suitable threshold pixel intensity values for classifying each pixel of the filtered image into one of plural value bands, with each band preferably having an associated category.
  • This frequency histogram counts the number of pixels at each level of intensity.
  • the histogram permits partitioning of the active pixels into ranges based on pixel counts.
  • the frequency histogram preferably has a range which covers the possible range of pixel intensity values and a sufficient resolution to discriminate the desired number of pixel bands. In this manner, the frequency histogram may be used to determine pixel value thresholds for, for example, anechoic pixels and midrange pixels, as will be described by way of the below example.
  • determining threshold pixel intensity values may involve designating the lowest (darkest) X% of the pixels as anechoic pixels, and the next darkest Y% as midrange pixels.
  • other values of X% and Y% may be used.
  • By designating pixels in this way it may be possible to determine an anechoic threshold value, I_anechoic, such that X% of all active pixels are below that value, and similarly a midrange threshold value, I_mid, such that (X+Y)% of all active pixels are below that value.
  • all pixels having a pixel intensity value above the midrange threshold value, I_mid, are deemed echogenic pixels (that is, pixels which are considered "bright").
  • three classification bands are provided to represent anechoic, mid grey and echogenic pixel values classifications.
  • the number of classification bands may be decreased to maximally simplify the final image, or increased to improve the discrimination between different bands of pixel values.
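The histogram-based thresholding above can be sketched as follows. The percentages passed in are illustrative assumptions (the excerpt leaves X% and Y% unspecified), and both arguments are treated as cumulative fractions of the darkest pixels:

```python
def band_thresholds(pixels, anechoic_pct, mid_pct):
    # Build a frequency histogram over the 8-bit intensity range, then
    # walk the cumulative counts: I_anechoic is the level below which
    # the darkest anechoic_pct% of active pixels fall, and I_mid the
    # level below which the darkest mid_pct% fall.
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    n = len(pixels)

    def threshold(pct):
        target = n * pct / 100.0
        cum = 0
        for level, count in enumerate(hist):
            cum += count
            if cum >= target:
                return level
        return 255

    return threshold(anechoic_pct), threshold(mid_pct)

# With a flat distribution of intensities 0..99, the darkest 10% lie at
# or below intensity 9 and the darkest 40% at or below intensity 39.
i_anechoic, i_mid = band_thresholds(list(range(100)), 10, 40)
```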
  • the active pixels (that is, the pixels of the active scan area) are individually processed to classify each pixel based on the anechoic and midrange thresholds (and thus according to which value band or partition they fall in), and to adjust each intensity value by transforming the intensity value of each active image pixel based on its classification, segmenting the image into value blocks based on the classification thresholds I_anechoic and I_mid described above.
  • Each classification band implements its own function transforming the active image pixel value, according to the intention and desired influence on the solution of pixels in that classification band.
  • active pixels p_i,j are classified and processed using the anechoic and midrange thresholds according to the following rules:
  • the purpose of the classification step 410 is to control how each band of pixels influences a "final field solution". It is desirable to have some such level of control so that variations in pixel intensity can be mapped to a more abstract, coded representation.
  • This step can be thought of as a filtering of pixels in the intensity/amplitude domain.
  • all pixels falling in the band classified as "anechoic" are assigned an intensity value which is intended to have a maximal effect on the field solution, and thus their value is set to 0 (i.e. the darkest value).
  • each value band or partition can use its own such mapping function from input pixel value to final coded value.
  • processed pixel data is symmetrically inverted about 0, so that anechoic regions have a positive value and echogenic regions a negative value as follows:
  • p_i,j ← I_mid − p_i,j, such that p_i,j will have the value I_mid in anechoic regions, zero in mid-range regions, and lesser in magnitude, though slightly varying, in echogenic regions.
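The classification and inversion steps above can be combined into one sketch. The per-band mappings below are hypothetical (the excerpt states only that anechoic pixels are set to 0 and that each band has its own mapping); the midrange mapping to I_mid is an assumption chosen so the inversion leaves those regions at zero:

```python
def code_pixel(p, i_anechoic, i_mid):
    # Hypothetical per-band mappings: anechoic pixels are forced to 0
    # (the darkest value) for maximal effect on the field solution;
    # midrange pixels are set to i_mid so they become neutral after
    # inversion; echogenic pixels are left unchanged.
    if p < i_anechoic:
        p = 0
    elif p < i_mid:
        p = i_mid
    # Symmetric inversion about 0: anechoic regions come out positive
    # (value i_mid), midrange regions zero, echogenic regions negative.
    return i_mid - p

# Example with illustrative thresholds I_anechoic = 20, I_mid = 80.
coded = [code_pixel(p, 20, 80) for p in (5, 50, 200)]
```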
  • the field solver function 402 receives the coded image pixels as an input to a 2D partial differential equation for a scalar potential that models effects of distributions of 'dark' pixels within the spatial domain of the scan area.
  • the applied 2D partial differential equation is linear in the energy potential and preferably dependent only on spatial coordinates of the scan area.
  • the solution for the potential will thus depend only on the spatial geometry of the scan area and the input pixel intensities.
  • this approach is preferred because it means the model can be discretized as a discretized differential equation and solved using tools of linear algebra, and the coefficients of the discretized differential equation will be constant for a given scan geometry.
  • the field solver function 402 includes a matrix solver 414 which receives as inputs the coded image pixels of the coded image 412, and the constant coefficients of the discretized differential equation, identified here as "Precomputed LU Data" 416.
  • the output of the matrix solver 414 is a set of values of the "energy potential" in the spatial domain of the scan, representing a 2D potential field 418.
  • the scalar potential model is insensitive to small local variations in the concentration, so that its solution is dominated by the large scale, bulk distributions of concentration.
  • the centre of dark or bright "blobs" in the input pixel intensities will correspond to local minima or maxima in the scalar potential.
  • a 2D Poisson equation is used, although it will be appreciated that alternate formulations may be possible.
  • the properties of a scalar potential described by a 2D Poisson equation are applied to detect central locations of "dark blobs" in a complex, noisy ultrasound image.
  • a scalar potential varies in space and the direction of maximum descent (which represents a force in a physical interpretation) is the direction of a vector gradient of the scalar potential.
  • ∇²φ + ρ = 0 (Equation 1, the Poisson equation), where φ represents the potential field and ρ is some abstract, spatially varying energizing matter.
  • the 'darkness' of a pixel is considered as the strength of the energizing matter ρ at that pixel's location.
  • solution of Equation 1 in the spatial domain of the scan area of the ultrasound image may simultaneously identify an entire set of candidate seed point locations.
  • solutions of Equation 1 are analysed to determine possible locations of centres of dark blobs in the ultrasound image, noting that the potential field will have a minimum at the centre of anechoic regions.
  • the spatial domain over which the Poisson equation is solved is the active scan area of an image frame, using the image pixel coordinates as the x, y Cartesian coordinate system for the calculations.
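A minimal numerical sketch of Equation 1 follows. The rectangular grid with a zero Dirichlet boundary stands in for the fan-shaped active scan area, and Jacobi relaxation stands in for the precomputed-LU direct solve of the embodiment; whether the blob centre appears as a minimum or a maximum of φ depends on the sign convention of the coded input, as the text notes:

```python
def solve_poisson(rho, iterations=2000):
    # Jacobi relaxation for  laplacian(phi) + rho = 0  on a grid with
    # unit spacing and phi = 0 on the boundary. The 5-point stencil
    # gives the update  phi = (N + S + E + W + rho) / 4.
    h, w = len(rho), len(rho[0])
    phi = [[0.0] * w for _ in range(h)]
    for _ in range(iterations):
        nxt = [row[:] for row in phi]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                nxt[y][x] = 0.25 * (phi[y - 1][x] + phi[y + 1][x]
                                    + phi[y][x - 1] + phi[y][x + 1]
                                    + rho[y][x])
        phi = nxt
    return phi

# A centred 3x3 "dark blob" of positive source strength on a 9x9 grid.
rho = [[0.0] * 9 for _ in range(9)]
for y in range(3, 6):
    for x in range(3, 6):
        rho[y][x] = 1.0
phi = solve_poisson(rho)
# The smooth potential peaks at the blob centre: a candidate seed point.
peak = max((phi[y][x], y, x) for y in range(9) for x in range(9))
```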
  • FIG. 5 shows an example image frame 450 of a raster image including an active scan area 500, shown here as a fan-shaped region.
  • the illustrated active scan area 500 includes a perimeter 502 of boundary pixels defining the boundary of the active scan area 500, and interior active pixels (inner pixels) 504, shown here as grey pixels.
  • each active pixel is either a boundary pixel or an inner pixel.
  • white pixels located in the region 506 outside of the boundary are inactive pixels and are thus not part of the spatial domain of the calculation.
  • rectangular perimeter 508 represents the boundary of the image frame 450.
  • a means of converting Equation 1 from its mathematically continuous form into an approximate discrete form is required.
  • the x, y directions will be used as reference axes in discretizing Equation 1, but are otherwise not used directly.
  • An alternate approach to referencing the 2D space of the image pixels is row, column indexing, using integer coordinates i, j for the row and column respectively.
  • the row index i corresponds to the Cartesian y-direction while the column index j corresponds to the Cartesian x-direction, so that i, j corresponds with y, x, not x, y.
  • N denotes the number of active pixels.
  • a geometric region inside a pixel with which a discrete unknown is identified will be referred to as a "cell". Accordingly, the term "cell" will be used throughout this specification when the context is conceptually distinct from a pixel (even though they correspond in actuality).
  • an ordering convention for the indexing of the unknowns φ_n is defined.
  • the cells are numbered left to right within each row, with rows taken in order from top to bottom.
  • One example of an ordering convention 600 of this type is illustrated in Figure 6 with the corresponding mapping 602 in table form.
  • cell indexing n starts from 1
  • the type code b represents a boundary cell type (such as cell number 11)
  • the type code i represents an inner cell type (such as cell number 14).
  • inactive cells (that is, cells associated with inactive pixels) are excluded from the indexing.
  • the cell type (that is, boundary, b, or inner, i)
  • the row and column indexes. As well as the mapping from index to cell data, n → (type, i, j), there is also a reverse lookup from row and column indexes to cell index; in other words, i, j → n.
  • the depicted example provides a convenient way of looping over the active cells and addressing data from the source image pixels when required, such as when mapping image pixel data to a cell.
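The forward and reverse mappings described above can be sketched as follows. This is an illustrative Python sketch only: the function name, the nested-list grid representation, and the boundary test (any 4-neighbour inactive or off the frame) are assumptions, not taken from the specification.

```python
def build_cell_index(active):
    """Build the forward map n -> (type, i, j) and the reverse map
    (i, j) -> n over the active cells, numbering row-wise from n = 1.
    A cell is classed as a boundary cell (type 'b') if any of its four
    neighbours is inactive or lies outside the frame; otherwise it is
    an inner cell (type 'i')."""
    rows, cols = len(active), len(active[0])
    forward, reverse = {}, {}
    n = 1
    for i in range(rows):
        for j in range(cols):
            if not active[i][j]:
                continue  # inactive cells are not indexed
            neighbours = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            is_boundary = any(
                not (0 <= a < rows and 0 <= b < cols and active[a][b])
                for a, b in neighbours)
            forward[n] = ('b' if is_boundary else 'i', i, j)
            reverse[(i, j)] = n
            n += 1
    return forward, reverse

# 3 x 3 fully active grid: the centre cell (n = 5) is the only inner cell.
fwd, rev = build_cell_index([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
```

Both lookups come from a single pass over the pixels, which is what makes looping over active cells and addressing source-image data cheap.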
  • the image pixels, which form a regular Cartesian grid of cells, form a coordinate system for the scalar potential field values φ_(i,j), which are mapped from the cells via Equation 2.
  • the cell-based field values φ_n are defined by nominating a means of approximating Equation 1 in a discrete form. This amounts to selecting a discretization scheme for derivatives based on the fundamental definition of the derivative. In an embodiment, a second-order central difference scheme is chosen, though other schemes are also possible.
  • φ_(i+1, j) + φ_(i−1, j) + φ_(i, j+1) + φ_(i, j−1) − 4 φ_(i, j) = −ρ_(i, j)   (Equation 4)
  • Figure 7 shows the structure 700 of the above discretization stencil at a specific location i, j overlaid with a section of the active cell region of Figure 6, showing the relationship between the row and column indexes i,j and the cell array.
  • the term ρ_(i, j) on the right-hand side of the discretization stencil may be referred to as a source term, as it represents the spatial source of the field being computed.
  • the sign of the source term is not important, as flipping the sign simply flips the field solution and does not change the location of extrema in the scalar potential. The sign will therefore be dropped, with the implicit understanding that it should be chosen so that regional minima of the scalar potential correspond to anechoic regions in the original image.
  • the value of the source term p at each cell is obtained from the corresponding pixel value of the pre-processed image frame at that cell location as described above.
  • the pre-processing applied to each frame filters the image through a pipeline of processing steps.
  • the image frame processing can be represented by a function P which operates on the set of active pixels p.
  • the source term for inner cells is therefore the corresponding pre-processed pixel value, ρ_(i, j) = P(p)_(i, j).
  • Equation A and Equation B constitute a large linear system of equations in the unknowns φ_(i, j) for i, j covering all the active cells.
  • solving this linear system involves collecting all the unknowns φ_n ↔ φ_(i, j) and source terms ρ_n ↔ ρ_(i, j) and ordering them according to the selected linear indexing convention to form a column vector of unknowns:
  • the vector ρ represents the corresponding source values at inner cells.
  • the linear system in matrix form: A φ = ρ
  • A is an N × N square matrix containing the constant coefficients of the corresponding unknowns from Equations A and B.
  • a given row with row index n of matrix A describes how the unknown φ_n is coupled to neighbouring unknowns:
  • the diagonal position contains -4 with 1 on either side of it (representing coupling to the cells to the left and right of it), plus two extra 1 values offset to the left and the right (representing coupling between rows).
  • the offset Δn from the diagonal to these distant 1 values is equal to the difference in n index from one cell to the one above it in the 2D array, i.e.:
  • Δn = n_(i+1, j) − n_(i, j)
  • N is typically large (N > 10³), and the matrix is very sparse, since there are on average fewer than 5 non-zero entries per row while each row has N columns. A space-efficient matrix storage format should therefore be used.
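The assembly of A described above can be sketched as follows. This is a sketch only: it assumes a simple Dirichlet condition (φ = 0) for boundary cells in place of the specification's Equation B, uses 0-based cell indexing, and relies on SciPy's sparse matrix types as one example of the space-efficient storage format mentioned above.

```python
import numpy as np
from scipy.sparse import lil_matrix

def assemble_matrix(active):
    """Assemble the sparse N x N coefficient matrix A for the 5-point
    stencil over the active cells. Inner cells get the -4 diagonal with
    four 1 entries; boundary cells get an identity row here as a
    stand-in Dirichlet condition (phi = 0)."""
    active = np.asarray(active, dtype=bool)
    reverse, cells = {}, []
    for i in range(active.shape[0]):
        for j in range(active.shape[1]):
            if active[i, j]:
                reverse[(i, j)] = len(cells)  # 0-based cell index n
                cells.append((i, j))
    N = len(cells)
    A = lil_matrix((N, N))  # build in LIL, convert to CSR for solving
    for n, (i, j) in enumerate(cells):
        neighbours = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        if all(nb in reverse for nb in neighbours):  # inner cell
            A[n, n] = -4.0
            for nb in neighbours:
                A[n, reverse[nb]] = 1.0  # includes the row-offset couplings
        else:  # boundary cell
            A[n, n] = 1.0
    return A.tocsr(), cells

A, cells = assemble_matrix(np.ones((3, 3)))
```

On the 3 × 3 grid the single inner cell (n = 4) couples to cells 1 and 7, i.e. the between-row offset Δn is 3, matching the formula above.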
  • the factorization of A is the dominant cost in the solution of the linear system, by at least 2 orders of magnitude. Since A depends only on the geometry of the active pixels, and such geometries are known in advance, the factorization need only be done once. In the context of real-time seed point selection, if A is not yet known then factorization need only happen once at the commencement of each scan. The solving using back substitution can then be performed once per scan frame in a matter of tens of milliseconds.
  • the LU data may be precomputed by a "Matrix LU Factorizer" module 416. Such a module could precompute the factorization at the start of a scan, or alternately precompute it offline and store it in a database on non-volatile storage, shown as 'LU Factorization Data' in the figure.
  • Precomputing the LU data offline and storing it is preferable if the loading time from said storage is less than the time to perform LU factorization on demand.
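The factor-once, solve-per-frame pattern can be illustrated with a dense LU factorization from SciPy. This is a toy sketch: a 1D Laplacian stands in for the (much larger, sparse) matrix A, and in practice a sparse solver such as scipy.sparse.linalg.splu would be used at N > 10³.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Toy stand-in for A: a 1D Laplacian (the real A is the sparse 2D stencil matrix).
N = 50
A = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))

lu, piv = lu_factor(A)  # expensive step: done once per scan geometry

rng = np.random.default_rng(0)
for frame in range(3):
    rho = rng.random(N)              # per-frame source vector from pixel data
    phi = lu_solve((lu, piv), rho)   # cheap back substitution, once per frame
```

The (lu, piv) pair is exactly the kind of data that can be serialized to non-volatile storage and reloaded, as the text suggests, when loading is faster than refactorizing.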
  • the field 418 is then analysed by the seed point processing function 401 to locate candidate seed points.
  • locations in the scan area of local minima or maxima of the scalar potential field are determined to be candidate seed points, except locations directly on the boundary of the scan area.
  • dark "blobs" in the image correspond to minima or maxima in the scalar potential field depends only on the sign of the pixel data as fed into the differential equation.
  • Figure 9 shows an example visualization where the scalar potential is visualized superimposed over the original ultrasound image frame.
  • three distinct regional minima 902, 904, 906 in the scalar potential 900 are visible.
  • the central point of each of these local minima will be considered as a seed point candidate.
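Locating the regional minima of the solved field can be sketched with a 3 × 3 minimum filter. The use of scipy.ndimage here is an assumed implementation choice; border locations are excluded, matching the rule that points directly on the scan-area boundary are not candidates.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def regional_minima(field):
    """Return (i, j) locations where the field equals the minimum of its
    3 x 3 neighbourhood, excluding locations on the border of the array."""
    is_min = field == minimum_filter(field, size=3, mode='nearest')
    is_min[0, :] = is_min[-1, :] = False   # exclude boundary locations
    is_min[:, 0] = is_min[:, -1] = False
    return [tuple(p) for p in np.argwhere(is_min)]

# A paraboloid-shaped field with a single minimum at (2, 2).
ii, jj = np.mgrid[0:5, 0:5]
field = (ii - 2.0) ** 2 + (jj - 2.0) ** 2
```

Each returned location is a candidate seed point; in the Figure 9 example this would yield the three regional minima 902, 904 and 906.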
  • the candidate seed points 422 are then individually subjected to "heuristic filtering" 424, applying a finite set of rules based on knowledge and expected behaviour of the system; points which fail to meet the criteria are removed from the set of candidate seed points 422.
  • a candidate seed point may be rejected outright if one or more of the following criteria are satisfied:
  • the seed point y-coordinate is too "deep" (for example, more than about two-thirds of the way down the image from the top);
  • the seed point x-coordinate lies outside a central band of high likelihood in the image where all reasonable bladder centres are found to lie.
  • the central band is symmetric about the vertical centre line of the image and covers about one-third of the image area.
  • the mean pixel intensity in the neighbourhood of the seed point in the pre-processed image exceeds the anechoic threshold I_a by some predetermined threshold. This may indicate that the image in the locality of the seed point is too echogenic to be considered a bladder region.
  • the predetermined threshold is 10% above the anechoic threshold.
  • the size of the local neighbourhood used for computing the mean should be tuned. In an embodiment, a neighbourhood size of 5 x 5 cells is used.
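The three rejection criteria can be sketched together as a single predicate. The function name, argument layout, and concrete fractions below are illustrative assumptions drawn from the examples above (two-thirds depth, one-third-wide central band, 5 × 5 neighbourhood, 10% over threshold).

```python
import numpy as np

def passes_heuristics(x, y, image, anechoic_threshold,
                      max_depth_frac=2 / 3, band_frac=1 / 3, nbhd=5):
    """Return False if the candidate seed point at pixel (x, y) is too
    deep, lies outside the central band, or sits in a neighbourhood whose
    mean intensity exceeds the anechoic threshold by more than 10%."""
    height, width = image.shape
    if y > max_depth_frac * height:                   # too "deep" in the image
        return False
    if abs(x - width / 2) > band_frac * width / 2:    # outside central band
        return False
    h = nbhd // 2                                     # 5 x 5 neighbourhood mean
    region = image[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1]
    if region.mean() > 1.10 * anechoic_threshold:     # too echogenic
        return False
    return True

image = np.zeros((90, 90))
```

Any candidate failing one test is rejected outright before ranking.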
  • at step 426, a numerical rank value is computed as a tuned sum of numerical metrics, each of which measures a quantitative property of the seed point itself and/or the ultrasound image at the location of that seed point.
  • the ranking metrics cover all the relevant factors that influence the quality of a seed point location and the likelihood that it corresponds to the centre of the bladder.
  • seed point candidates that are not rejected are ranked by a numerical value R.
  • the ranking may be composed of the weighted sum of multiple distinct factors F_k, each of which has a distinct weighting coefficient w_k: R = Σ_k w_k F_k
  • the rank R is formulated so that the lower the value the better, in which case the candidate seed point with the lowest overall rank will be selected as the "best" seed point.
  • the purpose of the weighting coefficients is to tune the relative influence of the numerical factors F_k on the final outcome, such that the best seed point is nominated as often as possible in the judgement of a human operator.
  • the various factors F_k in general each have different units, so the weights play a role in scaling each to a comparable numerical range. Note that the absolute value of the weights w_k is not important, only their values relative to one another and the units they scale.
  • the proximity to the last known seed point introduces a degree of cross-frame coherence between consecutive frames which helps stabilise the seed point location by penalising points which are inconsistent across frames.
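A minimal sketch of the weighted ranking follows, assuming illustrative (hypothetical) factor names and weights; lower rank is better, and the proximity-to-last-seed-point factor supplies the cross-frame coherence described above.

```python
def rank(factors, weights):
    """R = sum_k w_k * F_k over the named factors; lower is better."""
    return sum(weights[k] * factors[k] for k in weights)

# Hypothetical factors for two candidate seed points.
candidates = [
    {'depth_frac': 0.4, 'mean_intensity': 12.0, 'dist_to_last_seed': 5.0},
    {'depth_frac': 0.6, 'mean_intensity': 30.0, 'dist_to_last_seed': 40.0},
]
# Illustrative weights scaling each factor to a comparable numerical range.
weights = {'depth_frac': 10.0, 'mean_intensity': 1.0, 'dist_to_last_seed': 0.5}

best = min(candidates, key=lambda f: rank(f, weights))
```

Only the relative magnitudes of the weights matter, as the text notes; scaling all three by the same constant leaves the ordering unchanged.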
  • the set of seed points 422 is sorted by rank value at step 428.
  • the most favourable seed point (which may be the one with the lowest or highest rank value, depending how it is formulated) is nominated as the "best seed point" 306 and output to the segmenter 236 (ref. Figure 3).
  • the single seed point with the lowest overall rank value is chosen as the best seed point for that frame and passed to the image segmentation algorithm. In the case that all seed points are rejected for a frame, no segmentation is performed for that frame, though the record of the last known good seed point is preserved for use in future frames.
  • in the algorithm described here, the anechoic regions represent an attractive "pulling" force.
  • the potential field computed at each point effectively represents how much "energy” it would take to get from that point to the edge of the scan area against that attractive force. Consequently, the scalar potential field will have a minimum at the centre of anechoic regions.
  • Image frames are processed using non-linear filters and used as the "energizing matter" in the field calculation, which simultaneously identifies a family of possible seed point candidates. Such points are filtered and ranked, and the best seed point after ranking (if there is one) is used as the seed point for image segmentation.
  • the algorithm may be implemented efficiently for real-time operation by selection of suitable data structures and data management strategies on a low-powered, handheld computing device.
  • in the described embodiments the invention is implemented primarily using computer software; in other embodiments the invention may be implemented primarily in hardware using, for example, hardware components such as application-specific integrated circuits (ASICs). In other embodiments, the invention may be implemented using a combination of both hardware and software.


Abstract

A method of ultrasound image processing for selecting a seed point is disclosed. The method comprises: receiving an image frame comprising an active area comprising a plurality of pixels, each pixel having a respective intensity value; adjusting the intensity value for each pixel according to a classification of the respective intensity value to generate a coded image frame; generating a two-dimensional (2D) scalar potential field model of the coded image frame based on distributions of the adjusted intensity values; and analysing the 2D scalar potential field model to locate one or more candidate seed points for selection.
PCT/AU2015/000591 2014-09-29 2015-09-29 Système et procédé de traitement d'image ultrasonore WO2016049681A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2014903919A AU2014903919A0 (en) 2014-09-29 Ultrasound image processing system and method
AU2014903919 2014-09-29

Publications (1)

Publication Number Publication Date
WO2016049681A1 true WO2016049681A1 (fr) 2016-04-07

Family

ID=55629143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2015/000591 WO2016049681A1 (fr) 2014-09-29 2015-09-29 Système et procédé de traitement d'image ultrasonore

Country Status (1)

Country Link
WO (1) WO2016049681A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019042962A1 (fr) * 2017-09-01 2019-03-07 Koninklijke Philips N.V. Localisation de structures anatomiques dans des images médicales
CN113768533A (zh) * 2020-06-10 2021-12-10 无锡祥生医疗科技股份有限公司 超声显影装置和超声显影方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1030187A2 (fr) * 1999-02-19 2000-08-23 The John P. Robarts Research Institute Méthode automatisée de segmentation pour l'imagerie ultrasonique en trois dimensions
US20050027188A1 (en) * 2002-12-13 2005-02-03 Metaxas Dimitris N. Method and apparatus for automatically detecting breast lesions and tumors in images
US20080139938A1 (en) * 2002-06-07 2008-06-12 Fuxing Yang System and method to identify and measure organ wall boundaries
US20080260229A1 (en) * 2006-05-25 2008-10-23 Adi Mashiach System and method for segmenting structures in a series of images using non-iodine based contrast material
WO2010066007A1 (fr) * 2008-12-12 2010-06-17 Signostics Limited Méthode de diagnostic médical et appareil afférent
US20120053467A1 (en) * 2010-08-27 2012-03-01 Signostics Limited Method and apparatus for volume determination
WO2012140147A2 (fr) * 2011-04-12 2012-10-18 Dublin City University Traitement d'images échographiques


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019042962A1 (fr) * 2017-09-01 2019-03-07 Koninklijke Philips N.V. Localisation de structures anatomiques dans des images médicales
CN113768533A (zh) * 2020-06-10 2021-12-10 无锡祥生医疗科技股份有限公司 超声显影装置和超声显影方法
CN113768533B (zh) * 2020-06-10 2024-05-14 无锡祥生医疗科技股份有限公司 超声显影装置和超声显影方法


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15847346

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15847346

Country of ref document: EP

Kind code of ref document: A1