WO2016141449A1 - Computer-assisted focused assessment with sonography in trauma - Google Patents


Info

Publication number
WO2016141449A1
WO2016141449A1 (PCT/CA2015/050179)
Authority
WO
WIPO (PCT)
Prior art keywords
organ
images
ultrasound
image
probe
Prior art date
Application number
PCT/CA2015/050179
Other languages
French (fr)
Inventor
Stergios Stergiopoulos
Konstantinos Plataniotis
Mahdi Marsousi
Original Assignee
Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of The Department Of National Defence
Priority date
Filing date
Publication date
Application filed by Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of The Department Of National Defence filed Critical Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of The Department Of National Defence
Priority to PCT/CA2015/050179 priority Critical patent/WO2016141449A1/en
Publication of WO2016141449A1 publication Critical patent/WO2016141449A1/en

Classifications

    • G16H 50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • A61B 8/085: Detecting organic movements or changes (e.g. tumours, cysts, swellings) for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A61B 8/483: Diagnostic techniques involving the acquisition of a 3D volume of data
    • A61B 8/5215: Devices using data or image processing for diagnosis, involving processing of medical diagnostic data
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/11: Region-based segmentation
    • G06T 7/33: Image registration using feature-based methods
    • G16H 30/20: ICT for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 40/63: ICT for the local operation of medical equipment or devices
    • G06T 2207/10136: 3D ultrasound image
    • G06T 2207/20048: Transform domain processing
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20124: Active shape model [ASM]
    • G06T 2207/20128: Atlas-based segmentation
    • G06T 2207/20161: Level set
    • G06T 2207/30084: Kidney; Renal
    • G16H 50/50: ICT for simulation or modelling of medical disorders

Definitions

  • the present invention relates to computer-assisted probe placement which facilitates Focused Assessment with Sonography in Trauma (FAST) examination by non-trained operators.
  • in this context, trauma refers to an internal free-fluid accumulation.
  • Trauma may be detected by way of computed tomography (CT), magnetic resonance imaging (MRI), or ultrasound imaging.
  • 3-D ultrasound provides the location of the ultrasound signal in space along with a volumetric representation of internal organs. Accordingly, 3-D ultrasound images are also sometimes referred to as ultrasound volumes.
  • FAST requires a trained FAST examiner, such as a radiologist, to control and move an ultrasound probe around a patient's body.
  • a FAST examiner referred to in this document as an operator, views images while moving an ultrasound probe. In doing so, the operator searches the images for darkened regions representing low-echoic, blood-filled areas. When a dark region is located, the operator uses his or her technical expertise and knowledge to determine whether the detected dark area represents trauma, a fluid-carrying organ, or a shadow.
  • the FAST examination includes visualizing the hepatorenal recess of the subhepatic space, commonly referred to as Morison's pouch.
  • a Morison's pouch view is optimally acquired when the ultrasound probe is placed at the intersection of the Horizontal Subxiphoid (HS) line and the right mid-axillary line, with the probe marker directed toward the head.
  • the HS line is an imaginary horizontal line connecting the xiphoid process to the mid-axillary line.
  • when trauma is present, Morison's pouch fills with blood. In FAST examination, this appears as a dark (low-echoic) region between the kidney and the liver.
  • An expert with knowledge of human anatomy and ultrasound imaging would be able to position the ultrasound probe properly on the trauma patient's body and determine the presence of trauma.
  • An examiner lacking such skill and knowledge, however, would be unlikely to properly place the probe and segment the kidney, leading to potential misdiagnosis and loss of life. Therefore, the current state of the art requires that a FAST examiner have expertise in human anatomy and ultrasound imaging.
  • existing methods examine the kidneys for trauma detection. Such methods require operator intervention to determine the presence of trauma and to obtain the valuable 3-D ultrasound images that indicate the precise location of the trauma on the patient in real time. These methods, therefore, require operator expertise and may be time consuming.
  • the kidney has a unique appearance and structure in 3-D ultrasound volumes, such that its distinct shape distinguishes it from all other internal organs. Since the right kidney is visible in the Morison's pouch view, kidney segmentation can be used as a cornerstone of an automated ultrasound diagnosis system.
  • applying a robust and accurate kidney detection is crucial to eliminating the need for human intervention to manually initialize the kidney segmentation process
  • Prevost et al. proposed another method to aid operators in ultrasound kidney segmentation in a report entitled "Kidney detection and real-time segmentation in 3D contrast-enhanced images" at the 9th IEEE International Symposium on Biomedical Imaging (ISBI).
  • input 3-D ultrasound images are enhanced to show the kidney as brighter than other regions.
  • Prevost et al then apply a 3-D deformable model to these images.
  • the operator is required to manually select landmarks inside and/or outside the kidneys. Their method then directly imposes the selected landmarks on the energy function.
  • the present invention provides systems, methods, and devices relating to the detection and characterization of a patient's kidney using 3-D ultrasound.
  • An operator is provided with instructions for proper placement of a probe on a patient.
  • the system determines if the probe is properly placed; if it is not, the operator is continuously prompted to properly place the probe.
  • Ultrasound images are obtained using the probe and the patient's kidney is detected in the images using an organ atlas database. Once the kidney is detected, if the probe is misaligned, corrective instructions for the proper alignment of the probe are sent to the operator.
  • the present invention provides a computer-assisted probe placement that facilitates the Focused Assessment with Sonography in Trauma (FAST) examination.
  • the present invention provides an automated ultrasound probe placement for FAST examination.
  • the present invention facilitates FAST examination by non-trained operators in emergency situations.
  • the present invention provides a method for detecting a presence of trauma in a patient, the method comprising: a) providing an operator with an indication of a location on said patient for an initial placement of a probe; b) obtaining images of an internal area of said patient using said probe; c) determining if at least one specific organ is present in images obtained in step b); d) in the event said at least one specific organ is not present in said images, repeating steps a)-c) until said at least one specific organ is present in said images; e) determining an alignment of said at least one specific organ in said images; f) determining if a desired view of said internal area is found in said images; g) in the event said desired view is not found in said images, relating said alignment of said at least one specific organ to an alignment of said probe; h) determining corrective instructions for probe placement based on results of step g), said corrective instructions being operative to adjust a view of said internal area
  • the present invention provides a method for compiling a database for use in detecting a presence of a specific organ in an ultrasound image, the method comprising: a) pre-processing at least one training image; b) manually segmenting an organ in said at least one training image; c) generating an average shape model for said organ based on said at least one training image and said organ segmented in step b) ; d) extracting texture features from said organ segmented in step b) ; e) training kernel based support vector machines to classify portions of images as being organ or non-organ based on said at least one training image; f) storing said kernel based support vector machines and said organ average shape model in said database.
  • the present invention provides a system for detecting an organ in at least one ultrasound image of an internal area of a patient comprising:
  • - data storage containing a database, said database including :
  • said support vector machine being for classifying portions of said at least one image as being organ or non-organ based on said at least one training image
  • Figure 1 shows the overall block diagram of the automated ultrasound probe navigation system
  • Figure 2 shows a graphical representation of the initial probe placement
  • Figure 3 schematically illustrates the steps for the first stage of generating a kidney atlas database, including selecting training ultrasound volumes, selecting a reference volume, registering the training ultrasound volumes on the reference volume, and manually segmenting the registered volumes;
  • Figure 4 is a block diagram for the second stage of generating a kidney atlas database including
  • Figure 5 schematically illustrates the steps in the third stage of generating a kidney atlas database of the present invention, including training spatially distributed Kernel-based Support Vector Machines (KSVM) ;
  • Figure 6 schematically details steps in kidney segmentation of input ultrasound volumes
  • Figure 7 is a flow chart detailing the steps in an automated kidney segmentation process by generation of a binarized mask specifying voxels with a higher probability of being kidney tissue;
  • Figure 8 shows the graphical user interface of the automated ultrasound probe navigation;
  • Figure 9 shows the relation of the detected kidney orientation and ultrasound probe rotation.
  • the present invention involves an organ atlas database, an automated kidney segmentation process, an ultrasound probe navigation system, and an automated trauma diagnosis system to help guide a FAST examiner, or an untrained operator, to detect trauma.
  • Input 3D ultrasound images are automatically compared with the 3D images from an organ atlas database using real-time image processing misalignment calculations to obtain a correct view.
  • Automated organ segmentation using the organ atlas database is then performed and probe navigational commands are generated and sent to an operator to find the correct Morison's pouch view. After verifying that a correct view has been obtained, trauma is diagnosed using the segmented kidney.
  • the organ of interest will be referred to as a kidney and a desired view will be referred to as a Morison's pouch view.
  • the present invention may be applied to any other organ and to any desirable view of any anatomical area of a mammal.
  • the ultrasound image gathered should include a view of the kidney.
  • a partial or full image of the kidney should be present and this can be detected by the system. Detection of at least a portion of the kidney is possible as the kidney's unique shape and positioning can be compared to predetermined reference images of the kidney stored in the database. If any portion of the acquired image is not found to contain at least a section of a kidney that conforms to the expected shape and location of the kidney, then the operator must have placed the probe at an incorrect location. The operator is thus prompted to start the process anew.
  • the placement and orientation of the kidney is compared to its expected location and orientation as determined by the training images in the database.
  • FIG. 1 is a flowchart detailing the steps involved in a method according to one aspect of the invention.
  • the operator is provided with automated guidance for a placement of the ultrasound probe (step 10).
  • the relevant ultrasound volumes are then gathered using the probe (step 20).
  • a kidney detection is then performed (step 30).
  • Decision 40 determines if the kidney has been detected or if the kidney exists in the ultrasound volumes. If a kidney has not been detected in the ultrasound images, the operator is again guided to a proper placement of the ultrasound probe.
  • once the kidney has been detected, step 50 determines if a proper Morison's pouch view has been achieved.
  • the probe's misalignment is determined based on a misalignment of the kidney from the ultrasound images (step 70).
  • a corrective probe alignment command is then sent to the operator (step 80) and the method loops back to that of acquiring ultrasound images (step 20).
  • at step 60, if the proper Morison's pouch view has been achieved, then the kidney in the ultrasound images is segmented (step 90). This data is then used for an automated trauma diagnosis (step 100).
  • the invention provides an operator with instructions or indications for a proper placement of the ultrasound probe.
  • Ultrasound images are then gathered using the probe. Whether the probe is properly placed or not is automatically determined by analyzing the gathered images using the kidney as a reference point. If the kidney or a portion of the kidney is not detected from the ultrasound images, the operator is again prompted with instructions for a proper initial placement of the probe. If at least a portion of the kidney is detected, then calculations are performed to determine any misalignment of the kidney in the images. Any misalignment of the kidney is related back to a misalignment of the probe placement. Corrective instructions for the probe placement are then automatically provided to the operator. The process repeats until the images indicate a correct view of the area around the kidney. Using this correct view of the area, the kidney is segmented and the images are then used for trauma diagnosis .
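  • The closed-loop guidance just described (guide placement, acquire, detect, verify the view, correct, segment, diagnose) can be sketched as a simple control loop. The helper functions below are hypothetical stand-ins for the real acquisition and image-processing components; a toy simulation is used so the loop terminates.

```python
# Hypothetical helper stubs standing in for the acquisition and
# image-processing components described in the patent.
def acquire_volume(step):
    # Simulated acquisition: the kidney becomes visible after the first
    # correction, and residual misalignment shrinks each iteration.
    return {"kidney_visible": step >= 1, "misalignment": max(0, 3 - step)}

def detect_kidney(volume):
    return volume["kidney_visible"]

def view_is_correct(volume):
    return volume["misalignment"] == 0

def corrective_command(volume):
    return f"rotate probe by {volume['misalignment']} degrees"

def navigate(max_iterations=10):
    """Guidance loop mirroring steps 10-100 of Figure 1 (sketch only)."""
    commands = []
    for step in range(max_iterations):
        volume = acquire_volume(step)                          # step 20
        if not detect_kidney(volume):                          # steps 30-40
            commands.append("re-place probe at initial location")  # step 10
            continue
        if not view_is_correct(volume):                        # steps 50-60
            commands.append(corrective_command(volume))        # steps 70-80
            continue
        commands.append("segment kidney and diagnose")         # steps 90-100
        return commands
    return commands
```

The loop structure (detect, then verify the view, then correct) is taken from the flowchart description; everything else is illustrative.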
  • Figure 2 is a sample graphical user interface (GUI) which may be used with the invention.
  • the GUI presents the operator with a front image of a human body and an indication of where the ultrasound probe is to be placed on a patient's body for a proper initial placement.
  • the operator is guided to place the ultrasound probe at the intersection of the HS and mid-axillary lines to ensure that the right kidney is at least partially present and is in the correct orientation.
  • Figure 2 or a variant thereof may be used, other implementations are also possible.
  • the indication as to where to place the probe may be projected directly onto the patient.
  • automated, pre-recorded auditory instructions, predetermined written instructions (with or without accompanying diagrams), and any other visual or auditory means to guide the operator may also be used.
  • these initial instructions are provided to the operator so that the operator is able to locate a proper initial probe placement.
  • the automated navigation system then processes the acquired 3-D ultrasound images, also referred to as ultrasound volumes, to detect the kidney.
  • the position and orientation of the detected kidney are used to calculate potential ultrasound probe misalignment.
  • corrective instructions for the operator are calculated.
  • the navigation system's instructions guide the operator to move or rotate the ultrasound probe toward the correct placement.
  • the corrective instructions to the operator can take multiple forms.
  • the operator can be guided by a projection of the calculated misalignment onto the region where the ultrasound probe is placed.
  • the operator can be guided by an auditory navigation command sent to the operator.
  • These corrective navigation commands can also be displayed on a GUI or projected on the patient.
  • the corrective instructions can take the form of text or symbols presented to the operator using any suitable means.
  • Ultrasound probe misalignments are determined by the navigation system using an organ atlas database.
  • the atlas database is generated in advance using manually segmented organs in a training set of 3-D ultrasound images.
  • the organ atlas database stores information required for segmenting an organ of interest in input ultrasound volumes. Such information may include: the reference volume, segmented organs in the training set of ultrasound images, the organ average shape model, and trained spatially distributed kernel based support vector machines (KSVMs) .
  • the organ atlas database is used to determine if a potential organ of interest in the input image (i.e. the image obtained with the probe) is a kidney and if the kidney is in a proper alignment in the image. It should be noted that, in one implementation of the invention, the organ atlas database is part of the navigation system. The database is generated separately and is used whenever the system is used.
  • the main step is that of generating an organ average shape model.
  • a sub-step is that of storing the segmentations as binarized masks in the atlas database.
  • texture features are then extracted as 3-D volumes from the registered training volumes.
  • KSVMs Kernel-based Support Vector Machines
  • the process begins with the selection of a training set of 3-D ultrasound images (step 200).
  • a graphical user interface (GUI) designed to allow a user to select and load ultrasound volumes from the computer storage is used in one implementation.
  • speckle noise of each training ultrasound volume is reduced using an anisotropic diffusion filter (step 210).
  • Partial differential equations are employed to spatially control smoothing power based on the distance of voxels to object edges. Voxels located away from object edges are more highly smoothed. Voxels close to object edges are only smoothed in the direction parallel to the object edge.
  • the anisotropic diffusion filter operates based on the following PDE formula,
  • Equation (1): $\frac{\partial V_{ds}}{\partial t} = \mathrm{div}\left[c(q(x,y,z;t))\,\nabla V_{ds}(x,y,z;t)\right]$, where $\nabla$ and $\mathrm{div}$ are the gradient and divergence operators, respectively.
  • $c(\cdot)$ is the diffusion coefficient and $q(x,y,z;t)$ is the instantaneous coefficient of variation; $c(q(x,y,z;t))$ is defined as a function of $q$.
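  • The edge-preserving diffusion described above can be sketched as follows. The patent's filter uses a speckle-specific diffusion coefficient based on the instantaneous coefficient of variation, whose exact form is not given here; the standard Perona-Malik exponential coefficient below is an assumed stand-in.

```python
import numpy as np

def anisotropic_diffusion(volume, iterations=5, kappa=30.0, dt=0.1):
    """Edge-preserving smoothing sketch (Perona-Malik form).

    kappa controls edge sensitivity; dt is the explicit Euler step.
    """
    v = volume.astype(np.float64).copy()
    for _ in range(iterations):
        grads = np.gradient(v)  # one gradient component per axis
        # Diffusion coefficient: near 1 in flat regions, small near edges,
        # so smoothing is suppressed across strong intensity boundaries.
        flux = [g * np.exp(-(g / kappa) ** 2) for g in grads]
        # Divergence of the flux drives the update (Equation (1)).
        div = sum(np.gradient(f, axis=i) for i, f in enumerate(flux))
        v += dt * div
    return v
```

Applied to a speckle-like noisy volume, the filter reduces intensity variance while leaving edges comparatively sharp.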
  • in step 220, the intensity inhomogeneity of the input ultrasound volume is removed. This may also be referred to as "bias correction".
  • the intensity inhomogeneity is modeled as a multiplicative bias field, $V_{ds}(x,y,z) = V_{bc}(x,y,z)\,f(x,y,z)$,
  • where $f(x,y,z)$ is the bias field
  • and $V_{bc}(x,y,z)$ is the bias-corrected volume.
  • let $p_{V_{ds}}$, $p_{V_{bc}}$, and $p_f$ be the probability density functions of $V_{ds}$, $V_{bc}$, and $f$, respectively.
  • in the log domain, where the multiplicative model becomes additive, the probability density functions are related by convolution, $p_{v_{ds}} = p_{v_{bc}} * p_f$.
  • $p_{V_{ds}}$ is calculated at each voxel based on the histogram of a sub-volume centered at the voxel.
  • $p_{V_{bc}}$ is unknown, and $p_f$ is simplified to be a Gaussian distribution with zero mean and an unknown variance.
  • once the log-domain estimate $v_{bc}(x,y,z)$ is found, the bias-corrected volume $V_{bc}(x,y,z) = \exp(v_{bc}(x,y,z))$ is obtained.
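  • The log-domain structure of this correction can be illustrated with a deliberately simplified sketch. The patent estimates the bias field statistically (local histograms and a Gaussian bias model); here the slowly varying log-domain field is approximated by heavy Gaussian smoothing, a common simplification, not the patent's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bias_correct(volume, sigma=8.0, eps=1e-6):
    """Simplified multiplicative bias-field removal in the log domain."""
    log_v = np.log(volume + eps)              # multiplicative -> additive
    bias_log = gaussian_filter(log_v, sigma)  # slowly varying component
    # Subtract the estimated field; restore the overall intensity level.
    return np.exp(log_v - bias_log + bias_log.mean())
```

On a volume with a smooth intensity ramp, the corrected volume is substantially flatter than the input.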
  • a reference volume is selected (step 230).
  • a suitable user interface may be used.
  • the best quality ultrasound volume is selected as the reference volume.
  • a GUI designed to allow the operator to manually select landmarks can be used.
  • Such a GUI may provide the ability to select corresponding pairs of voxels (landmarks) on the reference volume and on each training ultrasound volume.
  • an affine transformation is fitted on the selected landmarks, with an aim to register the training ultrasound volume on the reference volume.
  • the registered volume is denoted $V_{reg}$.
  • the affine transformation matrix is defined as a composition of rotation, scaling, and translation components,
  • where $R_x$, $R_y$, and $R_z$ are rotation matrices about the x-, y- and z-axes, respectively.
  • Matrix $A$ is defined to minimize the registration error over the selected landmark pairs.
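  • A least-squares fit of an affine matrix to landmark pairs can be sketched as follows; the function names are illustrative, and the patent's parameterization (explicit rotation matrices) is replaced here by a direct linear solve.

```python
import numpy as np

def fit_affine(src, dst):
    """Fit a 4x4 affine matrix A mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding landmark coordinates.
    Solves min ||src_h @ X - dst|| in the least-squares sense.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # M is (4, 3)
    A = np.eye(4)
    A[:3, :] = M.T
    return A

def apply_affine(A, pts):
    """Apply a 4x4 affine matrix to (N, 3) points."""
    pts_h = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    return (pts_h @ A.T)[:, :3]
```

With at least four non-coplanar landmark pairs the fit is exact when the correspondence really is affine.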
  • the organs, such as kidneys, in the registered training volumes are manually segmented.
  • the results are stored as binarized masks, $B^n$ where $n \in \{1, \dots, N_{Training}\}$.
  • each mask consists of 1s inside the organ and 0s outside of the organ, as shown in one section of Figure 4.
  • These results are stored in the organ atlas database (step 260).
  • a kidney average shape model is generated to initialize kidney segmentation (Figure 4). This may result in a faster and more accurate segmentation.
  • the organ's average shape model is generated using the manual segmentations that are already aligned, as described above and illustrated in Figure 3. As such, the voxel-wise average is $KASM(x,y,z) = \frac{1}{N_{Training}} \sum_{n=1}^{N_{Training}} B^n(x,y,z)$,
  • where KASM is the kidney average shape model. KASM is then saved in the kidney atlas database (Figure 4).
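  • The voxel-wise average of aligned binary masks, and its use as a segmentation initializer, is a small computation; the threshold below is an assumed illustrative choice.

```python
import numpy as np

def average_shape_model(masks):
    """Voxel-wise average of equally shaped, aligned 0/1 masks (the B^n).

    Returns a probability-like volume in [0, 1].
    """
    return np.stack([np.asarray(m, float) for m in masks]).mean(axis=0)

def initial_shape(kasm, threshold=0.5):
    """Binarize the average model to initialize the segmentation."""
    return (kasm >= threshold).astype(np.uint8)
```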
  • texture features from the registered ultrasound volumes can be extracted.
  • the texture features are joined with the manual segmentations and these features are then used to train KSVM classifiers to automatically segment the kidney portion of an input ultrasound volume.
  • 3-D Gabor wavelets, sinusoidal waves modulated by 3-D Gaussian functions, may be used to extract texture features from ultrasound volumes as follows,
  • $G(x,y,z) = S \exp\left(-\frac{x_r^2}{2\sigma_x^2} - \frac{y_r^2}{2\sigma_y^2} - \frac{z_r^2}{2\sigma_z^2}\right) \exp(j 2\pi f x_r)$, with $(x_r, y_r, z_r)^T = R\,(x,y,z)^T$,
  • where $S$ and $f$ are a normalization scale and the frequency of the complex sinusoid, respectively.
  • $\sigma_x$, $\sigma_y$, and $\sigma_z$ are the Gaussian envelope widths in the x-, y-, and z-axes, respectively.
  • $R$ is the 3-D rotation matrix with 3 parameters: $\theta_x$, $\theta_y$, and $\theta_z$.
  • Any number can be selected to define $\sigma$, $\theta_x$, $\theta_y$, or $\theta_z$; for example, $\theta_x, \theta_y, \theta_z \in \{0, 45, 90, 135\}$ degrees.
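  • Constructing a 3-D Gabor kernel (a Gaussian envelope modulating a complex sinusoid along a rotated axis) can be sketched as follows; parameter names and defaults are illustrative, not the patent's exact formulation.

```python
import numpy as np

def rotation_matrix(tx, ty, tz):
    """3-D rotation from angles (radians) about the x, y and z axes."""
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def gabor_kernel_3d(size=9, sigma=(2.0, 2.0, 2.0), freq=0.25,
                    angles=(0.0, 0.0, 0.0), scale=1.0):
    """3-D Gabor kernel: Gaussian envelope times a complex carrier
    oriented along the rotated x-axis."""
    half = size // 2
    coords = np.mgrid[-half:half + 1, -half:half + 1, -half:half + 1]
    xr, yr, zr = rotation_matrix(*angles) @ coords.reshape(3, -1)
    sx, sy, sz = sigma
    envelope = np.exp(-(xr**2 / (2 * sx**2)
                        + yr**2 / (2 * sy**2)
                        + zr**2 / (2 * sz**2)))
    carrier = np.exp(2j * np.pi * freq * xr)  # complex sinusoid along x_r
    return (scale * envelope * carrier).reshape(size, size, size)
```

Convolving a volume with a bank of such kernels at several orientations and scales yields the volumetric texture features used below.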
  • Texture information of a given organ, such as a kidney, may be locally classified by dividing the entire volume into a set of sub-volumes.
  • each volumetric feature, $F^l$, may be divided into K sub-volumes, $F^{l,k}$, where $k \in \{1, \dots, K\}$. Furthermore, the sub-volumes may be divided unevenly, or evenly such as with 50 percent overlapping. For each pair of sub-volume $k$ and feature $l$, a sub-set of $N_{Training}$ related sub-volumes, $\{F_n^{l,k}\}$, may be collected.
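  • The even division with 50 percent overlap mentioned above can be sketched as follows; the function name and return format are illustrative assumptions.

```python
import numpy as np

def split_subvolumes(volume, sub_shape, overlap=0.5):
    """Divide a volume into evenly spaced sub-volumes with the given
    fractional overlap (0.5 = 50 percent overlap between neighbours).

    Returns a list of (origin, sub_volume) pairs.
    """
    steps = [max(1, int(s * (1 - overlap))) for s in sub_shape]
    subs = []
    for x in range(0, volume.shape[0] - sub_shape[0] + 1, steps[0]):
        for y in range(0, volume.shape[1] - sub_shape[1] + 1, steps[1]):
            for z in range(0, volume.shape[2] - sub_shape[2] + 1, steps[2]):
                block = volume[x:x + sub_shape[0],
                               y:y + sub_shape[1],
                               z:z + sub_shape[2]]
                subs.append(((x, y, z), block))
    return subs
```

For an 8x8x8 volume split into 4x4x4 blocks at 50 percent overlap, the stride is 2 voxels per axis, giving 27 sub-volumes.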
  • FIG. 5 schematically shows the remaining stages of generating the kidney atlas database.
  • the various stages are applied in parallel to multiple datasets once the relevant texture features have been extracted using the Gabor wavelets .
  • the sub-volumes are extracted and then the related sub-volumes in the sub-sets are converted into vectors in a process called vectorization.
  • the vectors in each sub-set are vertically concatenated to create a single vector for each sub-set, $F^k$. Then, vectors related to each sub-volume are horizontally concatenated to create the feature matrix, $M^k$.
  • a KSVM classifier may then be used to classify voxels into kidney and non-kidney tissues. Rather than minimizing the classification error directly, KSVM maximizes the margin distance between the two classes to obtain maximum generalization ability. In addition, using a kernel-based operation, KSVM is able to solve nonlinear classification problems by mapping samples into a higher-dimensional space.
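  • The maximum-margin, kernel-based classification can be illustrated with an off-the-shelf RBF-kernel SVM; here scikit-learn's `SVC` stands in for the patent's spatially distributed KSVMs, and the radially separable toy data is invented purely for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Toy, nonlinearly separable "texture feature" vectors: class depends on
# the radius of the vector, which a linear classifier cannot separate.
rng = np.random.default_rng(0)
inner = rng.normal(0, 0.5, (100, 3))                       # "kidney" class
outer = rng.normal(0, 0.5, (100, 3))
outer = outer / np.linalg.norm(outer, axis=1, keepdims=True) * 3.0  # ring
X = np.vstack([inner, outer])
y = np.array([1] * 100 + [0] * 100)                        # 1 = kidney

# The Gaussian (RBF) kernel implicitly maps samples into a
# higher-dimensional space where a maximum-margin boundary separates
# the two classes, even though they are not linearly separable here.
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, y)
accuracy = clf.score(X, y)
```

In the patent's scheme, one such classifier would be trained per sub-volume index k, on rows of the feature matrix M^k labeled by the manual segmentations.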
  • a Gaussian kernel function may be selected for the KSVMs.
  • the manual segmentations may be used to specify the class of each row in each sub-volume feature matrix, $M^k$.
  • sub-volumes, $B^{n,k}$, are extracted from each $B^n$ where $n \in \{1, \dots, N_{Training}\}$ and $k \in \{1, \dots, K\}$.
  • each sub-volume, $B^{n,k}$, is then vectorized, and sub-volumes related to the same $k$ are concatenated, creating $B^k$ for all $k \in \{1, \dots, K\}$.
  • $M^k$ and $B^k$ may both be used to train $KSVM^k$.
  • the trained KSVM classifiers, $\{KSVM^1, \dots, KSVM^K\}$, may then be stored in the atlas database.
  • the above-described atlas database is used with the automated trauma diagnosis system to segment the organ of interest and diagnose trauma (Figure 7).
  • the segmenting of organs in the input 3-D ultrasound images involves a number of steps, as follows: i. Reducing speckle noise from the training set of 3-D ultrasound volumes and removing intensity inhomogeneity field using a bias correction approach;
  • ii. Applying the trained KSVMs in the atlas database to classify voxels into kidney (ones) and non-kidney (zeros) voxels;
  • the ultrasound volumes, $V_{in}$, are enhanced by reducing speckle noise. This may be effectuated by an anisotropic diffusion filter. Bias correction is then performed to remove intensity inhomogeneity, yielding the enhanced volume.
  • the misalignment of a kidney in an input ultrasound volume with respect to the kidney in the reference volume is removed. This may be performed by applying an automated rigid-body registration. Since the trained classifiers are spatially distributed, a misalignment in the kidney position results in a failure to correctly detect voxels pertinent to kidney tissue. For this purpose, rather than relying on a landmark-based approach,
  • kidney segmentation should preferably be automated.
  • an optimal direction strategy must be applied to modify the rigid-body registration parameters.
  • the rigid-body registration uses an affine transformation, which may include 7 parameters: 3 translations, 3 rotations, and 1 scaling.
  • the parameter vector, θ_iter, at each iteration would then consist of θ_n ∈ {t_x, t_y, t_z, θ_x, θ_y, θ_z, s}, where n ∈ {1, ..., 7}.
  • each parameter, θ_n, is then modified at each iteration.
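  The update equation is not reproduced above, but one plausible reading of the "optimal direction" strategy is a coordinate-wise search that perturbs each θ_n in both directions and keeps the step that lowers a registration cost. The quadratic cost and step sizes below are stand-ins for an image-similarity metric between the input and reference volumes:

```python
import numpy as np

def optimal_direction_step(theta, cost, steps):
    """One pass of coordinate-wise descent over the 7 affine parameters
    (t_x, t_y, t_z, theta_x, theta_y, theta_z, s)."""
    theta = theta.copy()
    for n in range(len(theta)):
        best = cost(theta)
        for direction in (+1.0, -1.0):
            trial = theta.copy()
            trial[n] += direction * steps[n]
            if cost(trial) < best:
                best, theta = cost(trial), trial
    return theta

# Stand-in cost: squared distance to a known optimum (a real system would
# evaluate an image-similarity metric instead).
target = np.array([2.0, -1.0, 0.5, 0.1, 0.0, -0.2, 1.0])
cost = lambda th: float(np.sum((th - target) ** 2))

theta = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0])  # near-identity start
steps = np.full(7, 0.1)
for _ in range(50):
    theta = optimal_direction_step(theta, cost, steps)
print(cost(theta) < 0.1)
```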
  • 3-D Gabor wavelets may then be used to extract features from the registered volume, V_reg, for example {F_1, ..., F_Q}.
  • sub-volumes may then be extracted from the extracted features, for example {F_reg,l,k}, where k ∈ {1, ..., K} and l ∈ {1, ..., 128}.
  • the extracted sub-volumes may then be vectorized and concatenated for each sub-volume to generate the feature matrix, M_reg,k.
  • voxels in each sub-volume are classified into kidney and non-kidney tissues using the related trained spatially distributed classifier, KSVM_k.
  • the feature matrix M_reg,k is the input to KSVM_k.
  • the output is a classification result as a vector of zeros and ones.
  • the vector is reshaped to form a 3-D binarized sub-volume. All binarized sub-volumes may then be combined to generate the 3-D binarized mask, B_reg, as shown in Figure 6.
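  The reshape-and-combine step can be sketched as follows, assuming (hypothetically) that the K classifiers are laid out on a regular 3-D grid of equally sized sub-volumes:

```python
import numpy as np

def assemble_mask(outputs, grid, sub_shape):
    """Reshape each classifier's 0/1 output vector into a 3-D sub-volume
    and tile the sub-volumes back into one binarized mask."""
    gz, gy, gx = grid
    sz, sy, sx = sub_shape
    mask = np.zeros((gz * sz, gy * sy, gx * sx), dtype=np.uint8)
    k = 0
    for i in range(gz):
        for j in range(gy):
            for l in range(gx):
                mask[i*sz:(i+1)*sz, j*sy:(j+1)*sy, l*sx:(l+1)*sx] = \
                    outputs[k].reshape(sub_shape)
                k += 1
    return mask

# Hypothetical outputs: one 0/1 vector per spatially distributed classifier.
grid, sub_shape = (2, 2, 2), (4, 4, 4)
outputs = [np.ones(64, dtype=np.uint8) * (k % 2) for k in range(8)]
B_reg = assemble_mask(outputs, grid, sub_shape)
print(B_reg.shape, int(B_reg.sum()))
```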
  • Φ(x, y, z; t) > 0 specifies voxels inside the segmentation region
  • Φ(x, y, z; t) < 0 specifies voxels outside the segmentation region.
  • and Θ_reg is the calculated misalignment of the kidney image with respect to the reference kidney shape, with θ_misalignment = θ_reg + Θ_reg.
  • the affine deformation is a global deformable model used to align the deformable model on the organ of interest such as a kidney (highlighted in the binarized mask). Region-based level-set propagation must then be applied as a local deformable model to finely segment the kidney.
  • the 3-D image domain to be Ω
  • the average intensity levels of kidney regions to be c_1 and of non-kidney regions to be c_0, both calculated based on V_reg and B_reg.
  • ∇ is the gradient operator
  • δ and H are the Dirac delta and Heaviside functions, respectively
  • μ, ν, λ_1 and λ_2 are regularization parameters.
  • the Euler-Lagrange equation may be used to minimize J.
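  The energy functional J is not reproduced above; based on the quantities defined (Ω, c_1, c_0, ∇, δ, H, μ, ν, λ_1, λ_2), a plausible reconstruction following the standard region-based (Chan-Vese) form is:

```latex
J(\Phi, c_1, c_0) =
    \mu \int_{\Omega} \delta(\Phi)\,\lvert\nabla\Phi\rvert \, d\Omega
  + \nu \int_{\Omega} H(\Phi) \, d\Omega
  + \lambda_1 \int_{\Omega} \lvert V_{\mathrm{reg}} - c_1\rvert^2 \, H(\Phi) \, d\Omega
  + \lambda_2 \int_{\Omega} \lvert V_{\mathrm{reg}} - c_0\rvert^2 \, \bigl(1 - H(\Phi)\bigr) \, d\Omega
```

Minimizing J via the Euler-Lagrange equation then yields the gradient flow

```latex
\frac{\partial\Phi}{\partial t} = \delta(\Phi)\left[
    \mu\,\mathrm{div}\!\left(\frac{\nabla\Phi}{\lvert\nabla\Phi\rvert}\right)
  - \nu
  - \lambda_1 \bigl(V_{\mathrm{reg}} - c_1\bigr)^2
  + \lambda_2 \bigl(V_{\mathrm{reg}} - c_0\bigr)^2 \right]
```

which is propagated until convergence.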
  • FIG. 8 schematically illustrates the navigation window that provides the operator with instructions in one embodiment of the invention.
  • the probe is provided with a marker to indicate a specific probe orientation (e.g. a "top" for an initial probe placement).
  • the calculated registration parameter θ is related to the probe misalignment on the patient's body.
  • the automated probe navigation system of the present invention may begin by asking the operator to compensate for improper orientation of the ultrasound probe from the initial placement, removing the rotational misalignment by rotating the ultrasound probe.
  • the coordinate system used for the following explanation is illustrated in Figure 9. After removing the rotational misalignment, corrective probe translation commands can be sent to the operator.
  • t_y represents the misalignment of the ultrasound probe along the mid-axillary line toward cephalad
  • t_z represents the misalignment perpendicular to the mid-axillary line.
  • Translation commands are iteratively sent to the operator until the desired view of Morison's pouch is attained.
  • the kidney image misalignment, θ_misalignment, is obtained in two steps: (1) aligning the enhanced input 3-D image on the reference 3-D image and (2) aligning the binarized 3-D image on the expected alignment of the kidney shape model in the generated atlas database.
  • Step (1) provides a rough alignment of the kidney shape image on the expected kidney alignment
  • step (2) performs a fine alignment of the input kidney image onto the reference kidney shape. It should be noted that step (1) is crucial for step (2) to work correctly. If step (1) is not properly done, the KSVM classifiers will not be able to properly generate the binarized volume.
  • the navigation system continuously tracks the calculated kidney image misalignment and stops probe navigation when the misalignment falls within a predetermined acceptable range.
  • θ_misalignment = [t_x, t_y, t_z, θ_x, θ_y, θ_z, s]^T is recalculated after each time the operator moves and/or rotates the ultrasound probe. If the translation and orientation parameters, including t_x, t_y, t_z, θ_x, θ_y and θ_z, are smaller than their related threshold values, the navigation system concludes that the probe is aligned within an acceptable range of the reference kidney alignment and the navigation process stops.
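  A minimal sketch of this stopping criterion follows; the threshold values, units, and parameter names are illustrative assumptions:

```python
# Hypothetical per-parameter limits that end probe navigation once every
# misalignment component is within its acceptable range.
THRESHOLDS = {"t_x": 5.0, "t_y": 5.0, "t_z": 5.0,              # mm (assumed)
              "theta_x": 5.0, "theta_y": 5.0, "theta_z": 5.0}  # degrees (assumed)

def probe_aligned(misalignment):
    """misalignment: dict of the 6 translation/rotation parameters
    (the scaling s is not part of the stopping criterion above)."""
    return all(abs(misalignment[name]) < limit
               for name, limit in THRESHOLDS.items())

print(probe_aligned({"t_x": 1.0, "t_y": -2.0, "t_z": 0.5,
                     "theta_x": 3.0, "theta_y": -1.0, "theta_z": 0.0}))
print(probe_aligned({"t_x": 12.0, "t_y": 0.0, "t_z": 0.0,
                     "theta_x": 0.0, "theta_y": 0.0, "theta_z": 0.0}))
```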
  • the system includes an ultrasound imaging sub-system which has a probe for use in gathering the ultrasound images.
  • the system includes at least one processor for processing the gathered images and for analyzing these images.
  • the system includes an organ atlas database that has reference images of the organ, trained classifiers which determine if portions of the image are organ tissue or not, as well as an average organ shape model. These reference images, classifiers, and average organ shape model are used by the system to detect the presence (partial or full) of the organ in the gathered images .
  • the database can be stored in any suitable data storage device or system such as a hard drive, solid state drive, or an online storage system .
  • Various aspects of the invention may be implemented as a built-in package in a 3D ultrasound machine or as an add-on package installed on a personal computer connected to a 3D ultrasound imaging device.
  • any 3D ultrasound machine which supports the ultrasound research interface (URI) or any other protocol exporting ultrasound raw data may be used with the invention.
  • suitable ultrasound imaging devices for use with the various aspects of the invention include the Siemens SONOLINE Antares ultrasound system and the Hitachi HiVision 5500.
  • the method steps of the invention may be embodied in sets of executable machine code stored in a variety of formats such as object code or source code. Such code is described generically herein as programming code, or a computer program for simplification. Clearly, the executable machine code may be integrated with the code of other programs, implemented as subroutines, by external program calls or by other techniques as known in the art.
  • the embodiments of the invention may be executed by a computer processor or similar device programmed in the manner of method steps, or may be executed by an electronic system which is provided with means for executing these steps .
  • an electronic memory means, such as computer diskettes, CD-ROMs, Random Access Memory (RAM), Read Only Memory (ROM) or similar computer software storage media known in the art, may be programmed to execute such method steps.
  • electronic signals representing these method steps may also be transmitted via a communication network.
  • Embodiments of the invention may be implemented in any conventional computer programming language.
  • preferred embodiments may be implemented in a procedural programming language (e.g. "C") or an object-oriented language (e.g. "C++").
  • Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components.
  • Embodiments can be implemented as a computer program product for use with a computer system.
  • implementations may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium.
  • the medium may be either a tangible medium (e.g., optical or electrical).
  • a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink-wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server over a network (e.g., the Internet or World Wide Web).
  • some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware.
  • Still other embodiments of the invention may be implemented as entirely hardware, or entirely software (e.g., a computer program product) .
  • Embodiments of the invention may also be implemented using sequential or parallelized programming methods.
  • parallelized implementations of the methods and processes of the invention may be used with multi-core processors, such as Intel Xeon processors or Intel Extreme processors, or they can be implemented on GPU processors using the CUDA language.
  • the generated software package may be used either as a solution built into a 3D ultrasound imaging device or as an installable or executable add-on software package on a personal computer (including a desktop workstation or laptop) connected to a 3D ultrasound imaging device supporting the ultrasound research interface or any other protocol to transfer ultrasound raw data.

Abstract

Systems, methods, and devices relating to the detection and characterization of a patient's kidney using 3-D ultrasound. An operator is provided with instructions for proper placement of a probe on a patient. The system determines if the probe is properly placed; otherwise, the operator is continuously prompted to properly place the probe. Ultrasound images are obtained using the probe and the patient's kidney is detected in the images using an organ atlas database. Once the kidney is detected, if the probe is misaligned, corrective instructions for the proper alignment of the probe are sent to the operator.

Description

COMPUTER-ASSISTED FOCUSED ASSESSMENT WITH SONOGRAPHY IN TRAUMA
FIELD OF THE INVENTION
[0001] The present invention relates to the field of
ultrasound imaging. More specifically, this invention relates to computer-assisted probe placement which facilitates Focused Assessment with Sonography in Trauma (FAST) examination by non-trained operators.
BACKGROUND
[0002] The term trauma refers to internal free fluid caused by internal bleeding. Emergency surgical intervention is often required to save trauma patients' lives. The ability of non-skilled technicians to rapidly detect trauma is desirable and potentially life-saving.
[0003] Trauma may be detected by way of computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound imaging. Trauma detection by ultrasound is particularly favorable when compared to CT and MRI as it is non-invasive, imposes no restrictions of use on patients, is portable, and may provide real-time imaging.
[0004] Recent technological advancements in ultrasound
beamforming have resulted in rapid three-dimensional (3-D) ultrasound imaging. 3-D ultrasound provides the location of the ultrasound signal in space along with a volumetric representation of internal organs. Accordingly, 3-D ultrasound images are also sometimes referred to as ultrasound volumes.
[0005] Knowledgeable practitioners in the field of sonography often refer to trauma detection by ultrasound as Focused Assessment with Sonography in Trauma, or FAST. FAST requires a trained FAST examiner, such as a radiologist, to control and move an ultrasound probe around a patient's body. A FAST examiner, referred to in this document as an operator, views images while moving an ultrasound probe. In doing so, the operator searches the images for darkened regions representing low-echoic, blood-filled areas. When a dark region is located, the operator uses his or her technical expertise and knowledge to determine whether the detected dark area represents trauma, a fluid-carrying organ, or a shadow.
[0006] Placement of an ultrasound probe which provides a view of the anatomical region known as Morison's pouch is known by those skilled in the art as being highly sensitive for trauma detection. This view is of particular interest as it shows both the right side of the liver and the right kidney, effectively visualizing the hepatorenal recess of the subhepatic space, commonly referred to as Morison's pouch. The Morison's pouch view also precludes possible misdiagnosis of trauma due to shadows cast by the presence of kidney stones.
[0007] A correct Morison's pouch view shows the kidney's unique and easily distinguishable structure in a particular location and orientation. As such, it is of vital importance that an operator properly segment, or identify, the kidney when obtaining a Morison's pouch view to provide an accurate trauma diagnosis. However, the inherent low quality of ultrasound images and the natural gaps along the kidney boundary may cause a technician to misalign the ultrasound probe, leading to poor kidney detection and segmentation (identification).
[0008] A Morison's pouch view is optimally acquired by placing the ultrasound probe at the intersection of the Horizontal Subxiphoid (HS) line and the right mid-axillary line, with the probe marker directed toward the head. The HS line is an imaginary horizontal line connecting the xiphoid process to the mid-axillary line.
[0009] When a trauma patient is laid in a supine position, Morison's pouch fills with blood. In a FAST examination, this appears as a dark (low-echoic) region between the kidney and the liver. An expert with knowledge of human anatomy and ultrasound imaging would be able to position the ultrasound probe properly on the trauma patient's body and determine the presence of trauma. An examiner lacking such skill and knowledge, however, would be unlikely to properly place the probe and segment the kidney, leading to potential misdiagnosis and loss of life. Therefore, the current state of the art requires that a FAST examiner have expertise in human anatomy and ultrasound imaging.
[0010] Methods and techniques have been developed to aid operators in segmenting internal organs such as kidneys for trauma detection. However, such methods require operator intervention to determine the presence of trauma and to obtain the valuable 3-D ultrasound images that indicate the precise location of the trauma on the patient in real time. These methods, therefore, require operator expertise and may be time consuming. [0011] Among all internal organs, the kidney has a unique structural appearance in 3-D ultrasound volumes, such that its distinct shape distinguishes it from all other internal organs. Since the right kidney is visible in the Morison's pouch view, kidney segmentation can be used as a cornerstone of an automated ultrasound diagnosis system. To provide automated kidney segmentation, applying robust and accurate kidney detection is crucial to eliminating the need for human intervention to manually initialize the kidney segmentation process.
[0012] For example, in 2005, Fernandez and Lopez reported a method aiding operators in ultrasound kidney segmentation (Medical Image Analysis, vol. 9, no. 1, pages 1-23). Fernandez and Lopez's semi-automated approach requires the operator to manually outline kidney boundaries on individual input two-dimensional (2-D) ultrasound images using Markov random fields and active contours. These manipulated images are then automatically reconstructed into a 3-D kidney shape. Such a method is time consuming and requires technical expertise on the part of the FAST examiner.
[0013] In 2012, Prevost et al. proposed another method to aid operators in ultrasound kidney segmentation in a report entitled "Kidney detection and real-time segmentation in 3D contrast-enhanced images" at the 9th IEEE International Symposium on Biomedical Imaging (ISBI). In this method, input 3-D ultrasound images are enhanced to show the kidney as brighter than other regions. Prevost et al. then apply a 3-D deformable model to these images. In this method, the operator is required to manually select landmarks inside and/or outside the kidneys. Their method then directly imposes the selected landmarks on the energy functional to prevent the segmentation process from becoming trapped in a local minimum. The success rate of this approach depends on the quality of the input contrast-enhanced ultrasound images. The performance of this approach has not been reported on 3-D ultrasound images. Therefore, similar to the method proposed by Fernandez and Lopez, the method proposed by Prevost et al. does not provide an automated and rapid method to aid a FAST examiner in kidney segmentation and trauma detection.
[0014] There is therefore a need to mitigate, if not overcome, the shortcomings of the prior art and, preferably, to provide an automated approach to segmenting kidneys for FAST examiners lacking expertise and knowledge in ultrasound imaging and human anatomy.
SUMMARY OF INVENTION
[0015] The present invention provides systems, methods, and devices relating to the detection and characterization of a patient's kidney using 3-D ultrasound. An operator is provided with instructions for proper placement of a probe on a patient. The system determines if the probe is properly placed; otherwise, the operator is continuously prompted to properly place the probe. Ultrasound images are obtained using the probe and the patient's kidney is detected in the images using an organ atlas database. Once the kidney is detected, if the probe is misaligned, corrective instructions for the proper alignment of the probe are sent to the operator. [0016] The present invention provides a computer-assisted probe placement that facilitates the focused
assessment with sonography in trauma (FAST)
examination by non-trained operators using 3D ultrasound systems.
[0017] The present invention provides an automated ultrasound probe placement for FAST examination.
[0018] The present invention facilitates FAST examination by non-trained operators in emergency situations.
[0019] In a first aspect, the present invention provides a method for detecting a presence of trauma in a patient, the method comprising: a) providing an operator with an indication of a location on said patient for an initial placement of a probe; b) obtaining images of an internal area of said patient using said probe; c) determining if at least one specific organ is present in images obtained in step b); d) in the event said at least one specific organ is not present in said images, repeating steps a) - c) until said at least one specific organ is present in said images; e) determining an alignment of said at least one specific organ in said images; f) determining if a desired view of said internal area is found in said images; g) in the event said desired view is not found in said images, relating said alignment of said at least one specific organ to an alignment of said probe; h) determining corrective instructions for probe placement based on results of step g), said corrective instructions being operative to adjust a view of said internal area to thereby provide said operator with a view with a correct alignment of said at least one specific organ; i) sending said corrective instructions to said operator; j) obtaining images of said internal area of said patient using said probe; k) determining if at least one specific organ is present in images obtained in step j);
l) repeating steps e) - k) until said desired view is found in said images. In a second aspect, the present invention provides a method for compiling a database for use in detecting a presence of a specific organ in an ultrasound image, the method comprising: a) pre-processing at least one training image; b) manually segmenting an organ in said at least one training image; c) generating an average shape model for said organ based on said at least one training image and said organ segmented in step b); d) extracting texture features from said organ segmented in step b); e) training kernel based support vector machines to classify portions of images as being organ or non-organ based on said at least one training image; f) storing said kernel based support vector machines and said organ average shape model in said database. In a third aspect, the present invention provides a system for detecting an organ in at least one
ultrasound image of an internal area of a patient, the system comprising:
- a processor for processing said at least one ultrasound image;
- a data storage containing a database, said database including:
- a reference image of said organ;
- at least one training image containing at least one segmented organ;
- at least one kernel based support vector machine, said support vector machine being for classifying portions of said at least one image as being organ or non-organ based on said at least one training image; and
- an average shape model for said organ based on said at least one training image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] The embodiments of the present invention will now be described by reference to the following Figures, in which identical reference numerals in different figures indicate identical elements, and in which:
Figure 1 shows the overall block diagram of the automated ultrasound probe navigation system;
Figure 2 shows a graphical representation of the initial probe placement;
Figure 3 schematically illustrates the steps for the first stage of generating a kidney atlas database, including selecting training ultrasound volumes, selecting a reference volume, registering the training ultrasound volumes on the reference volume, and manually segmenting the registered volumes;
Figure 4 is a block diagram for the second stage of generating a kidney atlas database, including generating the average kidney shape model;
Figure 5 schematically illustrates the steps in the third stage of generating a kidney atlas database of the present invention, including training spatially distributed Kernel-based Support Vector Machines (KSVMs);
Figure 6 schematically details steps in kidney segmentation of input ultrasound volumes;
Figure 7 is a flow chart detailing the steps in an automated kidney segmentation process by generation of a binarized mask specifying voxels with a higher probability of being kidney tissue; Figure 8 shows the graphical user interface of the automated ultrasound probe navigation; and
Figure 9 shows the relation of the detected kidney orientation and ultrasound probe rotation.
[0023] The Figures are not to scale and some features may be exaggerated or minimized to show details of particular elements, while related elements may have been eliminated to prevent obscuring novel aspects. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present invention.
DETAILED DESCRIPTION
[0024] In one aspect, the present invention involves an organ atlas database, an automated kidney segmentation process, an ultrasound probe navigation system, and an automated trauma diagnosis system to help guide a FAST examiner, or an untrained operator, to detect trauma. Input 3-D ultrasound images are automatically compared with the 3-D images from an organ atlas database using real-time image processing misalignment calculations to obtain a correct view. Automated organ segmentation using the organ atlas database is then performed, and probe navigational commands are generated and sent to an operator to find the correct Morison's pouch view. After verifying that a correct view has been obtained, trauma is diagnosed using the segmented kidney. [0025] For the explanation and description below, the organ of interest will be referred to as a kidney and a desired view will be referred to as a Morison's pouch view. However, as should be clear to a person of skill in the art, the present invention may be applied to any other organ and to any desirable view of any anatomical area of a mammal.
[0026] Once the operator is provided with indications as to where to place the ultrasound probe, it should be clear that the ultrasound image gathered should include a view of the kidney. A partial or full image of the kidney should be present and this can be detected by the system. Detection of at least a portion of the kidney is possible as the kidney's unique shape and positioning can be compared to predetermined reference images of the kidney stored in the database. If any portion of the acquired image is not found to contain at least a section of a kidney that conforms to the expected shape and location of the kidney, then the operator must have placed the probe at an incorrect location. The operator is thus prompted to start the process anew. Once the kidney has been detected in the acquired images, the placement and orientation of the kidney is compared to its expected location and orientation as determined by the training images in the database. By continuously comparing the shape of the acquired image of the kidney with the expected shape of a kidney,
translations and rotations to conform the shape of the acquired image of the kidney to the expected shape can be calculated. These calculated translations and rotations are then sent to the operator as corrective instructions for the placement of the probe. [0027] Figure 1 is a flowchart detailing the steps involved in a method according to one aspect of the invention. In this method, the operator is provided with automated guidance for a placement of the ultrasound probe (step 10). The relevant ultrasound volumes are then gathered using the probe (step 20). A kidney detection is then performed (step 30). Decision 40 then determines if the kidney has been detected or if the kidney exists in the ultrasound volumes. If a kidney has not been detected in the ultrasound images, the operator is again guided to a proper placement of the ultrasound probe. On the other hand, if the kidney has been detected, then a calculation of the kidney misalignment is performed (step 50). Decision 60 then determines if a proper Morison's pouch view has been achieved. In the event the proper view has not been achieved, the probe's misalignment is determined based on a misalignment of the kidney from the ultrasound images (step 70). A corrective probe alignment command is then sent to the operator (step 80) and the method loops back to that of acquiring ultrasound images (step 20).
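The control flow of Figure 1 described above can be sketched as follows; every callable here is a hypothetical stand-in for the corresponding imaging or analysis stage (steps 10 through 100):

```python
def navigation_loop(acquire, detect_kidney, misalignment, view_ok,
                    send_command, segment, diagnose, guide_initial,
                    max_iters=100):
    """Control flow of Figure 1 (steps 10-100), with the imaging and
    image-processing stages passed in as callables."""
    guide_initial()                      # step 10: initial placement guidance
    for _ in range(max_iters):
        volume = acquire()               # step 20: gather ultrasound volume
        if not detect_kidney(volume):    # steps 30/40: kidney present?
            guide_initial()
            continue
        m = misalignment(volume)         # step 50: kidney misalignment
        if view_ok(m):                   # step 60: Morison's pouch view?
            return diagnose(segment(volume))  # steps 90/100
        send_command(m)                  # steps 70/80: corrective command
    return None

# Toy simulation: the misalignment shrinks each time a command is sent.
state = {"m": 10.0}
result = navigation_loop(
    acquire=lambda: state["m"],
    detect_kidney=lambda v: True,
    misalignment=lambda v: v,
    view_ok=lambda m: m < 1.0,
    send_command=lambda m: state.update(m=state["m"] * 0.5),
    segment=lambda v: v,
    diagnose=lambda s: "ok",
    guide_initial=lambda: None,
)
print(result)
```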
[0028] Returning to step 60, if the proper Morison's pouch view has been achieved, then the kidney in the ultrasound images is segmented (step 90). This data is then used for an automated trauma diagnosis (step 100).
[0029] In one aspect, the invention provides an operator with instructions or indications for a proper placement of the ultrasound probe. Ultrasound images are then gathered using the probe. Whether the probe is properly placed or not is automatically determined by analyzing the gathered images using the kidney as a reference point. If the kidney or a portion of the kidney is not detected from the ultrasound images, the operator is again prompted with instructions for a proper initial placement of the probe. If at least a portion of the kidney is detected, then calculations are performed to determine any misalignment of the kidney in the images. Any misalignment of the kidney is related back to a misalignment of the probe placement. Corrective instructions for the probe placement are then automatically provided to the operator. The process repeats until the images indicate a correct view of the area around the kidney. Using this correct view of the area, the kidney is segmented and the images are then used for trauma diagnosis .
[0030] Figure 2 is a sample graphical user interface (GUI) which may be used with the invention. The GUI presents the operator with a front image of a human body and an indication of where the ultrasound probe is to be placed on a patient's body for a proper initial placement. As can be seen in Figure 2, for a proper Morison's pouch view, the operator is guided to place the ultrasound probe at the intersection of the HS and mid-axillary lines to ensure that the right kidney is at least partially present and is in the correct orientation.
[0031] It should be noted that while the GUI illustrated in
Figure 2 or a variant thereof may be used, other implementations are also possible. As an example, the indication as to where to place the probe on the patient may actually be projected onto the actual patient. Similarly, automated, pre-recorded auditory instructions, predetermined written instructions (with or without accompanying diagrams), and any other visual or auditory means to guide the operator may also be used. As with the GUI example, these initial instructions are provided to the operator so that the operator is able to locate a proper initial probe placement.
[0032] The automated navigation system then processes the acquired 3-D ultrasound images, also referred to as ultrasound volumes, to detect the kidney.
[0033] The position and orientation of the detected kidney are used to calculate potential ultrasound probe misalignment. In the event that probe misalignments are found, corrective instructions for the operator are calculated. The navigation system's instructions guide the operator to move or rotate the ultrasound probe toward the correct placement. Similar to the initial placement of the probe, the corrective instructions to the operator can take multiple forms. The operator can be guided by a projection of the calculated misalignment onto the region where the ultrasound probe is placed. Alternatively, the operator can be guided by an auditory navigation command sent to the operator. These corrective navigation commands can also be displayed on a GUI or projected on the patient. The corrective instructions can take the form of text or symbols presented to the operator using any suitable means.
[0034] Ultrasound probe misalignments are determined by the navigation system using an organ atlas database. The atlas database is generated in advance using manually segmented organs in a training set of 3-D ultrasound images. The organ atlas database stores information required for segmenting an organ of interest in input ultrasound volumes. Such information may include: the reference volume, segmented organs in the training set of ultrasound images, the organ average shape model, and trained spatially distributed kernel-based support vector machines (KSVMs).
[0035] The organ atlas database-based approach consists of
(i) generating an organ atlas database, and (ii) segmenting the organ of interest in input 3-D ultrasound images. Essentially, the organ atlas database is used to determine if a potential organ of interest in the input image (i.e. the image obtained with the probe) is a kidney and if the kidney is in a proper alignment in the image. It should be noted that, in one implementation of the invention, the organ atlas database is part of the navigation system. The database is generated separately and is used whenever the system is used.
[0036] To generate an organ atlas database, multiple steps are involved and these steps are outlined in the schematic charts of Figures 3, 4 and 5. In the description of the steps for generating such an organ atlas database, the organ of interest will be the right kidney and the desired view will be, as above, a view of Morison's pouch. Of course, other organs of interest and other desired views may be used. Generating the requisite organ atlas database involves three main steps: a) pre-processing training volumes and segmenting these training volumes, b) generating an organ average shape model, and c) extracting features from the training volumes and training KSVMs. These three main steps are illustrated schematically in Figures 3, 4 and 5.

[0037] In Figure 3, the sub-steps for the main step of pre-processing and manually segmenting training volumes are:
i. Selecting a training set of 3-D ultrasound images;
ii. Reducing speckle noise in the training set of 3-D ultrasound images;
iii. Removing the intensity inhomogeneity field using a bias correction approach;
iv. Selecting a reference volume;
v. Manually selecting corresponding points on the source and reference 3-D ultrasound images (landmark selection);
vi. Registering the training set volumes on the reference volume via a rigid-body transformation;
vii. Manually segmenting the registered volumes.
[0038] In Figure 4, the main step is that of generating an organ average shape model. The sub-step is that of storing the segmentations as binarized masks in the atlas database.
[0039] In Figure 5, the main step is that of extracting texture features. This is accomplished by:
i) Extracting features as 3-D volumes from a registered volume using Gabor filters;
ii) Extracting sub-volumes from the 3-D feature volumes;
iii) Training kernel-based support vector machines (KSVMs) on the sub-volumes, and storing the trained KSVMs in the atlas database;
iv) Using the binarized masks of the manually segmented kidneys to generate an organ average shape model, and storing the average shape model in the atlas database.
[0040] Referring to the steps illustrated in Figure 3, the process begins with the selection of a training set of 3-D ultrasound images (step 200). Each training ultrasound volume, V_i, where i ∈ {1, 2, ..., N_Training}, should contain a representative organ, such as a kidney. A graphical user interface (GUI) designed to allow a user to select and load ultrasound volumes from the computer storage is used in one implementation of the invention.
[0041] After the training volumes have been selected, speckle noise of each training ultrasound volume is reduced using an anisotropic diffusion filter (step 210).
Partial differential equations (PDEs) are employed to spatially control the smoothing power based on the distance of voxels to object edges. Voxels located away from object edges are strongly smoothed, while voxels close to object edges are smoothed only in the direction parallel to the edge. The anisotropic diffusion filter operates based on the following PDE,

Equation (1): ∂V_ds(x,y,z;t)/∂t = div[ c(q(x,y,z;t)) ∇V_ds(x,y,z;t) ],

where ∇ and div are the gradient and divergence operators, respectively, c(.) is the diffusion coefficient, and q(x,y,z;t) is the instantaneous coefficient of variation, which is itself a function of the magnitude of the volume gradient and intensifies edges. V_ds is the despeckled volume and is initialized as V_ds(x,y,z;t=0) = V(x,y,z). The expression c(q(x,y,z;t)) is defined using the following equations:

Equation (2): c(q(x,y,z;t)) = 1 / ( 1 + [ q^2(x,y,z;t) − q_0^2(t) ] / [ q_0^2(t) (1 + q_0^2(t)) ] ),

and,

Equation (3): q(x,y,z;t) = sqrt( [ (1/2) (|∇V_ds| / V_ds)^2 − (1/16) (∇^2 V_ds / V_ds)^2 ] / [ 1 + (1/4) (∇^2 V_ds / V_ds) ]^2 ),

where q_0(t) ≈ q_0 exp[−ρt] is the speckle scale function with decay constant ρ, and q_0 is the speckle coefficient of variation. q(x,y,z;t) operates as an edge detector. Details regarding the discretization can be found in reference [9] identified below.
[0042] In step 220, the intensity inhomogeneity of the input ultrasound volume is removed. This may also be referred to as "bias correction". The intensity inhomogeneity is modeled as a multiplicative bias field,

Equation (4): V_ds(x,y,z) = V_bc(x,y,z) f(x,y,z),

where f(x,y,z) is the bias field, and V_bc(x,y,z) is the bias-corrected volume.

[0043] To linearize equation (4), a logarithmic transformation is applied,

Equation (5): log V_ds(x,y,z) = log V_bc(x,y,z) + log f(x,y,z).

Now, let p_ds, p_bc, and p_f be the probability density functions of log V_ds, log V_bc, and log f, respectively. Because the log-volume is the sum of two independent terms, the probability density functions are related by a convolution,

Equation (6): p_ds = p_f * p_bc.

[0044] p_ds is calculated at each voxel based on the histogram of a sub-volume centered at the voxel. p_bc is unknown, and p_f is simplified to be a Gaussian distribution with zero mean and an unknown variance. By taking the Fourier transform of the probability density functions, P_bc, the Fourier transform of p_bc, is approximated by the regularized deconvolution,

Equation (7): P_bc = ( P_f* / ( |P_f|^2 + Z^2 ) ) P_ds,

where P_f and P_ds are the Fourier transforms of p_f and p_ds, respectively, P_f* is the complex conjugate of P_f, and Z^2 is a small regularization constant. Then, an estimation of log f is achieved as,

Equation (8): f_est(x,y,z) = log V_ds(x,y,z) − E[ log V_bc | log V_ds(x,y,z) ],

where the posterior expectation is,

Equation (9): E[ log V_bc | log V_ds(x,y,z) ] = ( ∫ u p_f( log V_ds(x,y,z) − u ) p_bc(u) du ) / ( ∫ p_f( log V_ds(x,y,z) − u ) p_bc(u) du ),

with u the integration variable running over values of log V_bc.

[0045] After estimating f_est(x,y,z) at all voxels, V_bc(x,y,z) is recovered by inverting equation (5): V_bc(x,y,z) = exp( log V_ds(x,y,z) − f_est(x,y,z) ).
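The multiplicative bias model of Equations (4)-(5) can be illustrated with a deliberately simplified sketch: instead of the histogram-based posterior estimate of Equations (6)-(9), the slowly varying log-bias field is approximated here by a Gaussian low-pass filter (implemented in the frequency domain with periodic boundaries). That substitution, and the function names, are assumptions made only to keep the example short; the underlying model V_ds = V_bc · f is the one described above.

```python
import numpy as np

def gaussian_lowpass(v, sigma):
    """Frequency-domain Gaussian low-pass filter (periodic boundaries)."""
    spec = np.fft.fftn(v).astype(complex)
    for ax, n in enumerate(v.shape):
        freq = np.fft.fftfreq(n)
        shape = [1] * v.ndim
        shape[ax] = n
        spec *= np.exp(-2.0 * (np.pi * sigma * freq) ** 2).reshape(shape)
    return np.fft.ifftn(spec).real

def correct_bias(v, sigma=4.0, eps=1e-6):
    """Rough multiplicative bias-field correction on a positive volume."""
    log_v = np.log(v + eps)
    f_est = gaussian_lowpass(log_v, sigma)   # slowly varying log-bias estimate
    f_est -= f_est.mean()                    # preserve overall brightness
    return np.exp(log_v - f_est)             # V_bc = exp(log V_ds - f_est)
```

Applied to a volume whose intensity is modulated by a smooth multiplicative field, the corrected output is noticeably flatter than the input.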
[0046] The training ultrasound volumes, V_enh^i = V_bc^i, where i ∈ {1, ..., N_Training}, are thereby enhanced, and a reference volume is selected (step 230). It should be noted that, to allow a user to perform the selection, a suitable user interface may be used. Preferably, the best quality ultrasound volume is selected as the reference volume.
[0047] Once the reference volume has been selected, all other training ultrasound volumes are then registered onto the reference volume using a landmark-based rigid-body registration (step 240). This preserves the
variability in a given organ shape among individuals while removing training ultrasound volume
misalignments. For optimal results, a GUI designed to allow the operator to manually select landmarks can be used. Such a GUI may provide the ability to select corresponding pair-voxels on the reference volume, X_ref^k, and on each training ultrasound volume, X_src^k.
[0048] After selecting landmarks for each training ultrasound volume, an affine transformation is fitted on the selected landmarks, with the aim of registering the training ultrasound volume on the reference volume. The registered volume is V_reg^i. For example, an affine transformation with 7 parameters may be applied, such as 3 translations, 3 rotations and 1 scale, as θ = [m_x, m_y, m_z, θ_x, θ_y, θ_z, s]. The affine transformation matrix is defined as follows,

Equation (10): X_src^k = A X_ref^k,

where, in homogeneous coordinates,

Equation (11): A = T(m_x, m_y, m_z) R_x(θ_x) R_y(θ_y) R_z(θ_z) S(s),

where R_x, R_y, and R_z are the rotation matrices of the x-, y- and z-axes, respectively, T is the translation matrix, and S(s) is the isotropic scaling matrix. Matrix A is defined to minimize the following error,

Equation (12): E = Σ_k || X_src^k − A X_ref^k ||^2.
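The least-squares minimization of Equation (12) over a 7-parameter transform (3 translations, 3 rotations, 1 scale) has a well-known closed form (Umeyama's method), sketched below. The function name and the (N, 3) landmark layout are illustrative assumptions.

```python
import numpy as np

def fit_similarity(ref_pts, src_pts):
    """Closed-form least-squares fit of scale s, rotation R, translation t
    such that src ~= s * R @ ref + t, for (N, 3) corresponding landmarks."""
    mu_r, mu_s = ref_pts.mean(0), src_pts.mean(0)
    rc, sc = ref_pts - mu_r, src_pts - mu_s          # centered point sets
    cov = sc.T @ rc / len(ref_pts)                   # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    sgn = np.sign(np.linalg.det(U @ Vt))             # guard against reflection
    S = np.diag([1.0, 1.0, sgn])
    R = U @ S @ Vt                                   # best rotation
    scale = np.trace(np.diag(D) @ S) / rc.var(0).sum()  # isotropic scale
    t = mu_s - scale * R @ mu_r                      # translation
    return scale, R, t
```

With noiseless corresponding landmarks, the parameters of the generating transform are recovered exactly.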
[0049] In the next step (step 250), the organs, such as
kidneys, in the registered training volumes are manually segmented. The results are stored as binarized masks, B_n, where n ∈ {1, ..., N_Training}. In these binarized masks, the image consists of 1s inside the organ and 0s outside of the organ, as shown in one section of Figure 4. These results are stored in the organ atlas database (step 260).
[0050] After pre-processing and manually segmenting the training volumes, a kidney average shape model is generated to initialize kidney segmentation (Figure 4). This may result in a faster and more accurate segmentation. The organ's average shape model is generated using the manual segmentations that are already aligned, as described above and illustrated in Figure 3. As such, the voxel-wise average is calculated on the segmentations using,

Equation (13): KASM(x,y,z) = ( Σ_{n=1}^{N_Training} B_n(x,y,z) ) / N_Training,

where KASM is the kidney average shape model. KASM is then saved in the kidney atlas database (Figure 4).
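Equation (13) is simply a voxel-wise mean over the aligned binary masks, as in this short sketch (the function name is illustrative):

```python
import numpy as np

def average_shape_model(masks):
    """Voxel-wise average of aligned 0/1 segmentation masks (Equation 13).

    `masks` is a sequence of equally shaped binary volumes; the result is
    the average shape model with values in [0, 1], where a voxel's value is
    the fraction of training subjects whose organ covers that voxel."""
    return np.mean(np.stack(masks, axis=0), axis=0)
```

Voxels where all training masks agree are 0 or 1; disputed voxels take intermediate values, which is what makes the model useful for initializing a deformable segmentation.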
[0051] Once the average shape model has been developed,
texture features from the registered ultrasound volumes can be extracted. The texture features are joined with the manual segmentations and these features are then used to train KSVM classifiers to automatically segment the kidney portion of an input ultrasound volume.
[0052] Gabor filters are highly effective for extracting texture features in image analysis applications. 3-D Gabor wavelets, sinusoidal waves modulated by 3-D Gaussian functions, may be used to extract texture features from ultrasound volumes as follows,

Equation (14): G_{f,θ,φ}(x,y,z) = S exp( −( x'^2/(2σ_x^2) + y'^2/(2σ_y^2) + z'^2/(2σ_z^2) ) ) exp( j2π(ux + vy + wz) ),

and,

Equation (15): u = f sinφ cosθ, v = f sinφ sinθ, w = f cosφ, [x', y', z']^T = R [x, y, z]^T,

where S and f are a normalization scale and the amplitude of the complex sinusoids, respectively. σ_x, σ_y, and σ_z are the Gaussian envelope widths in the x-, y-, and z-axes, respectively. R is the 3-D rotation matrix with 3 parameters: θ_x, θ_y, and θ_z.
[0053] A simplification of the 3-D Gabor wavelet equation may be applied: σ_x = σ_y = σ_z = σ. Each extracted feature from a registered training ultrasound volume, V_reg^i, has a specific set of parameters [σ, θ_x, θ_y, θ_z], and is named F_{σ,θx,θy,θz}. Any number of values can be selected for σ, θ_x, θ_y, or θ_z. For example, if θ_x, θ_y, θ_z ∈ {0°, 45°, 90°, 135°} and σ ∈ {0.4, 0.6}, the result would be 2 × 4^3 = 128 features, F_i = {F_i^l | l ∈ [1, 128]}.
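A single 3-D Gabor kernel in the spirit of Equations (14)-(15) can be sketched as below. As simplifying assumptions of this sketch, the envelope is isotropic (σ_x = σ_y = σ_z = σ, per paragraph [0053]) and the carrier direction is parameterized directly by the spherical angles (θ, φ) rather than by the full rotation matrix R; the function name is illustrative.

```python
import numpy as np

def gabor_3d(size, sigma, f, theta, phi, scale=1.0):
    """Complex 3-D Gabor kernel: Gaussian envelope times a plane-wave
    carrier of spatial frequency f in direction (theta, phi)."""
    r = np.arange(size) - size // 2                 # centered grid, odd size
    z, y, x = np.meshgrid(r, r, r, indexing="ij")
    u = f * np.sin(phi) * np.cos(theta)             # carrier (Equation 15)
    v = f * np.sin(phi) * np.sin(theta)
    w = f * np.cos(phi)
    envelope = np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))
    carrier = np.exp(2j * np.pi * (u * x + v * y + w * z))
    return scale * envelope * carrier
```

A feature volume F_{σ,θx,θy,θz} would then be obtained by convolving the ultrasound volume with each kernel of the 128-filter bank and taking, for example, the response magnitude.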
[0054] Texture information of a given organ, such as a kidney, changes substantially throughout an ultrasound volume. Therefore, a single classifier may not be effective for classifying kidney and non-kidney tissues in ultrasound volumes. To overcome this problem, texture information may be locally classified by dividing the entire volume into a set of sub-volumes. In such a case, each volumetric feature, F_i^l, may be divided into K sub-volumes, F_i^{l,k}, where k ∈ {1, ..., K}. Furthermore, the sub-volumes may be divided unevenly, or evenly, such as with 50 percent overlap. For each pair of sub-volume index k and feature index l, a sub-set of N_Training related sub-volumes, {F_1^{l,k}, ..., F_{N_Training}^{l,k}}, may be collected.
[0055] Figure 5 schematically shows the remaining stages of generating the kidney atlas database. As can be seen, the various stages are applied in parallel to multiple datasets once the relevant texture features have been extracted using the Gabor wavelets. The sub-volumes are extracted and then the related sub-volumes in the sub-sets are converted into vectors in a process called vectorization. In vectorization, the vectors in each sub-set are vertically concatenated to create a single vector for each sub-set, F^{l,k}. Then, the vectors related to each sub-volume are horizontally concatenated to form a sub-volume feature matrix, M^k = [F^{1,k}, ..., F^{L,k}]. Following the previous example, the matrix would be M^k = [F^{1,k}, ..., F^{128,k}].

[0056] A KSVM classifier may then be used to classify voxels into kidney and non-kidney tissues. Rather than minimizing only the classification error, KSVM provides a maximum margin distance between the two classes to obtain maximum generalization ability. In addition, using a kernel-based operation, KSVM is able to solve nonlinear classification problems by mapping samples into a higher dimensional space.
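The vectorization described in paragraphs [0054]-[0055] can be sketched for one volume as below. As a simplification of this sketch, sub-volumes are taken as non-overlapping splits along one axis (the description above also allows uneven or 50-percent-overlapping splits), and the function name is illustrative.

```python
import numpy as np

def subvolume_feature_matrix(feature_volumes, k_index, n_splits):
    """Build the feature matrix M^k for sub-volume k of one volume.

    `feature_volumes` holds the L feature volumes (one per Gabor filter).
    Each is split into `n_splits` sub-volumes along axis 0, sub-volume
    `k_index` is vectorized, and the L vectors are horizontally
    concatenated: one row per voxel, one column per feature."""
    cols = []
    for F in feature_volumes:
        sub = np.array_split(F, n_splits, axis=0)[k_index]
        cols.append(sub.reshape(-1))          # vectorization
    return np.stack(cols, axis=1)             # shape (n_voxels, L)
```

Across training volumes, the per-volume matrices for the same k would then be stacked vertically to form the M^k used for training.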
[0057] In one embodiment of the present invention, the Gaussian kernel function may be selected for the KSVMs. To train the spatially distributed KSVMs, the manual segmentations may be used to specify the class of each row in each sub-volume feature matrix, M^k. Thus, sub-volumes, B_i^k, are extracted from each B_i, where i ∈ {1, ..., N_Training} and k ∈ {1, ..., K}. Each sub-volume, B_i^k, is then vectorized, and the sub-volumes related to the same k are concatenated, creating B^k for all k ∈ {1, ..., K}. Following these calculations, M^k and B^k may both be used to train KSVM_k. The trained KSVM classifiers, {KSVM_1, ..., KSVM_K}, may then be stored in the atlas database.
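The per-sub-volume training of paragraph [0057] can be sketched with scikit-learn's `SVC` as a stand-in for the KSVMs (RBF kernel = Gaussian kernel). The function name, the choice of library, and the fixed `gamma` value are assumptions of this sketch.

```python
import numpy as np
from sklearn.svm import SVC

def train_spatial_ksvms(feature_matrices, label_vectors, gamma=0.5):
    """Train one Gaussian-kernel SVM per sub-volume.

    feature_matrices[k] is M^k (rows = voxels, columns = Gabor features)
    and label_vectors[k] is B^k (1 = organ voxel, 0 = background), both
    accumulated over all training volumes. The returned list is the set
    {KSVM_1, ..., KSVM_K} that the atlas database stores."""
    return [SVC(kernel="rbf", gamma=gamma).fit(M, b)
            for M, b in zip(feature_matrices, label_vectors)]
```

At segmentation time, each KSVM_k is applied only to the rows of its own sub-volume, which is what makes the classifiers "spatially distributed".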
[0058] The above described atlas database is applied with the automated trauma diagnosis system to segment the organ of interest and diagnose trauma (Figure 7). The segmenting of organs in the input 3-D ultrasound images involves a number of steps, as follows:
i. Reducing speckle noise in the input 3-D ultrasound volume and removing the intensity inhomogeneity field using a bias correction approach;
ii. Registering the kidney in the input 3-D ultrasound image on the kidney in the reference volume (stored in the atlas database), using a rigid transformation;
iii. Extracting features as 3-D volumes from the registered volume using Gabor filters;
iv. Extracting sub-volumes from the 3-D feature volumes;
v. Using the trained KSVMs in the atlas database to classify voxels into kidney (ones) and non-kidney (zeros) voxels;
vi. Initializing a deformable model with the kidney average shape model (stored in the atlas database);
vii. Concatenating the classified sub-volumes to form a binarized mask;
viii. Initializing a level-set function by the generated kidney average shape model;
ix. Deforming the level-set function via a rigid transformation to fit the level-set function inside the binarized mask;
x. Applying level-set propagation to accurately segment kidneys;
xi. Fitting the deformable model on the classification result of step v, providing the final segmented kidney.
[0059] Steps iii to vii above are illustrated schematically in Figure 6.
[0060] To segment the organ, first, the input ultrasound volume, V_in, is enhanced by reducing speckle noise. This may be effectuated by an anisotropic diffusion filter. Bias correction is then performed to remove the intensity inhomogeneity. The enhanced volume is denoted V_enh^in.

[0061] Second, the misalignment of the kidney in the input ultrasound volume with respect to the kidney in the reference volume is removed. This may be performed by applying an automated rigid-body registration. Since the trained classifiers are spatially distributed, a misalignment in the kidney position results in a failure to correctly detect voxels pertinent to the kidney tissue. A landmark-based registration cannot be used for this purpose, as the kidney segmentation should preferably be automated. Thus, an optimal direction strategy must be applied to modify the rigid-body registration parameters.
[0062] The rigid-body registration uses an affine transformation with several parameters. For example, the affine transformation may include 7 parameters, such as 3 translations, 3 rotations, and 1 scaling. The parameters vector, θ^iter, at each iteration would then consist of θ_n ∈ {t_x, t_y, t_z, θ_x, θ_y, θ_z, s}, where n ∈ {1, ..., 7}. The parameters vector is initialized with θ^0 = [0, 0, 0, 0, 0, 0, 1]^T and is iteratively modified to maximize the fitness of the reference volume, V_ref, with the enhanced input ultrasound volume, V_enh^in. At each iteration, each parameter, θ_n, is modified as follows,

Equation (16): θ_n^{iter+1} = θ_n^{iter} − α δ_n [ E(θ^{iter} + δ_n e_n) − E(θ^{iter} − δ_n e_n) ], with E(θ) = || V_ref − T_θ(V_enh^in) ||_1,

where ||·||_1 is the norm-1, T_θ applies the affine transformation with parameters θ, e_n is the unit vector of the nth parameter, δ_n is the changing step of the nth parameter of the affine transformation, and α adjusts the updating weight of the parameters. The iterative process may continue until there are no further improvements. The registered input volume is referred to as V_reg^in, and the calculated registration parameters are saved as θ_reg.
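The iterative refinement of paragraph [0062] (per-parameter steps δ_n, weight α, norm-1 fitness, stop when no move helps) can be sketched as a generic coordinate-wise search. The function signature and the abstract `error` callback are assumptions of this sketch; in the described system the error would be the norm-1 mismatch between the reference volume and the transformed input volume.

```python
import numpy as np

def refine_parameters(error, theta0, deltas, alpha=1.0, max_iter=50):
    """Coordinate-wise descent on error(theta).

    Each parameter theta_n is nudged by +/- alpha * deltas[n]; a move is
    kept only if it lowers the error, and the search stops when a full
    sweep produces no improvement (the "no further improvement" stop)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        improved = False
        for n in range(len(theta)):
            for sign in (+1.0, -1.0):
                trial = theta.copy()
                trial[n] += alpha * sign * deltas[n]
                if error(trial) < error(theta):
                    theta = trial
                    improved = True
        if not improved:
            return theta
    return theta
```

On a separable convex surrogate the search walks each coordinate to its optimum in steps of δ_n.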
[0063] 3-D Gabor wavelets may then be used to extract features from V_reg^in, for example {F_reg^1, ..., F_reg^128}. Sub-volumes may then be extracted from the extracted features, for example, {F_reg^{l,k}}, where k ∈ {1, ..., K} and l ∈ {1, ..., 128}.

[0064] The extracted sub-volumes may then be vectorized and concatenated for each sub-volume to generate the features matrix, M_reg^k.
[0065] Next, the voxels in each sub-volume are classified into kidney and non-kidney tissues using the related trained spatially distributed classifier, KSVM_k. The feature matrix M_reg^k is the input to KSVM_k. The output is a classification result as a vector of zeros and ones. The vector is reshaped to form a 3-D binarized sub-volume. All binarized sub-volumes may then be combined to generate the 3-D binarized mask, B_reg, as shown in Figure 6.
[0066] Once the binarized mask has been generated, a deformable model with the average kidney shape model, stored in the kidney atlas database, must be initialized. A combination of global and local deformations may be applied to finely segment kidneys. An implicit representation, Φ ∈ R, may be used to define the deformable model. For example, Φ(x,y,z;t) > 0 specifies voxels inside the segmentation region, Φ(x,y,z;t) = 0 specifies voxels on the segmentation boundary, and Φ(x,y,z;t) < 0 specifies voxels outside the segmentation region. In this example, Φ(x,y,z;t=0) = 2 × (KASM(x,y,z) − 0.5) must be set for initialization. An affine deformation according to equation (16) must then be applied to fit Φ(x,y,z;t) = 0 on the binarized mask, B_reg.
[0067] The calculated affine deformation used to globally fit Φ(x,y,z;t) = 0 on the binarized mask, B_reg, is represented by 7 parameters as θ'_reg = [t_x, t_y, t_z, θ_x, θ_y, θ_z, s]. The sum of θ_reg and θ'_reg is the calculated misalignment of the kidney image with respect to the reference kidney shape, θ_misalignment = θ_reg + θ'_reg.
[0068] The affine deformation is a global deformable model used to align the deformable model on the organ of interest, such as a kidney (highlighted in the binarized mask). Region-based level-set propagation must then be applied as a local deformable model to finely segment the kidney. Considering the 3-D image domain to be Ω, the average intensity level of the kidney region to be c_1, and that of the non-kidney region to be c_0, all calculated based on V_reg^in and B_reg, the following regularized function is defined as,

Equation (17): J(Φ, c_1, c_0) = μ ∫_Ω δ(Φ) |∇Φ| dΩ + ν ∫_Ω H(Φ) dΩ + λ_1 ∫_Ω |V_reg^in − c_1|^2 H(Φ) dΩ + λ_2 ∫_Ω |V_reg^in − c_0|^2 (1 − H(Φ)) dΩ,

where ∇ is the gradient operator, δ and H are the Dirac delta and Heaviside functions, respectively, and μ, ν, λ_1 and λ_2 are regulation parameters.
[0069] The Euler-Lagrange equation may be used to minimize J with respect to Φ, and the following propagation equation may be obtained,

Equation (18): ∂Φ/∂t = δ(Φ) [ μ div( ∇Φ / |∇Φ| ) − ν − λ_1 (V_reg^in − c_1)^2 + λ_2 (V_reg^in − c_0)^2 ].
[0070] For organ segmentation, the above equation must be iteratively repeated until convergence is achieved. The segmented kidney in the registered volume is defined as V_seg^reg = (Φ > 0). Finally, an inverse affine transformation may be applied to generate the kidney segmentation result in the original input ultrasound volume as V_seg = A^{-1}(V_seg^reg).
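One explicit update of a region-based propagation in the style of Equations (17)-(18) (cf. Chan and Vese [17]) can be sketched as below. The smoothed Dirac delta, the explicit time step, and the on-the-fly estimation of c_1 and c_0 from the current segmentation are assumptions of this sketch; the code is dimension-agnostic, so a 2-D slice works the same way as a 3-D volume.

```python
import numpy as np

def level_set_step(phi, vol, mu=0.2, nu=0.0, lam1=1.0, lam2=1.0,
                   dt=0.5, eps=1.0, tiny=1e-8):
    """One explicit update of the region-based level-set propagation."""
    inside = phi > 0
    c1 = vol[inside].mean() if inside.any() else 0.0      # organ mean
    c0 = vol[~inside].mean() if (~inside).any() else 0.0  # background mean
    g = np.gradient(phi)
    mag = np.sqrt(sum(gi ** 2 for gi in g)) + tiny
    curvature = sum(np.gradient(gi / mag, axis=i) for i, gi in enumerate(g))
    delta = (eps / np.pi) / (eps ** 2 + phi ** 2)          # smoothed Dirac
    force = mu * curvature - nu - lam1 * (vol - c1) ** 2 + lam2 * (vol - c0) ** 2
    return phi + dt * delta * force
```

Voxels whose intensity matches the inside mean c_1 are pushed into the segmentation, voxels matching c_0 are pushed out, and the curvature term keeps the boundary smooth.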
[0071] To adjust for misalignments of the probe,
misalignments of the kidney in the resulting images have to be related to misalignments of the probe. As noted above, the navigation system will continuously provide the operator with commands to adjust the positioning and/or orientation of the probe to arrive at the desired view of the specific internal area of the patient. Figure 8 schematically illustrates the navigation window that provides the operator with instructions in one embodiment of the invention. As well, Figure 8 also shows that, in this embodiment, the probe is provided with a marker to indicate a specific probe orientation (e.g. a "top" for an initial probe placement).

[0072] To correct for misalignments, it should be noted that the calculated registration parameter θ is related to the probe misalignment on the patient body. The automated probe navigation system of the present invention may begin by asking the operator to compensate for improper orientation of the ultrasound probe from the initial placement, removing θ_x by rotating the ultrasound probe. The coordinate system used for the following explanation is illustrated in Figure 9. After removing θ_x, corrective probe
translation commands can be sent to the operator.
[0073] From the image of the kidney, if the x-axis is misaligned, it is due to the depth of the kidney inside the patient's body rather than to probe misalignment. Therefore, only t_y and t_z need be considered. t_y represents the misalignment of the ultrasound probe along the mid-axillary line in the cephalad direction, and t_z represents the misalignment perpendicular to the mid-axillary line. Translation commands are iteratively sent to the operator until the desired view of Morison's pouch is attained.
[0074] The kidney image misalignment, θ_misalignment, is obtained in two steps: (1) aligning the enhanced input 3-D image on the reference 3-D image, and (2) aligning the binarized 3-D image on the expected alignment of the kidney shape model in the generated atlas database. Step (1) provides a rough alignment of the kidney shape image on the expected kidney alignment, while step (2) performs a fine alignment of the input kidney image onto the reference kidney shape. It should be noted that step (1) is crucial for step (2) to work correctly. If step (1) is not properly done, the KSVM classifiers would not be able to properly generate the binarized volume.
[0075] For clarity, it should be noted that the probe navigation system always tracks the calculated kidney image misalignment and stops probe navigation when the misalignment falls within a predetermined acceptable range. The kidney image misalignment, θ_misalignment = [t_x, t_y, t_z, θ_x, θ_y, θ_z, s], is recalculated each time the operator moves and/or rotates the ultrasound probe. If the translation and orientation parameters, including t_x, t_y, t_z, θ_x, θ_y and θ_z, are smaller than their related threshold values, the navigation system concludes that the probe is aligned within an acceptable range of the reference kidney alignment and the navigation process stops. The threshold values for the translation and orientation parameters are set as [T_tx = 10 px, T_ty = 10 px, T_tz = 10 px, T_θx = 5°, T_θy = 5°, T_θz = 5°].
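The stopping test of paragraph [0075] reduces to a simple threshold check on the misalignment vector, as in this sketch (the function name and argument layout are illustrative; scale s is carried but, as described, not thresholded):

```python
def probe_aligned(misalignment, t_px=10.0, t_deg=5.0):
    """Return True when probe navigation may stop: every translation
    within t_px pixels and every rotation within t_deg degrees of the
    reference kidney alignment."""
    tx, ty, tz, ax, ay, az, s = misalignment
    return (abs(tx) < t_px and abs(ty) < t_px and abs(tz) < t_px and
            abs(ax) < t_deg and abs(ay) < t_deg and abs(az) < t_deg)
```

While this returns False, the navigation loop keeps issuing corrective translation and rotation commands to the operator.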
[0076] In another aspect of the invention, there is provided a system for automated organ detection using
ultrasound images. The system includes an ultrasound imaging sub-system which has a probe for use in gathering the ultrasound images. As well, the system includes at least one processor for processing the gathered images and for analyzing these images. Finally, the system includes an organ atlas database that has reference images of the organ, trained classifiers which determine if portions of the image are organ tissue or not, as well as an average organ shape model. These reference images, classifiers, and average organ shape model are used by the system to detect the presence (partial or full) of the organ in the gathered images. The database can be stored in any suitable data storage device or system such as a hard drive, solid state drive, or an online storage system.
[0077] Various aspects of the invention may be implemented as a built-in package in a 3D ultrasound machine or as an add-on package installed on a personal computer connected to a 3D ultrasound imaging device. As an add-on package, any 3D ultrasound machine which supports the ultrasound research interface (URI) or any other protocol exporting ultrasound raw data may be used with the invention. Examples of suitable ultrasound imaging devices for use with the various aspects of the invention include the Siemens SONOLINE Antares ultrasound system and the Hitachi HiVision 5500.
[0078] To better understand the invention, the following
references may be consulted. These references are hereby incorporated by reference in their entirety.
[1] S. Stergiopoulos , Advanced Signal Processing Handbook: Theory and Implementation for Radar, Sonar, and Medical Imaging Real-Time Systems, CRC Press, Inc, 2009.
[2] S. Stergiopoulos and P. Shek, "Portable 4D Ultrasound Diagnostic Imaging System," in IEEE UFFCS, 2011.
[3] S. Stergiopoulos and A. Dhanantwari, "High Resolution 3D Ultrasound Imaging System Deploying a Multi-Dimensional Array of Sensors and Method for Multi-Dimensional Beamforming Sensor Signals". US Patent 6,719,696, 13 April 2004. [4] S. Stergiopoulos and A. Dhanantwari, "High
Resolution 3D Ultrasound Imaging System Deploying a Multi-Dimensional Array of Sensors and Method for Multi-Dimensional Beamforming Sensor Signals". US Patent 6,482,160, 19 November 2002.
[5] G. M. Treece, "Volume measurement and surface visualisation in sequential freehand 3D ultrasound," Ph.D. Thesis, Cambridge University, 2000.
[6] Y. Yu and S. Acton, "Speckle reducing anisotropic diffusion, " IEEE Transactions on Image Processing, vol. 11, no. 11, pp. 1260-1270, 2002.
[7] A. Belaid, D. Boukerroui, Y. Maingourd and J.-F. Lerallut, "Implicit active contours for ultrasound images segmentation driven by phase information and local maximum likelihood," in IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2011.
[8] M. Martin-Fernandez and C. Alberola-Lopez , "An approach for contour detection of human kidneys from ultrasound images using Markov random fields and active contours," Medical image analysis , vol. 9, no. 1, pp. 1-23, 2005.
[9] R. Prevost, B. Mory, J. Correas, L. D. Cohen and R. Ardon, "Kidney detection and real-time segmentation in 3d contrast-enhanced ultrasound images.," in 9th IEEE International Symposium on Biomedical Imaging (ISBI), 2012.
[10] S. Stergiopoulos, P. Shek, K. Plataniotis and M. Marsousi, "Computer Aided Diagnosis for Detecting Abdominal Bleeding with 3D Ultrasound Imaging". US Patent Application 14/159744, 21 January 2014.

[11] H. Shokoohi, K. S. Boniface and A. Siegel, "Horizontal subxiphoid landmark optimizes probe placement during the Focused Assessment with Sonography for Trauma ultrasound exam," European Journal of Emergency Medicine, pp. 1-5, 2011.
[12] J. G. Sled, A. P. Zijdenbos and A. C. Evans, "A Nonparametric Method for Automatic Correction of Intensity Nonuniformity in MRI Data," IEEE Transactions on Medical Imaging, vol. 17, no. 1, pp. 87-97, 1998.
[13] B. S. Manjunath and W. Y. Ma, "Texture Features for Browsing and Retrieval of Image Data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 8, pp. 837-842, 1996.
[14] L. Shen and L. Bai, "3D Gabor wavelets for evaluating SPM normalization algorithm, " Medical Image Analysis, vol. 12, pp. 375-383, 2008.
[15] Y. Zhan and D. Shen, "Deformable Segmentation of 3-D Ultrasound Prostate Images Using Statistical Texture Matching Method," IEEE Transactions on Medical Imaging, vol. 25, no. 3, pp. 256-272, 2006.
[16] C. J. C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition, " Data Mining and Knowledge Discovery, vol. 2, pp. 121-167, 1998.
[17] T. F. Chan and L. A. Vese, "Active Contours Without Edges," IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266-277, 2001.

[0079] The method steps of the invention may be embodied in sets of executable machine code stored in a variety of formats such as object code or source code. Such code is described generically herein as programming code, or a computer program for simplification. Clearly, the executable machine code may be integrated with the code of other programs, implemented as subroutines, by external program calls or by other techniques as known in the art.
[0080] The embodiments of the invention may be executed by a computer processor or similar device programmed in the manner of method steps, or may be executed by an electronic system which is provided with means for executing these steps. Similarly, electronic memory means such as computer diskettes, CD-ROMs, Random Access Memory (RAM), Read Only Memory (ROM) or similar computer software storage media known in the art may be programmed to execute such method steps. As well, electronic signals representing these method steps may also be transmitted via a communication network.
[0081] Embodiments of the invention may be implemented in any conventional computer programming language. For example, preferred embodiments may be implemented in a procedural programming language (e.g. "C") or an object-oriented language (e.g. "C++"). Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components.
[0082] Embodiments can be implemented as a computer program product for use with a computer system. Such
implementations may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD- ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium. The medium may be either a tangible medium (e.g., optical or electrical
communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques). The series of computer instructions embodies all or part of the functionality previously described herein. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be
transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink-wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server over the network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention may be implemented as entirely hardware, or entirely software (e.g., a computer program product). Embodiments of the invention may also be implemented using sequential or parallelized programming methods. Parallelized implementations of the methods and processes of the invention may be used with multi-core processors, such as Intel Xeon processors or Intel Extreme processors, or they can be implemented using GPU processors based on the CUDA language. In one variant, the generated software package may be used either as a solution built into a 3D ultrasound imaging device or as installable or executable add-on software on a personal computer (including a desktop workstation or laptop) connected to a 3D ultrasound imaging device supporting the ultrasound research interface or any other protocol to transfer ultrasound raw data.

[0083] A person understanding this invention may now conceive of alternative structures and embodiments or
variations of the above all of which are intended to fall within the scope of the invention as defined in the claims that follow.

Claims

What is claimed is:
1. A method for detecting a presence of trauma in a patient, the method comprising:
a) providing an operator with an indication of a location on said patient for an initial placement of a probe;
b) obtaining images of an internal area of said patient using said probe;
c) determining if at least one specific organ is present in images obtained in step b);
d) in the event said at least one specific organ is not present in said images, repeating steps a) - c) until said at least one specific organ is present in said images;
e) determining an alignment of said at least one specific organ in said images;
f) determining if a desired view of said internal area is found in said images;
g) in the event said desired view is not found in said images, relating said alignment of said at least one specific organ to an alignment of said probe;
h) determining corrective instructions for probe placement based on results of step g), said corrective instructions being operative to adjust a view of said internal area to thereby provide said operator with a view with a correct alignment of said at least one specific organ;
i) sending said corrective instructions to said operator;
j) obtaining images of said internal area of said patient using said probe;
k) determining if at least one specific organ is present in images obtained in step j);
l) repeating steps e) - k) until said desired view is found in said images.
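The iterative guidance loop of claim 1 can be sketched as follows. Every callable here is a hypothetical stand-in (acquisition, organ detection, alignment estimation, view checking, operator messaging); the sketch shows only the control flow of steps a) through l), not the claimed detection methods themselves:

```python
def guide_probe_placement(acquire, detect_organ, organ_alignment,
                          is_desired_view, send_instructions,
                          initial_location, max_iterations=20):
    """Iterate acquisition and operator feedback until the desired view
    is found. All arguments are hypothetical stand-ins: `acquire` returns
    an image, `detect_organ` reports organ presence, `organ_alignment`
    estimates the organ pose, `is_desired_view` checks the target view,
    and `send_instructions` relays guidance to the operator."""
    send_instructions(f"Place probe at {initial_location}")    # step a)
    for _ in range(max_iterations):
        image = acquire()                                      # steps b), j)
        if not detect_organ(image):                            # steps c), k)
            send_instructions(f"Reposition probe near {initial_location}")
            continue                                           # step d)
        alignment = organ_alignment(image)                     # step e)
        if is_desired_view(image):                             # step f)
            return image                                       # step l) exit
        # steps g)-i): relate organ misalignment to probe alignment and
        # send corrective instructions to the operator
        send_instructions(f"Adjust probe: correct alignment {alignment}")
    raise RuntimeError("Desired view not reached within iteration budget")
```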
2. The method according to claim 1, wherein said at least one specific organ comprises a kidney.
3. The method according to claim 1, wherein said probe is an ultrasound probe.
4. The method according to claim 3, wherein said images are ultrasound images.
5. The method according to claim 4, wherein said images are 3-D ultrasound images.
6. The method according to claim 1, wherein step e) includes the steps of:
- reducing noise in said image; and
- correcting for intensity bias in said image.
7. The method according to claim 1, wherein step e) is executed by referring to an organ atlas database, said database containing at least one average organ shape model.
8. The method according to claim 7, wherein said at least one average organ shape model is used in at least one of steps c) and e).
9. The method according to claim 1, wherein step c) includes using a global deformation to detect said organ in said images.
10. The method according to claim 9, wherein said global deformation is based on an affine registration of an organ in said image to a reference image in said database.
11. The method according to claim 1, wherein step c) includes using a local deformation to detect said organ in said images.
12. The method according to claim 11, wherein said local deformation is based on a region-based level-set method for deforming an organ in said image to register with a reference image in said database.
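A region-based level-set evolution of the kind referenced in claim 12 can be sketched with a Chan-Vese-style fitting term. This minimal numpy version is an illustrative assumption: it omits the curvature regularization term and any registration to an atlas reference, and all parameter values are arbitrary choices:

```python
import numpy as np

def region_level_set(image, iterations=200, dt=0.5, eps=1.0):
    """Evolve a level-set function with a Chan-Vese-style region term.

    The zero level set splits the image into inside/outside regions whose
    mean intensities c1 and c2 drive the evolution. Curvature smoothing is
    omitted for brevity, so this is a sketch, not a production segmenter."""
    image = image.astype(float)
    # Initialize phi as a centred disc (positive inside, negative outside).
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    phi = min(h, w) / 4.0 - np.hypot(yy - h / 2.0, xx - w / 2.0)
    for _ in range(iterations):
        inside = phi > 0
        c1 = image[inside].mean() if inside.any() else 0.0
        c2 = image[~inside].mean() if (~inside).any() else 0.0
        # Smoothed Dirac delta concentrates the update near the contour.
        delta = eps / (np.pi * (eps ** 2 + phi ** 2))
        phi += dt * delta * (-(image - c1) ** 2 + (image - c2) ** 2)
    return phi > 0  # boolean segmentation mask
```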
13. The method according to claim 1, wherein said method uses at least one of:
- 3-D Gabor wavelets as 3-D filters for extracting features of said organ in said images;
- localized classification of portions of said images to detect said organ; and
- spatially distributed classifiers to classify portions of said images to detect said organ.
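A 3-D Gabor wavelet of the kind listed in claim 13 is a Gaussian envelope modulated by a complex plane wave. The sketch below (numpy, with illustrative parameter values) builds one such filter and applies it for texture feature extraction; the FFT-based circular convolution is a simplification chosen for brevity:

```python
import numpy as np

def gabor_3d(size=9, sigma=2.0, frequency=0.25, direction=(1.0, 0.0, 0.0)):
    """Build a complex 3-D Gabor filter: Gaussian envelope x plane wave.

    `size` is the cubic kernel side length, `sigma` the envelope width,
    `frequency` the wave frequency (cycles/voxel), and `direction` the
    propagation direction. All defaults are illustrative choices."""
    half = size // 2
    z, y, x = np.mgrid[-half:half + 1, -half:half + 1, -half:half + 1]
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    envelope = np.exp(-(x ** 2 + y ** 2 + z ** 2) / (2.0 * sigma ** 2))
    phase = 2.0 * np.pi * frequency * (d[0] * x + d[1] * y + d[2] * z)
    return envelope * np.exp(1j * phase)

def gabor_features(volume, kernel):
    """Magnitude response of the filter via FFT-based 3-D convolution."""
    from numpy.fft import fftn, ifftn
    pad = [(0, volume.shape[i] - kernel.shape[i]) for i in range(3)]
    k = np.pad(kernel, pad)
    return np.abs(ifftn(fftn(volume) * fftn(k)))
```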
14. A method for compiling a database for use in detecting a presence of a specific organ in an ultrasound image, the method comprising:
a) pre-processing at least one training image;
b) manually segmenting an organ in said at least one training image;
c) generating an average shape model for said organ based on said at least one training image and said organ segmented in step b);
d) extracting texture features from said organ segmented in step b);
e) training kernel based support vector machines to classify portions of images as being organ or non-organ based on said at least one training image;
f) storing said kernel based support vector machines and said organ average shape model in said database.
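The compilation pipeline of claim 14 can be sketched end to end. In this sketch a kernel perceptron plainly stands in for the kernel support vector machine of step e), the texture descriptor of step d) is a toy mean/std pair, and the sketch assumes the manual masks were already registered to a common reference; none of these simplifications are part of the claimed method:

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    # Radial-basis-function kernel matrix between row-vector sets a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_kernel_classifier(features, labels, epochs=20):
    # Kernel perceptron: a deliberately simple stand-in for the kernel
    # SVM of step e). Labels are +1 (organ) / -1 (non-organ).
    alpha = np.zeros(len(features))
    gram = rbf_kernel(features, features)
    for _ in range(epochs):
        for i in range(len(features)):
            if np.sign((alpha * labels) @ gram[:, i]) != labels[i]:
                alpha[i] += 1.0
    return alpha

def compile_database(training_images, manual_masks):
    # a) pre-process: normalize each training image (zero mean, unit std).
    images = [(im - im.mean()) / (im.std() + 1e-8) for im in training_images]
    # c) average shape model: voxel-wise mean of the manual segmentations
    # (assumes the masks were already registered to a common reference).
    shape_model = np.mean(manual_masks, axis=0)
    # d) texture features: mean/std per organ and non-organ region
    # (a toy descriptor standing in for real texture features).
    feats, labels = [], []
    for im, mask in zip(images, manual_masks):
        for region, lab in ((mask > 0.5, 1.0), (mask <= 0.5, -1.0)):
            vals = im[region]
            feats.append([vals.mean(), vals.std()])
            labels.append(lab)
    feats, labels = np.array(feats), np.array(labels)
    # e)-f) train the classifier and store it alongside the shape model.
    alpha = train_kernel_classifier(feats, labels)
    return {"shape_model": shape_model, "alpha": alpha,
            "features": feats, "labels": labels}
```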
15. The method according to claim 14, wherein said database further contains at least one of:
- a reference image of said organ; and
- at least one segmented organ in said at least one training image.
16. The method according to claim 14, wherein step a) includes reducing noise in said at least one training image.
17. The method according to claim 14, wherein step a) includes applying a bias correction to remove intensity inhomogeneity in said at least one training image.
18. The method according to claim 14, further including a step of registering said at least one training image to a reference image of said organ.
19. The method according to claim 18, wherein said step of registering uses a landmark-based rigid-body registration.
20. A system for detecting an organ in at least one ultrasound image of an internal area of a patient, the system comprising:
- a processor for processing said at least one ultrasound image;
- data storage containing a database, said database including:
- a reference image of said organ;
- at least one training image containing at least one segmented organ;
- at least one kernel based support vector machine, said support vector machine being for classifying portions of said at least one image as being organ or non-organ based on said at least one training image; and
- an average shape model for said organ based on said at least one training image.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CA2015/050179 WO2016141449A1 (en) 2015-03-09 2015-03-09 Computer-assisted focused assessment with sonography in trauma

Publications (1)

Publication Number Publication Date
WO2016141449A1 (en) 2016-09-15

Family

ID=56878546

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2015/050179 WO2016141449A1 (en) 2015-03-09 2015-03-09 Computer-assisted focused assessment with sonography in trauma

Country Status (1)

Country Link
WO (1) WO2016141449A1 (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120027272A1 (en) * 2010-07-30 2012-02-02 Akinola Akinyemi Image segmentation
US20120051607A1 (en) * 2010-08-24 2012-03-01 Varian Medical Systems International Ag Method and Apparatus Regarding Iterative Processes as Pertain to Medical Imaging Information
US20150026643A1 (en) * 2011-09-26 2015-01-22 Koninklijke Philips N.V. Medical image system and method
US20130190600A1 (en) * 2012-01-25 2013-07-25 General Electric Company System and Method for Identifying an Optimal Image Frame for Ultrasound Imaging
WO2014063746A1 (en) * 2012-10-26 2014-05-01 Brainlab Ag Matching patient images and images of an anatomical atlas
WO2014097090A1 (en) * 2012-12-21 2014-06-26 Koninklijke Philips N.V. Anatomically intelligent echocardiography for point-of-care
WO2014207642A1 (en) * 2013-06-28 2014-12-31 Koninklijke Philips N.V. Ultrasound acquisition feedback guidance to a target view

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BOSCH ET AL.: "Automatic Segmentation of Echocardiographic Sequences by Active Appearance Motion Models", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 21, no. 11, November 2002 (2002-11-01), pages 1374 - 1383, XP055309908 *
DANTAS ET AL.: "Ultrasound Speckle Reduction Using Modified Gabor Filters", IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL, vol. 54, no. 3, March 2007 (2007-03-01), pages 530 - 538, XP011175819 *
MOHAMED ET AL.: "Region of Interest Identification in Prostate TRUS Images Based on Gabor Filter", 2003 IEEE 46TH MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS, vol. 1, 27 December 2003 (2003-12-27), pages 415 - 419, XP010867483 *
PREVOST ET AL.: "Kidney Detection and Real-Time Segmentation in 3D Contrast-Enhanced Ultrasound Images", 9TH IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI), 2 May 2012 (2012-05-02), pages 1559 - 1562, XP032199329 *
TRAN ET AL.: "Automatic Detection of Lumbar Anatomy in Ultrasound Images of Human Subjects", IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, vol. 57, no. 9, September 2010 (2010-09-01), pages 2248 - 2256, XP011343328 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11464477B2 (en) 2017-03-06 2022-10-11 Thinksono Ltd Blood vessel obstruction diagnosis method, apparatus and system
CN112469340A (en) * 2018-07-26 2021-03-09 皇家飞利浦有限公司 Ultrasound system with artificial neural network for guided liver imaging
EP3811867A1 (en) * 2019-10-21 2021-04-28 Koninklijke Philips N.V. System for image processing
WO2021078701A1 (en) * 2019-10-21 2021-04-29 Koninklijke Philips N.V. System for image processing
CN115910379A (en) * 2023-02-03 2023-04-04 慧影医疗科技(北京)股份有限公司 Kidney stone postoperative curative effect evaluation method, system, equipment and storage medium

Similar Documents

Publication Publication Date Title
Rueckert et al. Automatic tracking of the aorta in cardiovascular MR images using deformable models
Mattes et al. PET-CT image registration in the chest using free-form deformations
US7616818B2 (en) Method of determining the orientation of an image
US8837771B2 (en) Method and system for joint multi-organ segmentation in medical image data using local and global context
US7856130B2 (en) Object recognition system for medical imaging
EP3444781B1 (en) Image processing apparatus and image processing method
Guest et al. Robust point correspondence applied to two-and three-dimensional image registration
US9002078B2 (en) Method and system for shape-constrained aortic valve landmark detection
Badakhshannoory et al. A model-based validation scheme for organ segmentation in CT scan volumes
US8417005B1 (en) Method for automatic three-dimensional segmentation of magnetic resonance images
RU2669680C2 (en) View classification-based model initialisation
Erdt et al. Automatic pancreas segmentation in contrast enhanced CT data using learned spatial anatomy and texture descriptors
Law et al. Efficient implementation for spherical flux computation and its application to vascular segmentation
WO2016141449A1 (en) Computer-assisted focused assessment with sonography in trauma
Ardon et al. Fast kidney detection and segmentation with learned kernel convolution and model deformation in 3D ultrasound images
Foroughi et al. Intra-subject elastic registration of 3D ultrasound images
Fritsch et al. Stimulated cores and their applications in medical imaging
Oguz et al. Cortical correspondence using entropy-based particle systems and local features
Wein et al. Automatic non-linear mapping of pre-procedure CT volumes to 3D ultrasound
Pluim et al. Multiscale approach to mutual information matching
Marsousi et al. Computer-assisted 3-D ultrasound probe placement for emergency healthcare applications
Frantz et al. Development and validation of a multi-step approach to improved detection of 3D point landmarks in tomographic images
Foroughi et al. Elastic registration of 3d ultrasound images
Pirnog Articular cartilage segmentation and tracking in sequential MR images of the knee
Marsousi Automated Kidney Segmentation in 3D Ultrasound Imagery, and its Application in Computer-Assisted Trauma Diagnosis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15884180

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15884180

Country of ref document: EP

Kind code of ref document: A1