WO2019016064A1 - TREATMENT OF A FETAL ULTRASONIC IMAGE - Google Patents


Info

Publication number
WO2019016064A1
Authority
WO
WIPO (PCT)
Prior art keywords
ultrasound image
fetal ultrasound
image
spine
fetal
Prior art date
Application number
PCT/EP2018/068909
Other languages
English (en)
French (fr)
Inventor
Caroline Denise Francoise RAYNAUD
Laurence ROUET
Cybèle CIOFOLO-VEIT
Thierry Lefevre
David Nigel ROUNDHILL
Original Assignee
Koninklijke Philips N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips N.V. filed Critical Koninklijke Philips N.V.
Priority to US16/630,919 priority Critical patent/US11341634B2/en
Priority to CN201880047971.8A priority patent/CN110945560B/zh
Priority to JP2020502410A priority patent/JP6839328B2/ja
Priority to EP18743435.2A priority patent/EP3655917B1/de
Publication of WO2019016064A1 publication Critical patent/WO2019016064A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B8/0866Detecting organic movements or changes, e.g. tumours, cysts, swellings involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/48Diagnostic techniques
    • A61B8/483Diagnostic techniques involving the acquisition of a 3D volume of data
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B8/5223Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/60Image enhancement or restoration using machine learning, e.g. neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • G06T2207/101363D ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • G06T2207/30012Spine; Backbone
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30044Fetus; Embryo

Definitions

  • This invention relates to the processing of fetal ultrasound images.
  • 2D ultrasound is the preferred scanning protocol for biometry measurements, growth monitoring and anatomy assessment during pregnancy. Obtaining reproducible and accurate values requires strict guidelines to be followed, especially regarding the selection of the standard 2D viewing planes which are used to search for abnormalities or to perform biometry measurements such as head and abdomen circumference.
  • 3D US is a more recent imaging technique that has the potential to overcome some of the above-mentioned difficulties.
  • the acquisition of a single 3D volume makes it possible to select the required viewing planes.
  • the clinicians can perform offline reading and, if necessary, adjust the position of the extracted planes (called "clinical planes") prior to standard measurements.
  • Views of interest can be selected from 2D US sequences by classifying the content of each frame to determine if it corresponds to a standard plane using a radial component model or a pre-trained recurrent neural network.
  • Document US 2010/099987 Al describes an ultrasonic image acquisition and diagnosis display technique in which the spine of a fetus included in a 3D volume scan is used as a landmark for recognizing the position of the heart in the chest of the fetus with high accuracy.
  • Another 2D approach consists of fitting geometrical templates built at multiple resolutions and orientations in order to label the anatomical content. There remains a need for a more reliable automated approach for extracting information of interest from a 3D fetal US image.
  • a computer implemented method for processing a 3D fetal ultrasound image comprising:
  • a first reference axis is based on the spine, and it may be the direction of the spine in the middle of the spine, or it may be the direction of the vector connecting identified end points of the spine, for example.
  • the spine is a well-defined, unique structure within the fetus that will be readily recognizable to the ultrasound system, particularly when using image recognition techniques.
  • the second reference axis enables volume rotation about the spine (i.e. about the first reference axis), so that a 3D orientation of the fetus is identified.
  • the updated image may be a rotated version of the image, or it may be the image with additional data annotations identifying the reference axes.
  • the method may be implemented in real time, so that the 3D ultrasound image is obtained by an ultrasound scan as part of the method. Alternatively, the method may be applied to a previously captured image.
  • identification of landmark points (such as organs) within the image is made possible by an automated machine learning approach.
  • the method may for example be used to process the samples of a training database as part of a machine learning operation, and the machine-learned algorithm may then be used to process an unseen captured image (i.e. one not already processed to create the training database) to provide automated identification.
  • the invention may be considered to provide an image filtering or normalization process which can be applied before applying machine learning approaches in order to perform automatic extraction of the clinical planes of interest.
  • the determining of the second reference axis comprises:
  • the spine is at one edge of the abdomen/thorax, so the vector between the spine and the center of the ellipse (or circle) represents a rotational orientation. In this way, it is possible to normalize the orientation of the 3D volume images based on the torso/abdominal shape of the fetus. As the torso/abdomen is the largest portion of the fetus, it is easily measurable, thereby increasing the accuracy of the alignment.
  • the elliptical or circular form is for example obtained by performing a Hough transform on the plane.
  • the generating of the second reference axis may comprise:
  • a second reference axis is obtained which is reliably consistent between different images.
  • Taking an average may comprise calculating a weighted average, with greater weighting in the middle of the spine than at the ends of the spine.
  • the method may further comprise:
  • the updating may again comprise an image manipulation or annotation of the image with an up/down indication.
  • the method may further comprise:
  • the scale variability between 3D fetal ultrasound images due to gestation age is reduced, and/or the intensity variation due to varying image acquisition gains and imaging conditions may be reduced.
  • the scaling and intensity normalization may take place before or after the reference axis determination and re-orientation.
  • the detecting of the spine within the 3D fetal ultrasound image for example comprises:
  • the detection of the spine takes advantage of the strengths of both methods, so that the position of the spine within the 3D fetal ultrasound image can be more robustly determined.
  • the method may comprise
  • the classifier indicates the head/toe orientation of the fetus.
  • the patch size is for example based on a fetal gestation age and a resolution of the 3D fetal ultrasound image.
  • This method provides identification of landmarks using the machine learning previously performed on the training database, so that the landmarks may be located in an automated way.
  • the 3D orientation processing removes or reduces uncertainties resulting from unknown fetal positions.
  • the identified landmarks may then be used to control the generation of 2D image planes of interest, namely planes which pass through organs or other landmarks of interest, or are defined with respect to the position of such organs or other landmarks.
  • the invention also provides an ultrasound system, the system comprising: an ultrasonic probe, the ultrasonic probe comprising an array of transducer elements, wherein the ultrasonic probe is adapted to obtain a 3D fetal ultrasound image of a region of interest;
  • Figure 1 shows a representation of a fetus with the xyz reference axes allocated to the fetus as proposed in the invention;
  • Figure 2 shows a randomly oriented spine and a version rotated to a defined vertical orientation;
  • Figure 3 shows the spine with four xy imaging planes spaced along the spine;
  • Figure 4 shows the set of located circles or ellipses along the spine planes as well as their center points;
  • Figure 7 shows a scaling function graphically;
  • Figure 9 shows a large set of 3D US abdominal acquisitions, with lines showing the identified spine orientations and dots marking the locations of landmarks;
  • planes are selected to pass through landmarks of interest. These landmarks are typically organs or other anatomical features such as the stomach 14, heart 16 or umbilical insertion 18.
  • the first step in the method is to detect the spine and derive a first reference (orientation) axis.
  • the spine may be automatically detected in a 3D ultrasound image by combining a morphological filter which detects elongated bright structures and a deep learning (DL) based vertebrae detector, in order to take advantage of the strengths of both methods.
  • a morphological filter may be used, for each voxel x in the US volume in a given spherical neighborhood, to compare the intensity of the voxels along a direction u with the intensity of the other voxels.
  • the filter responses are computed for various neighborhood radii and orientations u and combined to obtain a global response.
  • the global responses of neighboring voxels are aggregated to define connected components which correspond to the best filter responses.
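The per-voxel comparison described above can be sketched as follows. This is an illustrative simplification (a single radius and orientation rather than the combined multi-scale response, and the function name and 0.9 alignment threshold are assumptions, not from the patent):

```python
import numpy as np

def line_filter_response(vol, x, u, radius):
    """Simplified 'elongated bright structure' filter response at voxel x.

    Compares the mean intensity of voxels lying nearly along direction u
    with the mean of the remaining voxels in a spherical neighborhood.
    """
    cz, cy, cx = x
    u = np.asarray(u, float)
    u /= np.linalg.norm(u)
    on, off = [], []
    r = int(radius)
    for dz in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                d = np.array([dz, dy, dx], float)
                n = np.linalg.norm(d)
                if n == 0 or n > radius:
                    continue  # keep only the spherical neighborhood
                v = vol[cz + dz, cy + dy, cx + dx]
                # offsets nearly parallel to u count as "on-line" voxels
                (on if abs(d @ u) / n > 0.9 else off).append(v)
    return np.mean(on) - np.mean(off)
```

A bright line along u yields a strongly positive response, while the same filter oriented across the line yields a response near or below zero, which is what lets responses over radii and orientations be combined into a global per-voxel score.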
  • the deep learning-based vertebrae detector is a 2D fully convolutional network whose input is made of 2D slices, extracted orthogonally to the originally identified z-axis.
  • the volume slicing produces a large amount of data with similar features, which is appropriate for deep learning methods.
  • the network output is a down-sampled probability map, with values closer to 1 where the spine might be located.
  • a 3D deep learning-based vertebrae detector is formed by stacking all the obtained 2D probability maps for one volume. This output heatmap is coarser than the morphological filter output, but more robustly located around the vertebrae.
  • the network output is refined and the filter responses that are outside the spine are rejected, so that a robust spine binary mask is finally obtained.
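The combination step, keeping only the fine filter detections that fall inside the coarse deep-learning heatmap, can be sketched as below (the function name and threshold values are illustrative assumptions):

```python
import numpy as np

def refine_spine_mask(filter_response, dl_heatmap, resp_thresh=0.5, heat_thresh=0.5):
    """Reject morphological-filter responses that lie outside the spine.

    filter_response: fine but noisy per-voxel filter output.
    dl_heatmap: coarse but robustly located probability map, obtained by
    stacking the 2D network outputs for one volume.
    Returns a binary spine mask.
    """
    candidates = filter_response > resp_thresh   # fine detections
    support = dl_heatmap > heat_thresh           # coarse spine region
    return candidates & support                  # keep supported detections only
```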
  • Figure 2 shows on the left a randomly oriented spine 12.
  • the spine detection basically involves identifying the spine and using the center of mass of the spine binary mask to define the origin O of a reference coordinate system. If the detected spine is highly curved, its center of mass might not belong to the mask, because the barycenter of a curved spine can lie outside the spine itself. In this case, the binary mask point that is closest to the center of mass is used instead. The extremities 20 of the spine binary mask are then used to define the vertical z axis. Alternatively, the tangent direction at the central point of the spine may be used. The resulting z axis is shown in Figure 1.
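A minimal sketch of this origin and z-axis construction, assuming a NumPy binary mask (the helper name and the SVD-based way of picking the two extremities are choices made here, not dictated by the patent):

```python
import numpy as np

def spine_axes(mask):
    """Derive the origin O and z axis from a binary spine mask.

    mask: 3D boolean array marking spine voxels.
    Returns (origin, z_axis) with z_axis a unit vector.
    """
    pts = np.argwhere(mask).astype(float)   # voxel coordinates of the mask
    com = pts.mean(axis=0)                  # center of mass (barycenter)
    # For a highly curved spine the barycenter may fall outside the mask:
    # fall back to the closest mask voxel in that case.
    d = np.linalg.norm(pts - com, axis=1)
    nearest = pts[d.argmin()]
    inside = mask[tuple(com.round().astype(int))]
    origin = com if inside else nearest
    # z axis: unit vector joining the two mask extremities, found as the
    # points farthest apart along the principal direction of the mask.
    centered = pts - com
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]
    bottom, top = pts[proj.argmin()], pts[proj.argmax()]
    z = top - bottom
    return origin, z / np.linalg.norm(z)
```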
  • abdomen detection takes place, for example using a variant of the Hough transform, tailored to the detection of circular or elliptical shapes.
  • the best convolution of the image with a radially-symmetric kernel modeling a disk with the desired border profile is searched for among a range of radii.
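This radial-symmetry search can be sketched as below, correlating the slice with one ring kernel per candidate radius and keeping the strongest response. The FFT-based correlation and the unit-width ring profile are implementation choices made here for illustration, not the patent's exact kernel:

```python
import numpy as np

def detect_disk(img, radii):
    """Find the best-matching circle center and radius in a 2D slice.

    Correlates the image with radially symmetric ring kernels, one per
    candidate radius, in the spirit of a Hough-style circular detector.
    Returns ((cy, cx), radius) of the strongest response.
    """
    h, w = img.shape
    F = np.fft.rfft2(img)
    yy, xx = np.mgrid[0:h, 0:w]
    # squared wrap-around distance from the (0, 0) kernel origin
    d2 = np.minimum(yy, h - yy) ** 2 + np.minimum(xx, w - xx) ** 2
    best = (-np.inf, None, None)
    for r in radii:
        ring = (np.abs(np.sqrt(d2) - r) < 1.0).astype(float)
        ring /= ring.sum()                       # normalize the border profile
        # cross-correlation via the FFT correlation theorem
        resp = np.fft.irfft2(F * np.conj(np.fft.rfft2(ring)), s=img.shape)
        cy, cx = np.unravel_index(resp.argmax(), resp.shape)
        if resp[cy, cx] > best[0]:
            best = (resp[cy, cx], (cy, cx), r)
    return best[1], best[2]
```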
  • the x axis reference direction is then defined as an average projected vector in this reference xy plane.
  • the resulting x axis is shown in Figure 1 and it defines a second reference axis. From these two axes, the third coordinate axis, y, may be chosen to be orthogonal with right-handed orientation.
  • the average vector may comprise a weighted average, with greater weighting in the middle of the spine than at the ends of the spine.
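Putting the last two bullets together, the x axis can be sketched as a weighted average of the per-slice spine-to-center vectors, projected into the plane orthogonal to z; the Gaussian weight profile and function name are illustrative assumptions:

```python
import numpy as np

def second_axis(spine_pts, centers, z):
    """Average spine-to-abdomen-center vectors into the x reference axis.

    spine_pts, centers: (N, 3) arrays, one pair per plane along the spine.
    z: first reference axis. Returns unit x and y axes (right-handed frame).
    """
    z = np.asarray(z, float)
    z /= np.linalg.norm(z)
    vecs = np.asarray(centers, float) - np.asarray(spine_pts, float)
    # project each vector into the reference xy plane (orthogonal to z)
    vecs -= np.outer(vecs @ z, z)
    n = len(vecs)
    idx = np.arange(n) - (n - 1) / 2.0
    # Gaussian weights: heavier in the middle of the spine than at the ends
    w = np.exp(-0.5 * (idx / max(n / 4.0, 1e-9)) ** 2)
    x = (w[:, None] * vecs).sum(axis=0)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)   # third axis chosen orthogonal, right-handed
    return x, y
```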
  • a normalized orientation in 3D space may be defined.
  • random noise may be added during the xz slice extraction so that the network is fed with corrupted data during the training.
  • random patches are selected in the slice to train the classifier.
  • the patch size may be selected based on gestational age and image resolution so that all structures have a normalized size.
  • the output is binary 1 when the fetus head is at the top of the image, binary 0 if it is at the bottom.
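The random patch sampling feeding the head/toe classifier can be sketched as follows; the function name and defaults are assumptions, and in the described method the patch size would be set from gestational age and image resolution so that structures have a normalized size:

```python
import numpy as np

def sample_patches(slice_img, patch_size, n, rng=None):
    """Randomly sample n square patches from an extracted xz slice."""
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = slice_img.shape
    ys = rng.integers(0, h - patch_size + 1, n)   # top-left corners
    xs = rng.integers(0, w - patch_size + 1, n)
    return np.stack([slice_img[y:y + patch_size, x:x + patch_size]
                     for y, x in zip(ys, xs)])
```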
  • Figure 7 shows this scaling function graphically.
  • the y-axis plots the scaling ratio, and the x-axis plots the gestational age ranges.
  • the points show the distances to the origin for different anatomical features (H heart, S stomach, B bladder, UV umbilical vein, K1 kidney 1, K2 kidney 2, UI umbilical insertion).
  • Figure 8 is an illustration of a database look-up-table to perform intensity normalization of ultrasound acquisitions.
  • the x-axis shows the intensity range for a particular image, with mean intensity μi and standard deviation σi of the pixel intensity values.
  • the database provides an intensity mapping so that each image is transformed to have a reference standard deviation σ and a reference mean μ.
  • the reference intensity characteristics are shown by the y-axis.
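The mapping of Figure 8 reduces, per image, to an affine intensity transform matching the reference statistics; a minimal sketch (function name assumed):

```python
import numpy as np

def normalize_intensity(img, ref_mean, ref_std):
    """Transform an image so its pixel intensities have the reference
    mean μ and standard deviation σ, as in the Figure 8 look-up-table."""
    mu, sigma = img.mean(), img.std()
    # standardize, then rescale to the reference statistics
    return (img - mu) / max(sigma, 1e-9) * ref_std + ref_mean
```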
  • the 3D image volumes are rotated, translated, intensity scaled, and size scaled so that the associated fetus landmarks are perfectly aligned with respect to a common reference.
  • the method above may be used to process a single captured 3D ultrasound image or to create the training database (as already mentioned above). These two uses of the method will now be explained.
  • Figure 9 shows a large set of 3D US abdominal acquisitions.
  • the lines show the identified spine orientations.
  • Each image is annotated by a clinician to identify the locations of landmarks such as the heart and stomach, and these are shown as dots. These landmarks annotated images are used to create the training database which implements machine learning.
  • Figure 9 shows the high variability of fetus positions and orientations and confirms that learning on such a database will include variability due to spatial positioning instead of focusing on real anatomical variations.
  • the training database uses machine learning based on the annotated images in order to learn landmark (e.g. organ) positions.
  • the machine learning may be based on a random forest algorithm (for example as disclosed in Criminisi, A., Shotton, J., Konukoglu, E.: Decision Forests: A Unified Framework for Classification, Regression, Density Estimation, Manifold Learning and Semi-Supervised Learning).
  • a splitting criterion is defined for all nodes. It aims at finding two subsets of training points so that the sum of the entropy of both subsets is minimal.
  • the splitting criterion is obtained by testing a large set of random features at each node. Within this set, the feature (f) that provides the optimal subset separation is selected together with the corresponding splitting threshold (θ) and stored in the node.
  • the entropy criterion is defined as the variance of the voting vectors to each landmark.
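The node-splitting search can be sketched as below: for each candidate threshold, split the training points and score the two subsets by the variance of their voting vectors, keeping the split with the minimal summed score. A single scalar feature stands in for the "large set of random features", and the names are illustrative:

```python
import numpy as np

def best_split(features, offsets, thresholds):
    """Pick the threshold minimizing the summed 'entropy' of both subsets,
    where entropy is the variance of the voting (offset) vectors to a
    landmark, weighted by subset size.

    features: (N,) scalar feature values of the training points.
    offsets:  (N, D) voting vectors from each point to the landmark.
    """
    def entropy(sub):
        return 0.0 if len(sub) == 0 else float(np.var(sub)) * len(sub)

    best_t, best_cost = None, np.inf
    for t in thresholds:
        left, right = offsets[features < t], offsets[features >= t]
        cost = entropy(left) + entropy(right)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```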
  • a scanned image is subjected to the same orientation, scaling and intensity normalization as explained above.
  • the actual landmark localization process is restricted to the volume area located inside the abdomen convex hull. For a given input volume, the following steps are performed:
  • testing points are propagated throughout the tree, using the stored (f, θ) splitting and feature criteria, until they reach a leaf;
  • all predictions are combined through Gaussian estimation to convert a set of all predictions into a single extracted prediction.
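One simple way to realize this Gaussian fusion, sketched here with an assumed 2-sigma outlier rejection before taking the Gaussian mean (the exact estimator is not specified above):

```python
import numpy as np

def combine_votes(votes):
    """Fuse all leaf predictions into a single landmark position by
    fitting a Gaussian to the votes and returning its mean, after
    discarding votes beyond 2 standard deviations of an initial fit."""
    votes = np.asarray(votes, float)
    mu = votes.mean(axis=0)
    sd = votes.std(axis=0) + 1e-9
    keep = (np.abs(votes - mu) <= 2 * sd).all(axis=1)   # reject outliers
    return votes[keep].mean(axis=0)
```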
  • Each clinical plane may be defined by three landmarks.
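A plane through three landmarks is fully determined by one of the points and the normal obtained from the cross product of two in-plane edges, as a short sketch shows:

```python
import numpy as np

def plane_from_landmarks(p1, p2, p3):
    """Return (point, unit_normal) of the clinical plane through three
    landmark positions."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)   # normal to the plane
    return p1, n / np.linalg.norm(n)
```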
  • FIG. 11 shows methods in accordance with the invention.
  • the right side shows a method of generating the training database. It comprises:
  • step 110 receiving a training set of N 3D fetal ultrasound images. These include annotations from clinicians.
  • step 112 identification of landmarks within each 3D fetal ultrasound image of the training set is received.
  • each 3D fetal ultrasound image of the training set is processed using the method as described above, to provide at least 3D re-orientation.
  • steps 114a, 114b This involves detecting the spine within the image in step 114a, and determining the first reference axis (z) based on the spine (12) orientation and location within the image in step 114b.
  • step 114c the second reference axis (x) is determined, perpendicular to the first reference axis (z), based on the fetal torso orientation with respect to the detected spine.
  • intensity normalization and scaling are then carried out.
  • step 116 machine learning is applied to the oriented training set to provide an algorithm which can determine the location of landmarks for a 3D fetal ultrasound image without corresponding identification of landmarks.
  • a particular method is used for determining the head/toe orientation of the fetus (during the step 114).
  • the plane of interest defined by the first and second reference axes is extracted, in particular the xz plane, and patches of a determined patch size are randomly sampled.
  • a classifier is then trained based on the patches, wherein the classifier indicates the head/toe orientation of the fetus.
  • the method comprises in step 118 receiving a 3D fetal ultrasound image. This may be in real time, but equally the method may be applied to a stored image.
  • step 120 the 3D fetal ultrasound image is re-oriented and then preferably rescaled and intensity normalized in step 121, using the method as described above.
  • steps 120a-120c This involves detecting the spine in step 120a, determining the first reference axis (z) in step 120b, and determining the second reference axis (x) in step 120c. The 3D fetal ultrasound image is re-oriented using the first and second reference axes generated by the method described above.
  • the landmark positions are output in step 124.
  • step 126 the landmark positions are used to automatically define image slices to be generated from the 3D ultrasound volume.
  • the invention is for use for processing 3D fetal ultrasound images using a diagnostic imaging system.
  • the system comprises an array transducer probe 210 which has a CMUT transducer array 200 for transmitting ultrasound waves and receiving echo information.
  • the transducer array 200 may alternatively comprise piezoelectric transducers formed of materials such as PZT or PVDF.
  • the transducer array 200 is a two-dimensional array of transducers 201 capable of scanning in three dimensions for 3D imaging.
  • the transducer array 200 is coupled to a microbeamformer 212 in the probe which controls reception of signals by the CMUT array cells or piezoelectric elements.
  • Microbeamformers are capable of at least partial beamforming of the signals received by sub-arrays (or "groups" or "patches") of transducers, as described in US Patents 5,997,479 (Savord et al.), 6,013,032 (Savord), and 6,623,432 (Powers et al.).
  • the microbeamformer is entirely optional.
  • the examples below assume no analog beamforming.
  • the microbeamformer 212 is coupled by the probe cable to a transmit/receive (T/R) switch 216 which switches between transmission and reception and protects the main beamformer 220 from high energy transmit signals when a microbeamformer is not used and the transducer array is operated directly by the main system beamformer.
  • the transmission of ultrasound beams from the transducer array 210 is directed by a transducer controller 218 coupled to the microbeamformer by the T/R switch 216 and a main transmission beamformer (not shown), which receives input from the user's operation of the user interface or control panel 238.
  • One of the functions controlled by the transducer controller 218 is the direction in which beams are steered and focused. Beams may be steered straight ahead from (orthogonal to) the transducer array, or at different angles for a wider field of view.
  • the transducer controller 218 can be coupled to control a DC bias control 245 for the CMUT array.
  • the DC bias control 245 sets DC bias voltage(s) that are applied to the CMUT cells.
  • partially beamformed signals are produced by the microbeamformer 212 and are coupled to a main receive beamformer 220 where the partially beamformed signals from individual patches of transducers are combined into a fully beamformed signal.
  • the main beamformer 220 may have 128 channels, each of which receives a partially beamformed signal from a patch of dozens or hundreds of CMUT transducer cells or piezoelectric elements. In this way the signals received by thousands of transducers of a transducer array can contribute efficiently to a single beamformed signal.
  • the beamformers for transmission and for reception are implemented in different hardware and can have different functions.
  • the receiver beamformer is designed to take into account the characteristics of the transmission beamformer.
  • in Figure 12 only the receive beamformers 212, 220 are shown, for simplicity. In the complete system, there will also be a transmission chain with a transmission microbeamformer and a main transmission beamformer.
  • the function of the microbeamformer 212 is to provide an initial combination of signals in order to decrease the number of analog signal paths. This is typically performed in the analog domain.
  • the final beamforming is done in the main beamformer 220 and is typically after digitization.
  • the transmission and reception channels use the same transducer array 210 which has a fixed frequency band. However, the bandwidth that the transmission pulses occupy can vary depending on the transmission beamforming that has been used.
  • the reception channel can capture the whole transducer bandwidth (which is the classic approach) or, by using bandpass processing, it can extract only the bandwidth that contains the useful information (e.g. the harmonics of the fundamental frequency).
  • the processed signals are coupled to a B mode (i.e. brightness mode, or 2D imaging mode) processor 226 and a Doppler processor 228.
  • the B mode processor 226 employs detection of an amplitude of the received ultrasound signal for the imaging of structures in the body such as the tissue of organs and vessels in the body.
  • B mode images of structure of the body may be formed in either the harmonic image mode or the fundamental image mode or a combination of both as described in US Pat. 6,283,919 (Roundhill et al.) and US Pat. 6,458,083 (Jago et al.)
  • the Doppler processor 228 processes temporally distinct signals from tissue movement and blood flow for the detection of the motion of substances such as the flow of blood cells in the image field.
  • the Doppler processor 228 typically includes a wall filter with parameters which may be set to pass and/or reject echoes returned from selected types of materials in the body.
  • the structural and motion signals produced by the B mode and Doppler processors are coupled to a scan converter 232 and a multi-planar reformatter 244.
  • the scan converter 232 arranges the echo signals in the spatial relationship from which they were received in a desired image format. For instance, the scan converter may arrange the echo signal into a two dimensional (2D) sector-shaped format, or a pyramidal three dimensional (3D) image.
  • the scan converter can overlay a B mode structural image with colors corresponding to motion at points in the image field with their Doppler-estimated velocities to produce a color Doppler image which depicts the motion of tissue and blood flow in the image field.
  • the multi-planar reformatter will convert echoes which are received from points in a common plane in a volumetric region of the body into an ultrasound image of that plane, as described in US Pat. 6,443,896 (Detmer).
  • a volume renderer 242 converts the echo signals of a 3D data set into a projected 3D image as viewed from a given reference point as described in US Pat. 6,530,885 (Entrekin et al).
  • the 2D or 3D images are coupled from the scan converter 232, multi-planar reformatter 244, and volume renderer 242 to an image processor 230 for further processing.
  • the blood flow values produced by the Doppler processor 228 and tissue structure information produced by the B mode processor 226 are coupled to a quantification processor 234.
  • the quantification processor produces measures of different flow conditions such as the volume rate of blood flow as well as structural measurements such as the sizes of organs and gestational age.
  • the quantification processor may receive input from the user control panel 238, such as the point in the anatomy of an image where a measurement is to be made.
  • Output data from the quantification processor is coupled to a graphics processor 236 for the reproduction of measurement graphics and values with the image on the display 240, and for audio output from the display device 240.
  • the graphics processor 236 can also generate graphic overlays for display with the ultrasound images.
  • the image processing functions described above may for example be performed by the image processor 230.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Gynecology & Obstetrics (AREA)
  • Pregnancy & Childbirth (AREA)
  • Quality & Reliability (AREA)
  • Physiology (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
PCT/EP2018/068909 2017-07-18 2018-07-12 TREATMENT OF A FETAL ULTRASONIC IMAGE WO2019016064A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US16/630,919 US11341634B2 (en) 2017-07-18 2018-07-12 Fetal ultrasound image processing
CN201880047971.8A CN110945560B (zh) 2017-07-18 2018-07-12 胎儿超声图像处理
JP2020502410A JP6839328B2 (ja) 2017-07-18 2018-07-12 胎児超音波画像処理
EP18743435.2A EP3655917B1 (de) 2017-07-18 2018-07-12 Verarbeitung eines fötalen ultraschallbildes

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762533702P 2017-07-18 2017-07-18
US62/533,702 2017-07-18
EP17183432.8 2017-07-27
EP17183432.8A EP3435324A1 (de) 2017-07-27 2017-07-27 Verarbeitung eines fötalen ultraschallbildes

Publications (1)

Publication Number Publication Date
WO2019016064A1 true WO2019016064A1 (en) 2019-01-24

Family

ID=59485205

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/068909 WO2019016064A1 (en) 2017-07-18 2018-07-12 TREATMENT OF A FETAL ULTRASONIC IMAGE

Country Status (5)

Country Link
US (1) US11341634B2 (de)
EP (2) EP3435324A1 (de)
JP (1) JP6839328B2 (de)
CN (1) CN110945560B (de)
WO (1) WO2019016064A1 (de)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022534253A (ja) * 2019-05-31 2022-07-28 コーニンクレッカ フィリップス エヌ ヴェ 誘導式超音波撮像
CN117078671A (zh) * 2023-10-13 2023-11-17 陕西秒康医疗科技有限公司 一种甲状腺超声影像智能分析系统

Families Citing this family (11)

Publication number Priority date Publication date Assignee Title
CN110087555B (zh) * 2017-05-12 2022-10-25 深圳迈瑞生物医疗电子股份有限公司 一种超声设备及其三维超声图像的显示变换方法、系统
KR20200099910A (ko) * 2019-02-15 2020-08-25 삼성메디슨 주식회사 초음파 영상을 표시하는 방법, 장치 및 컴퓨터 프로그램 제품
EP3928121A1 (de) * 2019-02-22 2021-12-29 Koninklijke Philips N.V. Ultraschallbildgebung mit strahlformung auf der basis von tiefenlernen sowie zugehörige vorrichtungen, systeme und verfahren
JP2022548063A (ja) * 2019-09-11 2022-11-16 Bros, Julie C. Techniques for measuring fetal situs during diagnostic imaging
CN112102244B (zh) * 2020-08-17 2024-07-12 Hunan University Fetal ultrasound standard plane image detection method, computer device, and storage medium
US11457891B2 (en) * 2020-08-17 2022-10-04 Clarius Mobile Health Corp. Method and system for defining cut lines to generate a 3D fetal representation
JP2022189138A (ja) * 2021-06-10 2022-12-22 Fujitsu Limited Information processing program, information processing apparatus, and information processing method
CN113729778A (zh) * 2021-09-02 2021-12-03 广州爱孕记信息科技有限公司 Method and device for determining nuchal translucency thickness
EP4144301A1 (de) * 2021-09-03 2023-03-08 Diagnoly Device and method for guiding an ultrasound assessment of an organ
CN113936775A (zh) * 2021-10-09 2022-01-14 Northwestern Polytechnical University Fetal heart ultrasound standard plane extraction method based on human-in-the-loop intelligent assisted navigation
EP4383190A1 (de) 2022-12-09 2024-06-12 Koninklijke Philips N.V. Processing image data of a fetus

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5208415B2 (ja) * 2003-04-16 2013-06-12 Eastern Virginia Medical School Method, system, and computer program for generating ultrasound images
US8556814B2 (en) 2007-10-04 2013-10-15 Siemens Medical Solutions Usa, Inc. Automated fetal measurement from three-dimensional ultrasound data
US20110125016A1 (en) * 2009-11-25 2011-05-26 Siemens Medical Solutions Usa, Inc. Fetal rendering in medical diagnostic ultrasound
KR101077752B1 (ko) * 2009-12-07 2011-10-27 Samsung Medison Co., Ltd. Ultrasound system and method for performing fetal head measurement based on a three-dimensional ultrasound image
CN102429726A (zh) * 2011-08-03 2012-05-02 Harbin Institute of Technology Image-navigation-based positioning method for parallel-robot-assisted artificial cervical disc replacement surgery
JP6037447B2 (ja) * 2012-03-12 2016-12-07 Toshiba Medical Systems Corporation Ultrasound diagnostic apparatus
KR101495528B1 (ko) * 2013-03-28 2015-03-02 Samsung Medison Co., Ltd. Ultrasound system and method for providing orientation information of a target object
CA2921665A1 (en) 2013-09-20 2015-03-26 Transmural Biotech, S. L. Image analysis techniques for diagnosing diseases
JP2015171476A (ja) * 2014-03-12 2015-10-01 Hitachi Aloka Medical, Ltd. Ultrasound diagnostic apparatus and ultrasound image processing method
US20160081663A1 (en) * 2014-09-18 2016-03-24 General Electric Company Method and system for automated detection and measurement of a target structure
WO2017009812A1 (en) 2015-07-15 2017-01-19 Oxford University Innovation Limited System and method for structures detection and multi-class image categorization in medical imaging
CN105433988B (zh) * 2015-12-28 2018-10-16 SonoScape Medical Corp. Target image recognition method and device, and ultrasound equipment therefor
US10540769B2 (en) * 2017-03-23 2020-01-21 General Electric Company Method and system for enhanced ultrasound image visualization by detecting and replacing acoustic shadow artifacts
CN110087551A (zh) * 2017-04-27 2019-08-02 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Fetal heart ultrasound detection method and ultrasound imaging system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6283919B1 (en) 1996-11-26 2001-09-04 Atl Ultrasound Ultrasonic diagnostic imaging with blended tissue harmonic signals
US6458083B1 (en) 1996-11-26 2002-10-01 Koninklijke Philips Electronics N.V. Ultrasonic harmonic imaging with adaptive image formation
US6013032A (en) 1998-03-13 2000-01-11 Hewlett-Packard Company Beamforming methods and apparatus for three-dimensional ultrasound imaging using two-dimensional transducer array
US5997479A (en) 1998-05-28 1999-12-07 Hewlett-Packard Company Phased array acoustic systems with intra-group processors
US6530885B1 (en) 2000-03-17 2003-03-11 Atl Ultrasound, Inc. Spatially compounded three dimensional ultrasonic images
US6443896B1 (en) 2000-08-17 2002-09-03 Koninklijke Philips Electronics N.V. Method for creating multiplanar ultrasonic images of a three dimensional object
US6623432B2 (en) 2000-08-24 2003-09-23 Koninklijke Philips Electronics N.V. Ultrasonic diagnostic imaging transducer with hexagonal patches
WO2006104959A2 (en) * 2005-03-25 2006-10-05 Eastern Virginia Medical School System, method and medium for acquiring and generating standardized operator independent ultrasound images of fetal, neonatal and adult organs
US20100099987A1 (en) 2008-10-16 2010-04-22 Takuya Sasaki Ultrasonic diagnostic device, ultrasonic image processing apparatus, ultrasonic image acquiring method and ultrasonic diagnosis display method

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
CHEN HAO ET AL: "Fetal Abdominal Standard Plane Localization through Representation Learning with Knowledge Transfer", 14 September 2014, NETWORK AND PARALLEL COMPUTING; [LECTURE NOTES IN COMPUTER SCIENCE; LECT.NOTES COMPUTER], SPRINGER INTERNATIONAL PUBLISHING, CHAM, PAGE(S) 125 - 132, ISBN: 978-3-540-85989-5, ISSN: 0302-9743, XP047298873 *
CHEN HAO ET AL: "Ultrasound Standard Plane Detection Using a Composite Neural Network Framework", IEEE TRANSACTIONS ON CYBERNETICS, IEEE, PISCATAWAY, NJ, USA, vol. 47, no. 6, 1 June 2017 (2017-06-01), pages 1576 - 1586, XP011649650, ISSN: 2168-2267, [retrieved on 20170515], DOI: 10.1109/TCYB.2017.2685080 *
CRIMINISI, A.; SHOTTON, J.; KONUKOGLU, E.: "Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning", FOUNDATIONS AND TRENDS IN COMPUTER GRAPHICS AND VISION, 2012
CUINGNET, R.; PREVOST, R.; LESAGE, D.; COHEN, L.D.; MORY, B.; ARDON, R.: "Automatic Detection and Segmentation of Kidneys in 3D CT Images using Random Forests", PROCEEDINGS OF MICCAI'12, vol. 7512, 2012, pages 66 - 74, XP047018377
GAURIAU ROMANE ET AL: "Multi-organ Localization Combining Global-to-Local Regression and Confidence Maps", 14 September 2014, NETWORK AND PARALLEL COMPUTING; [LECTURE NOTES IN COMPUTER SCIENCE; LECT.NOTES COMPUTER], SPRINGER INTERNATIONAL PUBLISHING, CHAM, PAGE(S) 337 - 344, ISBN: 978-3-642-27168-7, ISSN: 0302-9743, XP047297581 *
NIE SIQING ET AL: "Automatic Detection of Standard Sagittal Plane in the First Trimester of Pregnancy Using 3-D Ultrasound Data", ULTRASOUND IN MEDICINE AND BIOLOGY, vol. 43, no. 1, 1 January 2016 (2016-01-01), pages 286 - 300, XP029838893, ISSN: 0301-5629, DOI: 10.1016/J.ULTRASMEDBIO.2016.08.034 *
RAYNAUD CAROLINE ET AL: "Multi-organ Detection in 3D Fetal Ultrasound with Machine Learning", 9 September 2017, NETWORK AND PARALLEL COMPUTING; [LECTURE NOTES IN COMPUTER SCIENCE; LECT.NOTES COMPUTER], SPRINGER INTERNATIONAL PUBLISHING, CHAM, PAGE(S) 62 - 72, ISBN: 978-3-540-85989-5, ISSN: 0302-9743, XP047439844 *
YANG XIN ET AL: "Standard plane localization in ultrasound by radial component", 2014 IEEE 11TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI), IEEE, 29 April 2014 (2014-04-29), pages 1180 - 1183, XP032779046, DOI: 10.1109/ISBI.2014.6868086 *
YU ZHEN ET AL: "Fetal facial standard plane recognition via very deep convolutional networks", 2016 38TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), IEEE, 16 August 2016 (2016-08-16), pages 627 - 630, XP032979232, DOI: 10.1109/EMBC.2016.7590780 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022534253A (ja) * 2019-05-31 2022-07-28 Koninklijke Philips N.V. Guided ultrasound imaging
JP7442548B2 (ja) 2019-05-31 2024-03-04 Koninklijke Philips N.V. Guided ultrasound imaging
CN117078671A (zh) * 2023-10-13 2023-11-17 陕西秒康医疗科技有限公司 Intelligent analysis system for thyroid ultrasound images
CN117078671B (zh) * 2023-10-13 2023-12-12 陕西秒康医疗科技有限公司 Intelligent analysis system for thyroid ultrasound images

Also Published As

Publication number Publication date
US20200234435A1 (en) 2020-07-23
JP6839328B2 (ja) 2021-03-03
JP2020527080A (ja) 2020-09-03
CN110945560A (zh) 2020-03-31
EP3655917B1 (de) 2021-02-24
CN110945560B (zh) 2023-09-01
EP3655917A1 (de) 2020-05-27
US11341634B2 (en) 2022-05-24
EP3435324A1 (de) 2019-01-30

Similar Documents

Publication Publication Date Title
EP3655917B1 (de) 2021-02-24 Fetal ultrasound image processing
US11490877B2 (en) System and method of identifying characteristics of ultrasound images
US20110201935A1 (en) 3-d ultrasound imaging
US11931201B2 (en) Device and method for obtaining anatomical measurements from an ultrasound image
US20210345987A1 (en) Methods and systems for determining complementary ultrasound views
EP3820374B1 (de) Verfahren und systeme zur durchführung fötaler gewichtsschätzungen
EP3506832B1 (de) Ultraschalldiagnosevorrichtung
CN115996673A (zh) 用于根据超声数据来识别脉管的系统和方法

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18743435

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020502410

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018743435

Country of ref document: EP

Effective date: 20200218