WO2017193251A1 - Method and system for identifying the contour of a region of interest in an ultrasound image


Info

Publication number
WO2017193251A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2016/081384
Other languages
English (en)
French (fr)
Inventor
金蒙
王勃
Original Assignee
深圳迈瑞生物医疗电子股份有限公司
Application filed by 深圳迈瑞生物医疗电子股份有限公司
Priority to PCT/CN2016/081384 (WO2017193251A1)
Priority to CN201680082172.5A (CN108701354B)
Publication of WO2017193251A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • The present invention relates to the field of medical image recognition, and in particular to a method and system for identifying the contour of a region of interest in an ultrasound image.
  • Ejection fraction (EF) is the stroke volume expressed as a percentage of the ventricular end-diastolic volume, and is one of the important clinical indicators for evaluating left ventricular function.
  • The ejection fraction is related to myocardial contractility: the stronger the myocardial contraction, the greater the stroke volume and the greater the ejection fraction. Under normal circumstances, the left ventricular ejection fraction is above 50%.
  • Measuring left ventricular ejection fraction can be accomplished by a variety of means, most of which employ methods based on medical imaging devices.
  • The typical workflow is: first, heart images are collected with medical imaging equipment such as CT, MRI, or ultrasound. After obtaining images covering a complete cardiac cycle, the left ventricular endocardium is segmented and recognized on each frame, and the ventricular volume is then calculated from the shape of the endocardium.
  • From the ventricular volumes, a ventricular volume curve can be constructed to obtain the end-diastolic volume (EDV) and end-systolic volume (ESV) used to calculate the ejection fraction.
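  • The ejection fraction formula described above can be sketched as follows (a minimal illustration; the function name and the volume values are hypothetical, not taken from the patent):

```python
# Hypothetical helper: ejection fraction from end-diastolic and
# end-systolic volumes. EF (%) = (EDV - ESV) / EDV * 100, i.e. the
# stroke volume as a percentage of the end-diastolic volume.
def ejection_fraction(edv, esv):
    if edv <= 0:
        raise ValueError("EDV must be positive")
    return (edv - esv) / edv * 100.0

# Illustrative volumes in mL: stroke volume = 70 mL.
ef = ejection_fraction(edv=120.0, esv=50.0)
```

An EF of about 58% here falls within the normal (>50%) range mentioned above.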
  • Echocardiography is a non-invasive and safe diagnostic method: it requires no injection of contrast agents, isotopes, or other dyes, and neither the patient nor the doctor is exposed to radioactive materials.
  • The method is simple and can be repeated many times.
  • It can be performed at the bedside, and each heart chamber can be examined by multi-planar, multi-directional ultrasound imaging to fully evaluate the anatomy and function of the entire heart.
  • The echocardiographic modes currently used for left ventricular ejection fraction measurement include M-Mode and B-Mode.
  • The M-Mode based measurement method observes imaging data of the left ventricular long-axis section, calibrates the maximum and minimum internal diameters of the left ventricle, calculates the ventricular EDV and ESV from them, and then computes the ejection fraction.
  • The B-Mode based measurement method images the left ventricle, identifies the end-systolic and end-diastolic frames, manually calibrates the position of the endocardium, calculates EDV and ESV using the Simpson plane method, and finally calculates the ejection fraction.
  • The M-Mode based method depends strongly on the position of the scan line. Because of differences between individual hearts, it is difficult to acquire a standard left ventricular intercostal long-axis image, and therefore difficult to obtain a standard scan-line position for measuring the internal diameter of the left ventricle. The B-Mode mode is therefore the commonly used method for measuring the ventricular ejection fraction; however, in B-Mode the accuracy of segmentation and recognition of the endocardium directly determines the accuracy of the ventricular volume measurement.
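  • The Simpson plane method mentioned above approximates the ventricle as a stack of thin disks. A hedged single-plane sketch (pure NumPy; the function name and the cylinder test values are illustrative, not the patent's implementation):

```python
import numpy as np

# Single-plane method of disks: stack n circular disks of equal height
# length/n, each with a measured endocardial diameter, and sum their
# volumes. Real biplane Simpson uses two orthogonal diameters per disk.
def simpson_volume(diameters, length):
    diameters = np.asarray(diameters, dtype=float)
    disk_height = length / diameters.size
    return float(np.sum(np.pi * (diameters / 2.0) ** 2 * disk_height))

# Sanity check on a cylinder: constant diameter 2 (radius 1), length 8,
# so the disk sum should equal pi * r^2 * L = 8 * pi.
volume = simpson_volume([2.0] * 20, length=8.0)
```

Applying this to the end-diastolic and end-systolic contours yields the EDV and ESV used in the ejection fraction formula.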
  • To this end, the present invention provides a method and system for identifying the contour of a region of interest in an ultrasound image that can improve the accuracy of the obtained contour.
  • A method of identifying the contour of a region of interest in an ultrasound image, comprising:
  • A system for identifying the contour of a target of interest in an ultrasound image, comprising:
  • a section type identification module for identifying the section type of the target object in an ultrasound image;
  • a shape segmentation model selection module configured to select a corresponding shape segmentation model according to the section type; and
  • a contour obtaining module configured to perform, according to the selected shape segmentation model, segmentation processing on the region of interest in the ultrasound image to obtain the contour of the region of interest.
  • A system for identifying the contour of a region of interest in an ultrasound image, comprising:
  • a transmitting circuit for transmitting an ultrasonic beam to the target object;
  • a receiving circuit and a beamforming module for obtaining an ultrasonic echo signal;
  • an image processing module configured to obtain an ultrasound image from the ultrasonic echo signal, identify the section type of the target object in the ultrasound image, select a corresponding shape segmentation model according to the section type, and perform segmentation processing on the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest in the ultrasound image; and
  • a display for displaying the ultrasound image and the contour, marking the position and shape of the contour, and displaying the section type.
  • The method and system for identifying the contour of a region of interest in an ultrasound image provided by the present invention first identify and classify the section type of the target object (e.g., the heart) in the ultrasound image, then select a corresponding shape segmentation model for each section type, and automatically segment the region of interest (e.g., the endocardium) in the ultrasound image to obtain its contour.
  • The method and system can effectively handle the differences in shape and position of the endocardium across different cardiac section images, thereby improving the accuracy of segmentation.
  • In addition, when determining the section type and when segmenting the region of interest, the user can be asked for confirmation and can switch to a manual modification mode, which further improves the accuracy of segmentation.
  • FIG. 1 is a schematic structural view of an ultrasonic imaging apparatus provided by the present invention;
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for identifying the contour of a region of interest in an ultrasound image according to the present invention;
  • FIG. 3 is a specific flowchart of the process of pre-establishing the correspondence between section types and shape segmentation models in an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of a first process of constructing the shape segmentation model employed in FIG. 3;
  • FIG. 5 is a schematic diagram of a second process of constructing the shape segmentation model employed in FIG. 3;
  • FIG. 6 is a more detailed flowchart of step S10 in FIG. 2;
  • FIG. 7 is a more detailed flowchart of step S100 in FIG. 6;
  • FIG. 8 is a detailed flowchart of one embodiment of step S14 in FIG. 2;
  • FIG. 9 is a schematic illustration of a process for obtaining an endocardial contour in one embodiment of FIG. 8;
  • FIG. 10 is a detailed flowchart of another embodiment of step S14 in FIG. 2;
  • FIG. 11 is a schematic illustration of a process for obtaining an endocardial contour in one embodiment of FIG. 10;
  • FIG. 13 is a schematic structural diagram of an embodiment of a system for identifying the contour of a target of interest in an ultrasound image according to the present invention;
  • FIG. 14 is a schematic structural diagram of an embodiment of the correspondence processing module in FIG. 13;
  • FIG. 15 is a schematic structural diagram of the shape segmentation model building block in FIG. 13;
  • FIG. 16 is a schematic structural diagram of the section type identification module in FIG. 13;
  • FIG. 17 is a schematic structural diagram of the normalization processing module in FIG. 15;
  • FIG. 18 is a schematic structural view of an embodiment of the contour obtaining module in FIG. 13;
  • FIG. 19 is a schematic structural view of another embodiment of the contour obtaining module of FIG.
  • Due to noise, artifacts, and the structural complexity of some anatomical tissues, the corresponding section images fall into different section types.
  • For cardiac ultrasound images there are usually several section types, such as the apical two-chamber view and the apical four-chamber view.
  • Across these section types, the position and shape of the left ventricular endocardium differ considerably. Therefore, for cardiac ultrasound image data with multiple section types, using a fixed shape model for the image segmentation operation when automatically identifying the endocardial region inevitably introduces image extraction errors: automatic recognition cannot distinguish the differences in endocardial position and shape among the various cardiac sections, so segmentation and recognition of the endocardium become inaccurate, resulting in errors in the measurement results.
  • In related technology, the cardiac cycle is located through the ECG signal, different cardiac phases are recognized through the ECG signal, and image segmentation technology is then used to measure the ventricular end-systolic and end-diastolic volumes and calculate the ventricular ejection fraction.
  • The measurement of the ventricular end-systolic and end-diastolic volumes can be performed by segmenting and identifying the endocardium on the ultrasound images at ventricular end-systole and end-diastole, and then calculating the ventricular volume using a volume calculation formula such as the Simpson method.
  • FIG. 1 provides a schematic structural diagram of an ultrasound image acquisition device.
  • The apparatus for performing ultrasound imaging on a target area includes: a probe 1, a transmitting circuit 2, a transmitting/receiving selection switch 3, a receiving circuit 4, a beamforming module 5, a signal processing module 6, an image processing module 7, and a display 8.
  • The transmitting circuit 2 sends delay-focused ultrasonic pulses with a certain amplitude and polarity to the probe 1 through the transmitting/receiving selection switch 3.
  • Excited by the ultrasonic pulses, the probe 1 transmits ultrasonic waves to a target area of the body under examination (not shown in the figure, e.g., cardiac tissue), receives, after a certain delay, the ultrasonic echoes carrying tissue information reflected from the target area, and converts these ultrasonic echoes back into electrical signals.
  • The receiving circuit 4 receives the electrical signals generated by the probe 1 to obtain ultrasonic echo signals, and sends them to the beamforming module 5.
  • The beamforming module 5 performs processing such as focus delay, weighting, and channel summation on the ultrasonic echo signals, and then sends them to the signal processing module 6 for related signal processing.
  • The ultrasonic echo signals processed by the signal processing module 6 are sent to the image processing module 7.
  • The image processing module 7 processes the signals differently according to the imaging mode required by the user to obtain image data of different modes, and then forms ultrasound images of different modes, such as B images, C images, and D images, by logarithmic compression, dynamic range adjustment, digital scan conversion, and the like.
  • The ultrasound image generated by the image processing module 7 is sent to the display 8 for display.
  • Ultrasound images of the end-systolic and end-diastolic phases can be displayed simultaneously on the display interface, with the endocardial contour outlined on the ultrasound image.
  • The simultaneously displayed end-systolic and end-diastolic ultrasound images may be standard ultrasound section images (e.g., apical two-chamber, apical four-chamber) or ultrasound section images corresponding to any section selected by the user.
  • The system shown in FIG. 1 further includes an operation control module 9, through which the device user can input control commands on the display interface, for example, marking contour corrections on the ultrasound image, annotating markup text, and performing operation commands such as mode switching.
  • An embodiment of the invention provides a method and system for identifying the contour of a region of interest in an ultrasound image.
  • The section type of the target object, such as an anatomical structure with at least one chamber (e.g., the heart or the liver), is identified first.
  • A corresponding shape segmentation model is then used for each section type to achieve segmentation and recognition of the contour of the region of interest (e.g., the endocardium).
  • The method and system can thereby increase the accuracy of segmenting the contour of the region of interest.
  • FIG. 2 is a schematic flowchart of a method for identifying the contour of a region of interest in an ultrasound image according to the present invention. As shown in the figure, the method includes:
  • Step S10: identifying the section type of the target object in the ultrasound image.
  • The target object may be cardiac tissue in the ultrasound image, or an anatomical tissue structure such as liver tissue, gallbladder tissue, or stomach tissue.
  • The section type includes standard sections of the target object in medical anatomy or ultrasound imaging; for example, section types for cardiac tissue include, but are not limited to, the four-chamber view and the two-chamber view.
  • The section type is not limited to standard sections and may also include custom section types.
  • A custom section type may be an ultrasonic section image obtained by the user cutting the target object in an arbitrary direction.
  • The ultrasound image here can be obtained using, but is not limited to, the system shown in FIG. 1 above.
  • Step S12: selecting a corresponding shape segmentation model according to the section type.
  • A pre-stored shape segmentation model corresponding to the identified section type is extracted.
  • Shape segmentation models for the same region of interest may be pre-established and stored in one-to-one correspondence with the section types.
  • For example, for a cardiac tissue target, two shape segmentation models, corresponding to the four-chamber and two-chamber standard sections of the ventricle, are stored in the system; for a stomach tissue target, an elliptical segmentation model corresponding to the section along the long axis of the stomach cavity and a circular segmentation model corresponding to the radial section of the stomach are stored.
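  • The stored one-to-one correspondence of step S12 can be sketched as a simple lookup (an illustration only; the section-type keys and model identifiers are hypothetical placeholders, not names used by the patent):

```python
# Hypothetical pre-stored correspondence between section types and
# shape segmentation models, as described for step S12.
SHAPE_MODELS = {
    "apical_four_chamber": "asm_endocardium_a4c",
    "apical_two_chamber": "asm_endocardium_a2c",
    "stomach_long_axis": "ellipse_model",
    "stomach_radial": "circle_model",
}

def select_shape_model(section_type):
    """Extract the pre-stored model for the identified section type."""
    if section_type not in SHAPE_MODELS:
        raise ValueError(f"no shape model stored for section type {section_type!r}")
    return SHAPE_MODELS[section_type]

model = select_shape_model("stomach_radial")
```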
  • Step S14: performing segmentation processing on the region of interest in the ultrasound image according to the selected shape segmentation model, to obtain the contour of the region of interest in the ultrasound image.
  • The region of interest may be the endocardium in the ultrasound image.
  • Step S16: displaying the contour in the ultrasound image and marking the position and shape of the contour. Further, the current section type can also be displayed on the display interface or on the ultrasound image.
  • The method further includes the following step: performing a validity check on the obtained contour, prompting the user when the contour is incorrect, and allowing the user to intervene in the image segmentation process, thereby improving the accuracy of image segmentation and enhancing the user experience of the device.
  • At least one of the following methods may be used to check the validity of the obtained contour:
  • checking the validity of the contour based on the positional relationship between the region of interest and other tissues in the anatomical structure: for example, in a cardiac ultrasound image, whether the segmented endocardial contour position and the position of the ventricle, or the segmented endocardial position and the position of the mitral or tricuspid valve, satisfy the correct anatomical relationship;
  • checking the validity of the contour based on parameter indices of the region of interest in the anatomical structure: for example, in a cardiac ultrasound image, whether the volume of the chamber enclosed by the endocardial contour lies within the range of normal physiological indicators; if the range is exceeded, the segmentation is judged invalid;
  • checking the validity of the contour based on a contour variability index obtained when segmenting the region of interest in the ultrasound image;
  • the contour variability index here may compare the segmentation result against the translation coefficient, rotation coefficient, scaling factor, and feature component weighting coefficients of the shape segmentation model, and judge whether each index falls within a preset threshold interval: if it does, the current segmentation result is judged valid; if it exceeds the interval, the current segmentation result is judged invalid;
  • when the same shape segmentation model is used to segment the region of interest in multiple frames of ultrasound images of the same section type, checking the validity of the contour based on a consistency judgment across the segmentation results of the multiple frames;
  • abrupt shape changes can be detected from changes in contour point positions and from changes in parameters such as contour area and perimeter. When the current segmentation result is judged invalid, the user may be prompted to check the result.
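  • The threshold-interval check on the fitted model coefficients can be sketched as follows (a hedged illustration; the symmetric limits and coefficient values are placeholders, not thresholds specified by the patent):

```python
import numpy as np

# A fit is judged valid only if every coefficient (translation, rotation,
# scaling, and feature-component weights) lies inside its allowed
# interval [-limit_i, +limit_i].
def segmentation_is_valid(coefficients, limits):
    c = np.abs(np.asarray(coefficients, dtype=float))
    return bool(np.all(c <= np.asarray(limits, dtype=float)))

ok = segmentation_is_valid([0.5, -1.0, 0.2], limits=[3.0, 2.0, 1.0])
bad = segmentation_is_valid([0.5, -4.0, 0.2], limits=[3.0, 2.0, 1.0])
```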
  • The above method further includes the following step: obtaining the correspondence between section types and shape segmentation models.
  • A correspondence is established between the different section types of the same target object (for example, cardiac tissue) and the shape segmentation models of the same region of interest (for example, the cardiac endocardium).
  • When the corresponding shape segmentation model is selected according to the section type, it is obtained from this correspondence.
  • FIG. 3 shows a specific flowchart of the process of pre-establishing the correspondence between section types and shape segmentation models in one embodiment of the present invention.
  • The process includes the following steps:
  • Step S20: marking the contour curve of the region of interest on each training image in a training image set of the same target object;
  • Step S22: discretizing the contour curve of the region of interest into landmark points describing the shape of the region of interest;
  • Step S24: obtaining, on each training image, the landmark points describing the shape of the region of interest;
  • Step S26: constructing different shape segmentation models for different section types according to the corresponding landmark points and section types on each training image.
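  • The discretization of step S22 can be sketched as resampling the marked contour polyline into a fixed number of evenly spaced landmark points (a minimal NumPy illustration; the function name and the toy contour are hypothetical):

```python
import numpy as np

def resample_contour(points, n_landmarks):
    """Resample a hand-marked contour polyline into n_landmarks points
    spaced evenly along its arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])             # cumulative arc length
    t = np.linspace(0.0, s[-1], n_landmarks)                # even spacing
    x = np.interp(t, s, points[:, 0])
    y = np.interp(t, s, points[:, 1])
    return np.stack([x, y], axis=1)

# An L-shaped polyline of total length 20, resampled to 5 landmarks.
landmarks = resample_contour([[0, 0], [10, 0], [10, 10]], n_landmarks=5)
```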
  • Step S26 can be implemented by the following two methods:
  • In the first method, all the landmark points are arranged in order in the same coordinate system to obtain a point distribution model of the contour curve of the region of interest under the current section type; feature analysis and feature extraction are performed on the point distribution model to obtain an average shape and feature components describing the contour of the region of interest, and a shape segmentation model for image segmentation is obtained from the average shape and the feature components, thereby establishing the correspondence between the section type and the shape segmentation model of the region of interest.
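  • The point-distribution-model step can be sketched as a principal component analysis over the stacked landmark vectors (a hedged NumPy illustration; the random training shapes and array sizes are placeholders, not patent data):

```python
import numpy as np

def build_point_distribution_model(shapes, n_components):
    """shapes: (n_samples, n_landmarks*2) aligned landmark vectors.
    Returns the average shape, the leading feature components
    (eigenvectors of the covariance), and their eigenvalues."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)                 # ascending order
    order = np.argsort(vals)[::-1][:n_components]    # keep the largest
    return mean, vecs[:, order], vals[order]

rng = np.random.default_rng(0)
shapes = rng.normal(size=(30, 8))    # 30 toy training shapes, 4 landmarks each
mean, P, lam = build_point_distribution_model(shapes, n_components=3)
```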
  • In the second method, the corresponding landmark points on each training image in the training image sets of the different section types are input into a plurality of machine learning models for sample training, and the network parameters of these machine learning models are obtained; based on the learned network parameters, the correspondence between the different section types and the machine learning models is acquired, thereby establishing the correspondence between the section type and the shape segmentation model of the region of interest.
  • The shape segmentation model may therefore be either: an average shape and feature components describing the contour of the region of interest, obtained by feature analysis and feature extraction on the contour curves of section images of known section type; or at least one machine learning model, such as a convolutional neural network (CNN), obtained by a deep learning method from a training image set of known section type and the contour curves of the region of interest.
  • One section type can correspond to one machine learning model (such as one convolutional neural network).
  • FIG. 4 is a schematic diagram of the first process of constructing a shape segmentation model adopted in FIG. 3.
  • The shape and position of the endocardium are first manually labeled by a physician or professional on the training image (401) (402), and the manually calibrated endocardial curve is then discretized into a number of landmark points (403) describing the shape of the endocardium. These operations are performed on every image in the training image set to obtain landmark points describing the position and shape of the endocardium on each image.
  • Different endocardial models are constructed for different section types; specifically, the shape segmentation model can be constructed using an Active Shape Model (ASM).
  • A point distribution model of the endocardial curve for the current cardiac section type is obtained by arranging all the landmark points in order in the same coordinate system (404).
  • Feature analysis and feature extraction on the point distribution model yield an average shape and several feature components describing the endocardial shape. By weighting and summing the average shape and the feature components, and then scaling, translating, and rotating the result, an endocardial position and shape can be generated at any position in the image; that is, a shape segmentation model is formed.
  • The shape segmentation model can thus be described by an average shape, several feature components, and the weighting coefficients of those components together with translation coefficients, rotation coefficients, and scaling factors.
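  • Generating a shape instance from these model parameters can be sketched as x = s · R(θ) · (x̄ + P·b) + t (a hedged illustration; the unit-square mean shape and the coefficient values are placeholders):

```python
import numpy as np

def generate_shape(mean, P, b, scale=1.0, theta=0.0, tx=0.0, ty=0.0):
    """Instance of the shape model: weighted sum of the average shape
    and feature components, then scaled, rotated, and translated."""
    pts = (mean + P @ b).reshape(-1, 2)        # (n_landmarks, 2)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return scale * pts @ R.T + np.array([tx, ty])

# Toy model: the average shape is a unit square, with one (zeroed)
# feature component; scale by 2 and translate to (5, 5).
mean = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
P = np.zeros((8, 1))
b = np.zeros(1)
square = generate_shape(mean, P, b, scale=2.0, tx=5.0, ty=5.0)
```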
  • FIG. 5 is a schematic diagram of the second process of constructing a shape segmentation model adopted in FIG. 3.
  • After the landmark points are obtained as in FIG. 3, the model can be implemented with a deep learning method such as a Convolutional Neural Network (CNN).
  • A mapping between the input image and the endocardial contour can be established by constructing a multi-layer neural network, thereby establishing the correspondence between the section type and the shape segmentation model of the region of interest (the endocardial contour).
  • The input layer is a single-frame two-dimensional heart image, and the output layer is the two-dimensional coordinates of the points of the endocardial contour corresponding to that image.
  • There may be multiple intermediate layers, each of which accepts the output of the previous layer's nodes and serves as the input of the next layer's nodes.
  • The intermediate layers can be of different types. The most common is the fully connected layer, in which each node of one layer is connected to all nodes of the next layer.
  • A convolutional neural network additionally uses convolutional layers. Mainly applied in the field of image recognition, it is characterized by layer-by-layer convolutional feature extraction of the image using convolution kernels. After the network structure is determined, a set of training samples (comprising multiple two-dimensional cardiac images and their corresponding endocardial contours) can be used to train the network and obtain its parameters (i.e., the correspondence). After training, a new two-dimensional heart image input into the neural network model yields as output the coordinates of the endocardial contour points corresponding to that frame.
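  • The layer-by-layer convolutional feature extraction that the network stacks can be illustrated with a single valid-mode 2D convolution (a pure-NumPy sketch of the primitive only, not a trained network; the edge kernel and toy image are placeholders):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image and
    sum the elementwise products at each position."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal difference kernel responds strongly at the dark/bright
# boundary of this toy image (left half 0, right half 1).
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edge = conv2d(img, np.array([[-1.0, 1.0]]))
```

Stacking such filtered responses, with nonlinearities between layers, is what lets the network extract progressively higher-level features of the endocardial boundary.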
  • The step of identifying the section type of the target object in the ultrasound image and selecting the corresponding shape segmentation model according to the section type may further include:
  • receiving a selection instruction from the user for choosing an image segmentation mode, to determine the image segmentation mode;
  • before the step of selecting the corresponding shape segmentation model according to the section type and the image segmentation mode, establishing a mapping relationship among section types, image segmentation modes, and shape segmentation models;
  • step D: according to the section type and the image segmentation mode, obtaining the corresponding shape segmentation model based on the mapping relationship.
  • The order in which the above steps are described is not exclusive, and the present invention is not limited to this order.
  • The image segmentation modes mentioned in this embodiment may be the two modes described above:
  • at least one machine learning model obtained by a deep learning method is used as the shape segmentation model; in one example, the machine learning model may be a convolutional neural network (CNN), and one section type can correspond to one machine learning model;
  • alternatively, different image segmentation modes can be obtained by adjusting basic image parameters, thereby obtaining shape segmentation models of corresponding precision or accuracy.
  • This mode of operation can be provided to the user as a research tool for image processing, to distinguish the effects and image quality of different back-end image processing modes.
  • Step S10 further includes:
  • Step S100: acquiring a single frame of the ultrasound image to be segmented, or multiple frames of the ultrasound image to be segmented from a cardiac cine file, and then normalizing the ultrasound image to be segmented;
  • Step S101: mapping the normalized ultrasound image to be segmented into a feature space, where the feature space is constructed by extracting features from the training set images;
  • Step S102: comparing the projection of the ultrasound image to be segmented in the feature space with the projections of the training images in the feature space, and determining the section type of the ultrasound image to be segmented.
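  • Steps S101 and S102 can be sketched as projecting the image into the training feature space and taking the label of the nearest training projection (a hedged nearest-neighbor illustration; the tiny feature space, projections, and labels are placeholders, not patent data):

```python
import numpy as np

def classify_section(image_vec, mean, components, train_proj, train_labels):
    """Project the normalized image vector into the feature space and
    return the section label of the closest training projection."""
    q = components.T @ (np.asarray(image_vec, dtype=float) - mean)
    d = np.linalg.norm(train_proj - q, axis=1)
    return train_labels[int(np.argmin(d))]

# Toy feature space: keep the first two of four pixel features.
mean = np.zeros(4)
components = np.eye(4)[:, :2]
train_proj = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = ["apical_two_chamber", "apical_four_chamber"]
guess = classify_section([4.5, 5.2, 9.9, 9.9], mean, components, train_proj, labels)
```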
  • Step S100 further includes:
  • Step S103: identifying the position of a specific target in the ultrasound image to be segmented; for example, for a cardiac ultrasound image, the interventricular septum may be taken as the specific target;
  • Step S104: rotating the ultrasound image to be segmented according to the position of the specific target, so that the long-axis direction of the main chamber of the region of interest in the image is vertical; here the left ventricle serves as the main chamber of the region of interest;
  • Step S105: translating the ultrasound image to be segmented, adjusting the position of the main chamber to the center of the image;
  • Step S106: unifying the gray-level mean and variance of the ultrasound image to be segmented.
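  • The gray-level unification of step S106 can be sketched as a mean/variance normalization (a minimal illustration; the target values and the toy 2x2 image are hypothetical):

```python
import numpy as np

def normalize_gray(image, target_mean=0.0, target_std=1.0):
    """Shift and rescale the image so its gray-level mean and standard
    deviation match fixed target values."""
    img = np.asarray(image, dtype=float)
    std = img.std()
    if std == 0:
        return np.full_like(img, target_mean)   # flat image: set mean only
    return (img - img.mean()) / std * target_std + target_mean

out = normalize_gray([[0.0, 2.0], [4.0, 6.0]], target_mean=10.0, target_std=2.0)
```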
  • After the section type of the target object in the ultrasound image is identified in step S10, the method may further include:
  • receiving an instruction indicating that the user considers the section type of the ultrasound image to be segmented to have been determined incorrectly, and switching to a mode in which the user can manually modify the section type.
  • A detailed embodiment of step S14 of FIG. 2 of the present invention is as follows.
  • The step of segmenting the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest further includes:
  • Step S140: according to the selected shape segmentation model, generating an initial curve describing the position and shape of the contour of the region of interest in the ultrasound image to be segmented;
  • Step S141: searching, in the vicinity of the initial curve, for the points where the gray value of the image changes the most, and calculating the translation coefficient, rotation coefficient, scaling factor, and feature component weighting coefficients of the shape segmentation model according to the positional relationship between these points and the initial curve;
  • Step S142: updating the shape and position of the initial curve according to the calculated coefficients to obtain a new curve;
  • Step S143: determining whether the degree of matching between the new curve and the points of greatest gray-value change near the curve satisfies a preset condition. If the preset condition is met, the new curve is determined in step S144 to be the contour of the region of interest; if not, the new curve is taken as the initial curve in step S145, the flow returns to step S141, and the subsequent steps are repeated. The preset condition may be, for example, a number of iterations or an error threshold.
  • FIG. 9 shows a schematic diagram of the process of obtaining the endocardial contour in one embodiment. First, an initial curve (603) describing the position and shape of the endocardium is generated in the ultrasound image (601). Then, near the initial curve (603), several points (602) where the image gray-level value changes the most are found, and the translation coefficient, rotation coefficient, scaling coefficient, and weighting coefficients of the feature components in the shape segmentation model are calculated from the positional relationship between those points and the initial curve. The shape and position of the initial curve are then updated according to the calculated coefficients, yielding a new curve (605) in the new ultrasound image (604). Points (606) of maximum gray-level change are then found near the new curve, the coefficients are recalculated from the positional relationship between those points and the new curve, and the shape and position of the curve (605) are updated accordingly. These steps are repeated until the degree of matching between the curve (608) generated in the final ultrasound image (607) and the nearby points of maximum gray-level change (609) satisfies the preset condition; the resulting final curve is the segmented shape and position of the endocardium.
  • In some examples, finding the points of maximum image gray-level change near the curve may mean searching along several normal directions of the curve for the points of greatest gray-level change. The translation, rotation, and scaling coefficients and the weighting coefficients of the feature components may be calculated as follows: based on the positions of the points of maximum gray-level change near the curve, the average shape of the shape segmentation model is placed in the image so that the sum of distances between those points and the average shape is minimized; the translation, rotation, and scaling coefficients can be calculated from the positional and size relationships between the average shape and the curve, and the weighting coefficients of the feature components can be calculated from the shape difference between the average shape and the shape formed by the points of maximum gray-level change near the curve. Substituting these coefficients into the shape segmentation model yields a new curve, which may differ from the previous curve in position, size, and shape.
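A minimal sketch of the update just described, assuming shapes are lists of (x, y) tuples and the feature components are orthonormal flattened vectors. Rotation is omitted to keep the example short, and all names and simplifications here are illustrative rather than the patent's implementation.

```python
# Hedged sketch of one shape-model update: align the mean shape to the
# strongest-gradient target points (translation + scale), then project the
# residual onto orthonormal feature components to get their weighting
# coefficients. Rotation is omitted for brevity.

def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def rms_radius(pts, c):
    return (sum((x - c[0]) ** 2 + (y - c[1]) ** 2 for x, y in pts) / len(pts)) ** 0.5

def fit_iteration(mean_shape, components, target_pts):
    """One update step; returns (translation, scale, weights, new_curve)."""
    cm, ct = centroid(mean_shape), centroid(target_pts)
    tx, ty = ct[0] - cm[0], ct[1] - cm[1]                    # translation coefficients
    moved = [(x + tx, y + ty) for x, y in mean_shape]
    s = rms_radius(target_pts, ct) / rms_radius(moved, ct)   # scaling coefficient
    aligned = [(ct[0] + s * (x - ct[0]), ct[1] + s * (y - ct[1])) for x, y in moved]
    flat_aligned = [v for p in aligned for v in p]
    flat_target = [v for p in target_pts for v in p]
    residual = [t - a for t, a in zip(flat_target, flat_aligned)]
    # weighting coefficients: projections of the residual onto each component
    weights = [sum(r * c for r, c in zip(residual, comp)) for comp in components]
    new_flat = [a + sum(w * comp[i] for w, comp in zip(weights, components))
                for i, a in enumerate(flat_aligned)]
    new_curve = list(zip(new_flat[0::2], new_flat[1::2]))
    return (tx, ty), s, weights, new_curve
```

Iterating this step until the preset condition holds mirrors the loop of steps S141 to S143.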
  • In another embodiment, step S14 further includes:
  • Step S146: input the ultrasound image to a convolutional neural network (CNN), convolve the ultrasound image to be segmented layer by layer with convolution kernels, and extract features;
  • Step S147: estimate the extracted features according to the selected shape segmentation model to obtain the shape and position of the contour of the region of interest in the ultrasound image; see FIG. 11, which finally outputs an ultrasound image marked with the endocardial contour.
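A toy sketch of this convolve-then-regress pipeline (steps S146 and S147), with a hand-made kernel and output weights standing in for a trained network; none of these values come from the patent.

```python
# Minimal sketch of the CNN variant: convolve the image layer by layer to
# extract features, then map the flattened features to 2-D landmark
# coordinates with a fully connected layer. Kernel and weights are toy values.

def conv2d(img, kernel):
    """'Valid' 2-D convolution (no padding, stride 1) on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + u][j + v] * kernel[u][v]
                           for u in range(kh) for v in range(kw)))
        out.append(row)
    return out

def relu(fm):
    return [[max(0.0, x) for x in row] for row in fm]

def predict_landmarks(img, kernel, fc_weights):
    """Feature extraction + regression: returns [(x0, y0), (x1, y1), ...]."""
    features = [x for row in relu(conv2d(img, kernel)) for x in row]
    coords = [sum(w * f for w, f in zip(wrow, features)) for wrow in fc_weights]
    return list(zip(coords[0::2], coords[1::2]))
```

A real network would stack several convolutional and fully connected layers and learn all of these parameters from training samples.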
  • FIG. 12 shows one embodiment of the present invention, in which steps S111 and S112 and steps S151 and S152 are inserted into the process of steps S10 to S16 of FIG. 2 described above.
  • After the slice type of the target object in the ultrasound image is identified (step S10), step S111 is performed to display the recognition result of the slice type for confirmation by the user. If the user finds it wrong, step S112 is performed: the system switches to manual input mode, the user corrects the current slice type, and the process proceeds to selecting the shape segmentation model corresponding to the slice type (step S12). If the result is confirmed as correct, the process proceeds directly to step S12.
  • After step S14 is performed (i.e., the region of interest in the ultrasound image is segmented according to the selected shape segmentation model to obtain the contour of the region of interest in the ultrasound image), step S151 is performed: a validity check is carried out on the obtained contour, and the user is prompted as to whether the contour is incorrect. If the contour segmentation is invalid, step S152 is performed: the system switches to manual segmentation mode, and the user manually inputs the contour of the region of interest on the ultrasound image; the contour is then displayed in the ultrasound image and its position and shape are marked (step S16). Otherwise, the process proceeds directly to step S16.
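The confirm-or-fall-back control flow of steps S151 and S152 can be sketched as below; `segment`, `is_valid_contour`, and `prompt_manual_contour` are hypothetical callables standing in for the modules described here, not the patent's API.

```python
# Sketch of the fallback flow: validate the automatic contour and drop to a
# manual mode when the check fails. All callables are placeholders.

def obtain_contour(image, model, segment, is_valid_contour, prompt_manual_contour):
    contour = segment(image, model)        # automatic segmentation (step S14)
    if is_valid_contour(contour):          # validity check (step S151)
        return contour, "automatic"
    # invalid: switch to manual segmentation mode (step S152)
    return prompt_manual_contour(image), "manual"
```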
  • the present invention also provides a system for identifying a contour of an object of interest in an ultrasound image.
  • a schematic structural diagram of one embodiment of a system for identifying a contour of an object of interest in an ultrasound image is provided.
  • the system includes:
  • a slice type identification module 11, configured to identify the slice type of the target object in the ultrasound image, the slice type including standard slices of the target object in medical anatomy or ultrasound imaging;
  • a shape segmentation model selection module 12, configured to select a corresponding shape segmentation model according to the slice type;
  • a contour obtaining module 13, configured to segment the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest in the ultrasound image; and
  • a display marking module 14, configured to display the contour in the ultrasound image, mark the position and shape of the contour, and display the slice type.
  • the above system also includes:
  • the contour checking module 16 is configured to perform a validity check on the obtained contour to prompt the user whether the contour is incorrect;
  • the manual mode switching module 17 is configured to switch to the manual split image mode when the contour check module prompts the user that the contour is incorrect.
  • Specifically, the contour checking module performs the validity check on the obtained contour based on at least one of the following factors:
  • checking the validity of the contour based on the positional relationship between the region of interest and other tissues in the anatomical structure;
  • checking the validity of the contour based on parameter indices of the region of interest in the anatomical structure;
  • checking the validity of the contour based on contour variability indices of the region of interest during the segmentation of the region of interest in the ultrasound image; and
  • segmenting the region of interest in multiple frames of ultrasound images of the same slice type with the same shape segmentation model and checking the validity of the contour based on a consistency judgment of the segmentation results across the frames.
  • the above system also includes:
  • Corresponding relationship processing module 15 is configured to acquire a correspondence between different slice types of the same target object and a shape segmentation model corresponding to the same region of interest, and store the relationship;
  • the shape segmentation model selection module 12 obtains the corresponding shape segmentation model based on the above-described correspondence relationship according to the above-described slice type.
  • The correspondence processing module 15 includes:
  • a training marking module, configured to mark the contour curve of the region of interest on each training image in a training image set of the same target object;
  • a discretization module 150, configured to discretize the contour curve of the region of interest into landmark points describing the shape of the region of interest;
  • a landmark point obtaining module 152, configured to obtain the landmark points describing the shape of the region of interest on each training image; and
  • a shape segmentation model building module 153, configured to build different shape segmentation models for different slice types according to the landmark points on each training image and the corresponding slice type.
  • The shape segmentation model building module 153 includes:
  • a first building module 154, configured to arrange all the landmark points in order in the same coordinate system to obtain a point distribution model of the contour curve of the region of interest for the current slice type; perform feature analysis and feature extraction on the point distribution model to obtain an average shape and feature components describing the contour of the region of interest; and obtain a shape segmentation model for image segmentation from the average shape and feature components, thereby acquiring the correspondence between slice types and shape segmentation models of the region of interest; or
  • a second building module 152, configured to input, based on a deep learning method, the landmark points on each training image of the training image sets corresponding to different slice types into multiple machine learning models for sample training, obtain the network parameters of the machine learning models, and acquire the correspondence between different slice types and the machine learning models from the known network parameters, thereby obtaining the correspondence between slice types and shape segmentation models of the region of interest.
  • The slice type identification module 11 includes:
  • a normalization processing module 110, configured to acquire a single frame of the ultrasound image to be segmented, or several frames of ultrasound images to be segmented from a cardiac cine file, and then normalize the ultrasound images to be segmented;
  • a feature space mapping module 111, configured to map the normalized ultrasound image to be segmented into a feature space, the feature space being constructed by extracting features from the training set images;
  • a comparison determining module 112, configured to compare the projection of the ultrasound image to be segmented in the feature space with the projections of the training images in the feature space and determine the slice type of the ultrasound image to be segmented;
  • a recognition result confirmation module 113, configured to display the recognition result of the slice type for confirmation by the user; and
  • a switching module 114, configured to switch to a mode in which the user can modify the slice type upon receiving an instruction indicating that the user considers the identified slice type of the ultrasound image to be segmented to be wrong.
  • FIG. 17 a specific structural diagram of the normalization processing module 110 of FIG. 16 is shown; wherein the normalization processing module 110 includes:
  • a location identifying module 115 configured to identify a location of a specific target in the ultrasound image to be segmented
  • the rotation processing module 116 is configured to rotate the ultrasound image to be segmented according to the position of the specific target, so that the long axis direction of the main cavity of the region of interest in the ultrasound image to be segmented is vertical;
  • a translation processing module 117 configured to translate the ultrasound image to be segmented, and adjust the position of the main chamber in the ultrasound image to be segmented to a center of the image;
  • a unified processing module 118, configured to unify the gray-level mean and variance of the ultrasound image to be segmented.
  • the contour obtaining module 13 includes:
  • the initial curve generating module 130 is configured to generate an initial curve describing the position and shape of the contour of the region of interest in the ultrasound image to be segmented according to the selected shape segmentation model;
  • a feature point finding module 131, configured to search near the initial curve for at least two points where the image gray-level value changes the most;
  • a weighting coefficient calculation unit 132, configured to calculate the translation coefficient, rotation coefficient, scaling coefficient, and weighting coefficients of the feature components in the endocardial model according to the positional relationship between the points of maximum gray-level change found by the feature point finding module 131 and the initial curve;
  • a curve adjusting unit 134, configured to update the shape and position of the initial curve according to the coefficients calculated by the weighting coefficient calculation unit to obtain a new curve; and
  • a curve determination processing unit 135, configured to determine whether the degree of matching between the new curve and the points of maximum gray-level change near the curve satisfies a preset condition; if it does, the new curve is determined to be the contour of the region of interest in the ultrasound image; if it does not, the new curve is taken as the initial curve and the feature point finding module 131 is notified.
  • the contour obtaining module 13 includes:
  • a convolution processing unit 136 configured to input the above-mentioned ultrasound image to the convolutional neural network CNN, and use a convolution kernel to perform layer-by-layer convolution of the ultrasound image to be segmented to extract features;
  • the estimation processing unit 137 is configured to estimate the feature extracted by the convolution processing unit according to the selected shape segmentation model, and obtain the shape and position of the contour of the region of interest in the ultrasound image.
  • a system for identifying the contour of a region of interest in an ultrasound image, comprising:
  • a probe;
  • a transmitting circuit for transmitting an ultrasonic beam to the target object;
  • a receiving circuit and a beamforming module for obtaining ultrasonic echo signals;
  • an image processing module configured to obtain an ultrasound image from the ultrasonic echo signals, identify the slice type of the target object in the ultrasound image, select a corresponding shape segmentation model according to the slice type, and segment the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest in the ultrasound image; and
  • a display for displaying the ultrasound image and the contour, marking the position and shape of the contour, and displaying the slice type.
  • the image processing module performs the various steps in FIG. 2, and the details are not repeated here.
  • the image processing module mentioned herein may be constructed by one processor or multiple processors.
  • the target object is a heart and the region of interest is a cardiac endocardium.
  • the system further includes: a storage module, configured to store a correspondence between different slice types of the same target object and a shape segmentation model corresponding to the same region of interest.
  • the storage module herein may employ a memory chip or a collection of multiple memory chips.
  • In one embodiment, the display prompts the user as to whether the contour is incorrect, and the system further includes an operation control module for receiving user-input control commands;
  • when the display indicates that the contour is incorrect, the image processing module switches to manual segmentation mode, and the user can draw the contour of the region of interest on the ultrasound image through the operation control module.
  • In one embodiment, the display shows the recognition result of the slice type for confirmation by the user, and the system further includes:
  • an operation control module for receiving user-input control commands;
  • when the control command input by the user indicates that the recognition result is wrong, the image processing module switches to manual input mode, modifies the current slice type according to the user's command, and displays it.
  • In one embodiment, the shape segmentation model is: an average shape and feature components describing the contour of the region of interest, obtained by feature analysis and feature extraction of contour curves of the region of interest in slice images of known slice types; or at least one machine learning model obtained by a deep learning method from a training image set with known slice types and region-of-interest contour curves.
  • In one embodiment, after segmenting the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest in the ultrasound image, the image processing module further performs a validity check on the obtained contour and prompts the user on the display as to whether the contour is incorrect.
  • The validity check performed by the image processing module on the obtained contour includes at least one of the following:
  • checking the validity of the contour based on the positional relationship between the region of interest and other tissues in the anatomical structure;
  • checking the validity of the contour based on parameter indices of the region of interest in the anatomical structure;
  • checking the validity of the contour based on contour variability indices of the region of interest during the segmentation; and
  • segmenting the region of interest in multiple frames of ultrasound images of the same slice type with the same shape segmentation model and checking the validity of the contour based on a consistency judgment of the segmentation results across the frames.
  • In one embodiment, the image processing module identifies the slice type of the target object in the ultrasound image by: acquiring a single frame of the ultrasound image to be segmented, or several frames of ultrasound images to be segmented from a cardiac cine file, and normalizing them; mapping the normalized ultrasound image to be segmented into a feature space constructed by extracting features from the training set images; and comparing the projection of the ultrasound image to be segmented in the feature space with the projections of the training images to determine the slice type of the ultrasound image to be segmented.
  • In the method and system for identifying the contour of a region of interest in an ultrasound image provided by the present invention, the slice type of the target object (e.g., the heart) in the ultrasound image is first identified and classified; a corresponding shape segmentation model is then selected for each slice type, and the region of interest (e.g., the endocardium) in the ultrasound image is automatically segmented to obtain the contour of the region of interest in the ultrasound image.
  • the method and system can effectively segment and recognize the difference in shape and position of the endocardium in different cardiac slice images, thereby improving the accuracy of segmentation.
  • In addition, when determining the slice type and when segmenting the region of interest, the user can be asked to confirm the result and switch to a manual modification mode, which can further improve the accuracy of segmentation.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).


Abstract

A method for identifying the contour of a region of interest in an ultrasound image, and a corresponding system. The method includes: identifying the slice type of a target object in the ultrasound image (S10); selecting a corresponding shape segmentation model according to the slice type (S12); and segmenting the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest (S14). Implementing the method and system improves the accuracy of segmenting the contour of the region of interest (the endocardium).

Description

Method and System for Identifying the Contour of a Region of Interest in an Ultrasound Image. Technical Field
The present invention relates to the field of medical image recognition, and in particular to a method and system for identifying the contour of a region of interest in an ultrasound image.
Background Art
Ejection fraction (EF) refers to the percentage of stroke volume relative to the end-diastolic volume of the ventricle, and is one of the important clinical indicators for evaluating left ventricular function. The ejection fraction is related to myocardial contractility: the stronger the myocardial contractility, the larger the stroke volume and the larger the ejection fraction. Under normal circumstances, the left ventricular ejection fraction is greater than 50%.
The left ventricular ejection fraction can be measured in many ways, most of which are based on medical imaging equipment. A typical method is as follows: cardiac images are first acquired with a medical imaging device such as CT, MRI, or ultrasound; after images covering a complete cardiac cycle have been obtained, the left ventricular endocardium is segmented and identified on each frame, and the ventricular volume is then calculated from the shape of the endocardium. The ventricular volume can be calculated in many ways, for example by constructing a ventricular volume curve to obtain the end-diastolic volume (EDV) and end-systolic volume (ESV) used to calculate the ejection fraction.
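The relationship between EDV, ESV, and EF can be sketched numerically. The disc-summation volume below is an illustrative single-plane, Simpson-style approximation with made-up radii, not a clinical formula.

```python
# Hedged sketch: ejection fraction from end-diastolic/end-systolic volumes,
# with volumes estimated by stacking circular discs along the long axis.
# The disc model and all numbers are illustrative assumptions.

import math

def simpson_volume(radii, long_axis_length):
    """Approximate chamber volume from per-disc radii measured on the contour."""
    disc_height = long_axis_length / len(radii)
    return sum(math.pi * r * r * disc_height for r in radii)

def ejection_fraction(edv, esv):
    """EF (%) = stroke volume / end-diastolic volume * 100."""
    return (edv - esv) / edv * 100.0

edv = simpson_volume([2.0, 2.5, 2.5, 2.0, 1.2], 8.0)   # end-diastole (toy radii)
esv = simpson_volume([1.5, 1.8, 1.8, 1.4, 0.8], 8.0)   # end-systole (toy radii)
ef = ejection_fraction(edv, esv)
```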
Among these medical imaging modalities, echocardiography is a non-invasive, safe diagnostic method that requires no injection of contrast agents, isotopes, or other dyes; neither patient nor physician is exposed to radiation. It is simple, repeatable, and can be performed at the bedside; through multi-plane, multi-directional ultrasound imaging, each cardiac chamber can be examined and the anatomy and function of the whole heart can be fully evaluated.
The echocardiographic modes commonly used for measuring the left ventricular ejection fraction are M-Mode and B-Mode. The M-Mode method obtains the EDV and ESV of the ventricular volume by imaging an oblique parasternal long-axis view of the left ventricle and calibrating the maximum and minimum internal diameters of the left ventricle, from which the ejection fraction is calculated. The B-Mode method images the left ventricle, identifies end-systole and end-diastole, manually calibrates the position of the endocardium, computes the EDV and ESV with the Simpson plane method, and finally calculates the ejection fraction.
Both M-Mode and B-Mode, however, have technical drawbacks. The M-Mode method depends heavily on the position of the scan line; given the anatomical variation between individual hearts, it is difficult to acquire a standard parasternal long-axis image of the left ventricle and hence a standard scan-line position for measuring the left ventricular internal diameter. B-Mode is therefore the more commonly used ejection fraction measurement method, but in B-Mode the accuracy of endocardial segmentation and identification directly determines the accuracy of the ventricular volume measurement.
Summary of the Invention
In view of the above problems, the present invention proposes a method and system for identifying the contour of a region of interest in an ultrasound image, which can improve the accuracy of the obtained contour.
As one aspect of the present invention, a method for identifying the contour of a region of interest in an ultrasound image is provided, including:
identifying the slice type of a target object in the ultrasound image;
selecting a corresponding shape segmentation model according to the slice type; and
segmenting the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest.
Correspondingly, as another aspect of the present invention, a system for identifying the contour of a target of interest in an ultrasound image is also provided, including:
a slice type identification module, configured to identify the slice type of a target object in the ultrasound image;
a shape segmentation model selection module, configured to select a corresponding shape segmentation model according to the slice type; and
a contour obtaining module, configured to segment the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest.
In one embodiment of the present invention, a system for identifying the contour of a region of interest in an ultrasound image is also provided, including:
a probe;
a transmitting circuit, configured to transmit an ultrasonic beam to the target object;
a receiving circuit and a beamforming module, configured to obtain ultrasonic echo signals;
an image processing module, configured to obtain an ultrasound image from the ultrasonic echo signals, identify the slice type of the target object in the ultrasound image, select a corresponding shape segmentation model according to the slice type, and segment the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest in the ultrasound image; and
a display, configured to display the ultrasound image and the contour, mark the position and shape of the contour, and display the slice type.
In the method and system provided by the present invention, the slice type of the target object (e.g., the heart) in the ultrasound image is first identified and classified; a corresponding shape segmentation model is then selected for each slice type, and the region of interest (e.g., the endocardium) in the ultrasound image is automatically segmented to obtain its contour. In practical applications, the method and system can effectively segment and identify the endocardium despite the differences in its shape and position across different cardiac slice images, improving segmentation accuracy.
Moreover, when the slice type is determined and when the region of interest is segmented, the user can be asked to confirm the result and switch to a manual modification mode, further improving segmentation accuracy.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of an ultrasound imaging device provided by the present invention.
FIG. 2 is a schematic flowchart of a first embodiment of a method for identifying the contour of a region of interest in an ultrasound image provided by the present invention;
FIG. 3 is a flowchart of the process of pre-establishing the correspondence between slice types and shape segmentation models in one embodiment of the present invention;
FIG. 4 is a schematic diagram of the first shape segmentation model construction process used in FIG. 3;
FIG. 5 is a schematic diagram of the second shape segmentation model construction process used in FIG. 3;
FIG. 6 is a more detailed flowchart of step S10 in FIG. 2;
FIG. 7 is a more detailed flowchart of step S100 in FIG. 6;
FIG. 8 is a detailed flowchart of one embodiment of step S14 in FIG. 2;
FIG. 9 is a schematic diagram of the process of obtaining the endocardial contour in the embodiment of FIG. 8;
FIG. 10 is a detailed flowchart of another embodiment of step S14 in FIG. 2;
FIG. 11 is a schematic diagram of the process of obtaining the endocardial contour in the embodiment of FIG. 10;
FIG. 12 is a schematic flowchart of one embodiment of the present invention;
FIG. 13 is a schematic structural diagram of one embodiment of a system for identifying the contour of a target of interest in an ultrasound image provided by the present invention;
FIG. 14 is a schematic structural diagram of one embodiment of the correspondence processing module in FIG. 13;
FIG. 15 is a schematic structural diagram of the shape segmentation model building module in FIG. 13;
FIG. 16 is a schematic structural diagram of the slice type identification module in FIG. 13;
FIG. 17 is a schematic structural diagram of the normalization processing module in FIG. 16;
FIG. 18 is a schematic structural diagram of one embodiment of the contour obtaining module in FIG. 13;
FIG. 19 is a schematic structural diagram of another embodiment of the contour obtaining module in FIG. 13.
Detailed Description
Because of the noise and artifacts present in ultrasound images and the structural complexity of certain anatomical tissues, the corresponding slice images come in different slice types. For cardiac ultrasound images, for example, there are usually various slice types such as the apical two-chamber and apical four-chamber views, and the position and shape of the left ventricular endocardium differ considerably between cardiac slice types. Therefore, if a fixed shape model is used in the image segmentation operation when automatically identifying the endocardial region in cardiac ultrasound data containing multiple slice types, image extraction errors are inevitable, and it is difficult for automatic identification to accommodate the differences in endocardial position and shape across cardiac slices; the segmentation and identification of the endocardium then become inaccurate, causing errors in the measurement results.
Fully automatic measurement of the ventricular ejection fraction has already been achieved. Related techniques locate the cardiac cycle with the ECG signal, identify the different cardiac phases from the ECG signal, and then use image segmentation to measure the end-systolic and end-diastolic ventricular volumes and calculate the ejection fraction. The end-systolic and end-diastolic volumes can be measured by segmenting and identifying the endocardium on the end-systolic and end-diastolic ultrasound images and then computing the ventricular volume with a volume formula such as the Simpson method. For B-Mode, therefore, the accuracy of endocardial segmentation and identification directly determines the accuracy of the ventricular volume measurement, so accurate segmentation and identification of the endocardium are essential for improving the measurement accuracy of the system.
FIG. 1 provides a schematic diagram of the system structure of an ultrasound image acquisition device, described in detail here using cardiac ultrasound as an example. As shown in FIG. 1, the apparatus for ultrasound imaging of a target region in this embodiment includes: a probe 1, a transmitting circuit 2, a transmit/receive selection switch 3, a receiving circuit 4, a beamforming module 5, a signal processing module 6, an image processing module 7, and a display 8. The transmitting circuit 2 sends delay-focused ultrasonic pulses of a certain amplitude and polarity to the probe 1 through the transmit/receive selection switch 3. Excited by the ultrasonic pulses, the probe 1 transmits ultrasound waves to the target region of the examined tissue (not shown, e.g., cardiac tissue), receives, after a certain delay, the ultrasonic echoes carrying tissue information reflected from the target region, and converts the echoes back into electrical signals. The receiving circuit receives these electrical signals to obtain ultrasonic echo signals and feeds them into the beamforming module 5, which performs focusing delay, weighting, channel summation, and similar processing before sending the signals to the signal processing module 6 for related signal processing. The processed echo signals are sent to the image processing module 7, which processes the signals differently according to the imaging mode required by the user to obtain image data of different modes, and then, through logarithmic compression, dynamic range adjustment, digital scan conversion, and similar processing, forms ultrasound images of different modes, such as B images, C images, and D images. The ultrasound images generated by the image processing module 7 are sent to the display 8; for example, end-systolic and end-diastolic ultrasound images can be displayed synchronously on the display interface, with the endocardial contour delineated on them. The synchronously displayed end-systolic and end-diastolic images may be standard ultrasound slice images (e.g., apical two-chamber or apical four-chamber views) or the slice images corresponding to any view selected by the user.
In addition, the system shown in FIG. 1 also includes an operation control module 9, through which the user can input control commands on the display interface, for example instructions to enter contour correction marks or annotation text on the ultrasound image, or to switch modes.
Based on the above system structure, embodiments of the present invention provide a method and system for identifying the contour of a region of interest in an ultrasound image. The slice type of the target object in the ultrasound image to be segmented (an anatomical structure with at least one chamber, such as the heart or liver) is first identified and classified, and a shape segmentation model corresponding to each slice type is then used to segment and identify the contour of the region of interest (e.g., the endocardium). The method and system can improve the accuracy of segmenting the contour of the region of interest (e.g., the endocardium).
Referring to FIG. 2, a schematic flowchart of a method for identifying the contour of a region of interest in an ultrasound image provided by the present invention is shown. The method includes:
Step S10: identify the slice type of the target object in the ultrasound image. In one embodiment of the present invention, the target object may be cardiac tissue in the ultrasound image, or an anatomical structure such as liver tissue, gallbladder tissue, or stomach tissue. In one embodiment, the slice type includes standard slices of the target object in medical anatomy or ultrasound imaging; for cardiac tissue, for example, slice types include but are not limited to the four-chamber and two-chamber views. The slice type is not limited to standard slices and may also include custom slice types; for example, a custom slice type may be the ultrasound slice image obtained after the user sections the target object in an arbitrary direction. The ultrasound image here may be, but is not limited to being, obtained with the system shown in FIG. 1.
Step S12: select a corresponding shape segmentation model according to the slice type, i.e., retrieve the pre-stored shape segmentation model corresponding to the slice type. For the different slice types of the same target object, shape segmentation models corresponding one-to-one to the slice types for the same region of interest can be established in advance and stored. For a cardiac target, for example, the two shape segmentation models corresponding to the standard four-chamber and two-chamber ventricular views are stored in the system; for a stomach target, the elliptical and circular segmentation models corresponding, respectively, to the slice along the length of the gastric cavity and the radial slice of the stomach are stored.
Step S14: segment the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest in the ultrasound image. In one embodiment, for a cardiac ultrasound image the region of interest may be the endocardium.
Step S16: display the contour in the ultrasound image and mark its position and shape. Furthermore, the current slice type may also be displayed on the display interface or on the ultrasound image.
It will be appreciated that the following step may follow step S14: performing a validity check on the obtained contour and prompting the user as to whether the contour is incorrect, so as to let the user decide whether to intervene in the image segmentation process, thereby improving the precision of the segmentation and the user experience.
Specifically, at least one of the following methods may be used to check the validity of the obtained contour:
First, checking the validity of the contour based on the positional relationship between the region of interest and other tissues in the anatomical structure; for example, in a cardiac ultrasound image, whether the positional relationship between the segmented endocardial contour and the ventricle, or between the segmented endocardial contour and the mitral or tricuspid valve, conforms to the correct anatomical relationships.
Second, checking the validity of the contour based on parameter indices of the region of interest in the anatomical structure; for example, in a cardiac ultrasound image, whether the chamber volume enclosed by the endocardial contour lies within the normal physiological range; if it exceeds that range, the segmentation is invalid.
Third, checking the validity of the contour based on contour variability indices of the region of interest during the segmentation of the region of interest in the ultrasound image. The contour variability indices mentioned here may be, for example, the translation coefficient, rotation coefficient, scaling coefficient, and weighting coefficients of the feature components of the segmentation result relative to the shape segmentation model used; if the variability indices fall within a certain threshold interval, the current segmentation result is judged valid, and if they exceed it, the result is judged invalid.
Fourth, segmenting the region of interest in multiple frames of ultrasound images of the same slice type with the same shape segmentation model and checking the validity of the contour based on a consistency judgment of the segmentation results across the frames. The judgment is based on the consistency of the multi-frame segmentation results: if an abrupt change of shape appears in a certain frame, the segmentation result of that frame is judged invalid. Abrupt shape changes can be detected from changes in the positions of the contour points and changes in parameters such as the area and perimeter of the contour region. When the current segmentation result is judged invalid, the user can be prompted to check the result.
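The fourth check, flagging a frame whose segmented contour changes shape abruptly, can be sketched with per-frame area and perimeter deltas. The 30% threshold below is an assumed value for illustration, not a figure from the patent.

```python
# Hedged sketch of the multi-frame consistency test: flag a frame whose contour
# area or perimeter jumps relative to the previous frame. Threshold is assumed.

def polygon_area(pts):
    """Shoelace formula for a closed polygon given as (x, y) tuples."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                   for i in range(n))) / 2.0

def perimeter(pts):
    n = len(pts)
    return sum(((pts[i][0] - pts[(i + 1) % n][0]) ** 2 +
                (pts[i][1] - pts[(i + 1) % n][1]) ** 2) ** 0.5 for i in range(n))

def inconsistent_frames(contours, max_rel_change=0.3):
    """Indices of frames whose area or perimeter changes too much vs. the previous frame."""
    bad = []
    for k in range(1, len(contours)):
        a0, a1 = polygon_area(contours[k - 1]), polygon_area(contours[k])
        p0, p1 = perimeter(contours[k - 1]), perimeter(contours[k])
        if abs(a1 - a0) / a0 > max_rel_change or abs(p1 - p0) / p0 > max_rel_change:
            bad.append(k)
    return bad
```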
When the validity check indicates that the obtained contour is incorrect, the system can switch to a manual segmentation mode; for example, the endocardial contour, including its position and shape, can be marked manually in the ultrasound image.
It will be appreciated that the method further includes the following step for obtaining the correspondence between slice types and shape segmentation models:
acquiring in advance, and storing, the correspondence between the different slice types of the same target object (e.g., cardiac tissue) and the shape segmentation models corresponding to the same region of interest (e.g., the endocardium); in the step of selecting a corresponding shape segmentation model according to the slice type, the corresponding shape segmentation model is obtained from the stored correspondence according to the slice type.
As shown in FIG. 3, a flowchart of the process of pre-establishing the correspondence between slice types and shape segmentation models in one embodiment of the present invention is shown. In this embodiment, the process includes the following steps:
Step S20: mark the contour curve of the region of interest on each training image in a training image set of the same target object;
Step S22: discretize the contour curve of the region of interest into landmark points describing the shape of the region of interest;
Step S24: obtain the landmark points describing the shape of the region of interest on each training image;
Step S26: build different shape segmentation models for different slice types according to the landmark points on each training image and the corresponding slice type.
Specifically, step S26 can be implemented with the following two image segmentation approaches:
First, arrange all the landmark points in order in the same coordinate system to obtain a point distribution model of the contour curve of the region of interest for the current slice type; perform feature analysis and feature extraction on the point distribution model to obtain an average shape and feature components describing the contour of the region of interest; and obtain a shape segmentation model for image segmentation from the average shape and feature components, thereby acquiring the correspondence between slice types and shape segmentation models of the region of interest.
Second, based on a deep learning method, input the landmark points on each training image of the training image sets corresponding to different slice types into multiple machine learning models for sample training, obtain the network parameters of the machine learning models, and acquire the correspondence between different slice types and the machine learning models from the known network parameters, thereby obtaining the correspondence between slice types and shape segmentation models of the region of interest.
It can be seen that the shape segmentation model may be: an average shape and feature components describing the contour of the region of interest, obtained by feature analysis and feature extraction of the contour curves of the region of interest in slice images of known slice types; or at least one machine learning model, such as a convolutional neural network (CNN), obtained by a deep learning method from a training image set with known slice types and region-of-interest contour curves. One slice type may correspond to one machine learning model (e.g., one CNN).
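A minimal sketch of the first route: stack the landmark vectors, take their mean as the average shape, and pull out a leading feature component with power iteration. A real point distribution model would first align the shapes (e.g., Procrustes alignment) and retain several components; this simplified example keeps only one.

```python
# Hedged sketch of building a point-distribution model from landmark vectors:
# mean shape + leading eigenvector of the sample covariance via power iteration.
# Shape alignment and multi-component extraction are omitted for brevity.

def mean_vector(samples):
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def leading_component(samples, iters=200):
    """Leading eigenvector of the sample covariance, via power iteration."""
    mu = mean_vector(samples)
    centered = [[x - m for x, m in zip(s, mu)] for s in samples]
    v = [1.0] * len(mu)
    for _ in range(iters):
        # apply C = sum_k x_k x_k^T implicitly: first dot each sample with v...
        proj = [sum(c * u for c, u in zip(row, v)) for row in centered]
        # ...then accumulate the weighted samples and renormalize
        w = [sum(p * row[i] for p, row in zip(proj, centered)) for i in range(len(mu))]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0:
            break
        v = [x / norm for x in w]
    return mu, v
```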
To describe the two approaches to constructing the shape segmentation model more clearly, examples are given below.
FIG. 4 is a schematic diagram of the first shape segmentation model construction process used in FIG. 3.
To build a shape segmentation model of the endocardium for a given slice type, a physician or specialist first manually marks the shape and position of the endocardium (402) on a training image (401); the manually calibrated endocardial curve is then discretized into a number of landmark points (403) that can describe the endocardial shape. This is done for every image in the training set, yielding the landmark points describing the position and shape of the endocardium on each image. Different endocardial models are then built for the different slice types from these landmark points. Specifically, the shape segmentation model can be constructed with a method such as the Active Shape Model (ASM). By arranging all the landmark points in order in the same coordinate system, the point distribution model (404) of the endocardial curve for the current cardiac slice type is obtained. Feature analysis and feature extraction on the point distribution model yield an average shape and a number of feature components describing the endocardial shape; different shapes can be obtained by a weighted sum of the average shape and the feature components. By weighting the average shape and feature components and applying scaling, translation, and rotation operations, the position and shape of the endocardium can be generated at any position in the image, which constitutes the shape segmentation model. The model can be described by the average shape, a number of feature components, and the weighting, translation, rotation, and scaling coefficients of these components.
FIG. 5 is a schematic diagram of the second shape segmentation model construction process used in FIG. 3.
After the landmark points have been obtained as in FIG. 3, an emerging deep learning method such as a convolutional neural network (CNN) can be used. With deep learning, a multi-layer neural network can be built to establish the correspondence between slice types and the shape segmentation model of the region of interest (the endocardial contour), thereby establishing the mapping between the input image and the endocardial contour. As shown in FIG. 5, the multi-layer neural network includes an input layer (501), an output layer (503), and n (n >= 0) intermediate layers (502). The input layer is a single two-dimensional frame of the cardiac image; the output layer is the two-dimensional coordinates of the endocardial contour points corresponding to that image. There can be multiple intermediate layers, each taking the outputs of the previous layer's nodes as the inputs to the next layer's nodes. Intermediate layers can be of different types: the most common is the fully connected layer (full connections, in which every node of one layer is connected to all nodes of the next); in addition, convolutional neural networks use convolutional layers (Convolutions), mainly applied in image recognition, whose characteristic is extracting features by convolving the image layer by layer with convolution kernels. Once the network structure is determined, the network can be trained with a set of training samples (multiple two-dimensional cardiac images and their corresponding endocardial contours) to obtain the network parameters (i.e., the correspondence). After training, a new two-dimensional cardiac frame fed into the neural network model yields as output the coordinates of the endocardial contour points corresponding to that frame.
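The training idea, fitting network parameters to (image, contour) pairs, can be illustrated on the output layer alone with plain gradient descent on squared error. A real CNN trains all layers jointly; the feature and target vectors below are toy values, not data from the patent.

```python
# Toy sketch of training: learn output-layer weights W so that y = W x maps
# feature vectors to landmark coordinates, via gradient descent on squared error.

def train_output_layer(feats, targets, lr=0.1, epochs=500):
    """Fit y = W x; feats and targets are lists of equal-length vectors."""
    n_out, n_in = len(targets[0]), len(feats[0])
    W = [[0.0] * n_in for _ in range(n_out)]
    for _ in range(epochs):
        for x, y in zip(feats, targets):
            pred = [sum(w * xi for w, xi in zip(row, x)) for row in W]
            for o in range(n_out):
                err = pred[o] - y[o]           # gradient of 0.5 * err^2 w.r.t. pred
                for i in range(n_in):
                    W[o][i] -= lr * err * x[i]
    return W

# two toy feature vectors and their target landmark coordinates
W = train_output_layer([[1.0, 0.0], [0.0, 1.0]], [[2.0, 0.0], [0.0, 3.0]])
```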
Those skilled in the art will understand that only two methods of building shape segmentation models for a single slice type are shown above, intended to help skilled readers better understand the present invention; the invention does not exclude other image segmentation approaches for building shape segmentation models.
On this basis, in some embodiments of the present invention, the steps of identifying the slice type of the target object in the ultrasound image and selecting the corresponding shape segmentation model according to the slice type (steps S10 and S12 above) further include:
A. displaying at least one image segmentation approach for the user to choose;
B. receiving the user's selection instruction choosing one image segmentation approach, and determining the image segmentation approach;
C. identifying the slice type of the target object in the ultrasound image;
D. selecting the corresponding shape segmentation model according to the slice type and the image segmentation approach.
Before the shape segmentation model is selected in step D according to the slice type and the image segmentation approach, the method further includes:
acquiring, and storing, the mapping relationship among the different slice types of the same target object, the image segmentation approach adopted by the shape segmentation model, and the shape segmentation models corresponding to the same region of interest;
in step D, the corresponding shape segmentation model is obtained from the stored mapping relationship according to the slice type and the image segmentation approach.
The order in which the above steps are described is not unique, and the present invention does not limit their sequence. The image segmentation approaches mentioned in this embodiment may be the two approaches mentioned above:
performing feature analysis and feature extraction on the contour curves of the region of interest based on slice images of known slice types, obtaining the average shape and feature components describing the contour of the region of interest, and forming the corresponding shape segmentation model; or
taking as the shape segmentation model at least one machine learning model obtained by a deep learning method from a training image set with known slice types and region-of-interest contour curves; in one example, the machine learning model may be a convolutional neural network (CNN), and one slice type may correspond to one machine learning model (e.g., one CNN).
Of course, this document is not limited to the above two approaches; other image segmentation approaches may also be used. For example, when the device provides multiple image segmentation approaches, different approaches can be obtained by adjusting basic image parameters, thereby obtaining shape segmentation models corresponding to different precision or accuracy levels. This working mode can be offered to users for image-processing research, to compare the effects and image quality of different back-end image processing modes.
As shown in FIG. 6, a more detailed flowchart of step S10 in FIG. 2 is shown. The step of identifying the slice type of the target object in the ultrasound image (step S10) further includes:
Step S100: acquire a single frame of the ultrasound image to be segmented, or several frames of ultrasound images to be segmented from a cardiac cine file, and then normalize the ultrasound images to be segmented;
Step S101: map the normalized ultrasound image to be segmented into a feature space, the feature space being constructed by extracting features from the training set images;
Step S102: compare the projection of the ultrasound image to be segmented in the feature space with the projections of the training images in the feature space, and determine the slice type of the ultrasound image to be segmented.
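Steps S101 and S102 amount to projecting the image into a learned feature space and taking the label of the nearest training projection. The two-vector basis and training samples below are hand-made stand-ins; a real feature space would be learned from the training set.

```python
# Hedged sketch of the slice-type classifier: project the normalized image
# vector onto a feature space (rows of `basis`) and pick the class of the
# nearest training projection. Basis, samples, and labels are toy values.

def project(vec, basis):
    return [sum(b * v for b, v in zip(row, vec)) for row in basis]

def classify_slice(image_vec, basis, train_vecs, train_labels):
    q = project(image_vec, basis)
    best, best_d = None, float("inf")
    for vec, label in zip(train_vecs, train_labels):
        p = project(vec, basis)
        d = sum((a - b) ** 2 for a, b in zip(q, p))   # squared distance in feature space
        if d < best_d:
            best, best_d = label, d
    return best

basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]            # toy 2-D feature space
train = [[1.0, 0.0, 0.3], [0.0, 1.0, 0.7]]
labels = ["apical four-chamber", "apical two-chamber"]
```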
Referring also to FIG. 7, a more detailed flowchart of step S100 in FIG. 6 is shown. Step S100 further includes:
Step S103: identify the position of a specific target in the ultrasound image to be segmented; for a cardiac ultrasound image, for example, the interventricular septum can be taken as the specific target.
Step S104: rotate the ultrasound image to be segmented according to the position of the specific target, so that the long-axis direction of the main chamber of the region of interest in the image is vertical; for example, the left ventricle is taken as the main chamber of the region of interest.
Step S105: translate the ultrasound image to be segmented, adjusting the position of the main chamber to the center of the image.
Step S106: unify the gray-level mean and variance of the ultrasound image to be segmented.
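Of these steps, the gray-level unification (step S106) is the easiest to show in isolation: shift and rescale the pixel values so that every image shares a common mean and standard deviation. The targets of 0 and 1 are arbitrary choices for this sketch; the landmark-driven rotation and translation steps are omitted.

```python
# Hedged sketch of step S106: normalize pixel intensities to a common
# mean/std. Target mean 0 and std 1 are illustrative defaults.

def normalize_gray(pixels, target_mean=0.0, target_std=1.0):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5 or 1.0          # guard against a constant image
    return [(p - mean) / std * target_std + target_mean for p in pixels]
```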
It will be appreciated that, after the slice type of the target object in the ultrasound image is identified in step S10, the following steps may also be included:
displaying the recognition result of the slice type for confirmation by the user;
receiving an instruction indicating that the user considers the slice-type determination of the current ultrasound image to be wrong, and switching to a mode in which the user can modify the slice type.
As shown in FIG. 8, a detailed flowchart of one embodiment of step S14 in FIG. 2 is shown. In this embodiment, the step of segmenting the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest (step S14) further includes:
Step S140: according to the selected shape segmentation model, generate in the ultrasound image to be segmented an initial curve describing the contour position and shape of the region of interest;
Step S141: search near the initial curve for at least two points where the image gray-level value changes the most, and calculate the translation coefficient, rotation coefficient, scaling coefficient, and weighting coefficients of the feature components in the shape segmentation model according to the positional relationship between those points and the initial curve;
Step S142: update the shape and position of the initial curve according to the calculated coefficients to obtain a new curve;
Step S143: determine whether the degree of matching between the new curve and the points of maximum gray-level change near the curve satisfies a preset condition. If it does, determine in step S144 that the new curve is the contour of the region of interest in the ultrasound image; if it does not, take the new curve as the initial curve in step S155, return to step S141, and repeat the subsequent steps. The preset condition may be, for example, a number of iterations or an error threshold.
Specifically, as shown in FIG. 9, a schematic diagram of the process of obtaining the endocardial contour in one embodiment is shown. First, an initial curve (603) describing the position and shape of the endocardium is generated in the ultrasound image (601); then several points (602) of maximum image gray-level change are found near the initial curve (603), and the translation coefficient, rotation coefficient, scaling coefficient, and weighting coefficients of the feature components in the shape segmentation model are calculated from the positional relationship between those points and the initial curve. The shape and position of the initial curve are then updated according to the calculated coefficients, yielding a new curve (605) in the new ultrasound image (604). Points (606) of maximum gray-level change are then found near the new curve, the coefficients are recalculated from the positional relationship between those points and the new curve, and the shape and position of the curve (605) are updated accordingly. These steps are repeated until the degree of matching between the curve (608) generated in the final ultrasound image (607) and the nearby points of maximum gray-level change (609) satisfies the preset condition; the resulting final curve is the segmented shape and position of the endocardium.
It will be appreciated that, in some examples, finding the points of maximum image gray-level change near the curve may mean searching along several normal directions of the curve for the points of greatest gray-level change. The translation, rotation, and scaling coefficients and the weighting coefficients of the feature components may be calculated as follows: based on the positions of the points of maximum gray-level change near the curve, the average shape of the shape segmentation model is placed in the image so that the sum of distances between those points and the average shape is minimized; the translation, rotation, and scaling coefficients are calculated from the positional and size relationships between the average shape and the curve, and the weighting coefficients of the feature components are calculated from the shape difference between the average shape and the shape formed by the points of maximum gray-level change near the curve. Substituting these coefficients into the shape segmentation model yields a new curve, which may differ from the previous curve in position, size, and shape.
As shown in FIG. 10, a detailed flowchart of another embodiment of step S14 in FIG. 2 is shown, to be read together with FIG. 11. In this embodiment, the step of segmenting the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest (step S14) further includes:
Step S146: input the ultrasound image to a convolutional neural network (CNN), convolve the ultrasound image to be segmented layer by layer with convolution kernels, and extract features;
Step S147: estimate the extracted features according to the selected shape segmentation model to obtain the shape and position of the contour of the region of interest in the ultrasound image; see FIG. 11, which finally outputs an ultrasound image marked with the endocardial contour.
FIG. 12 shows one embodiment of the present invention. In this embodiment, steps S111 and S112 and steps S151 and S152 are inserted into the process of steps S10 to S16 of FIG. 2.
After the slice type of the target object in the ultrasound image is identified (step S10), step S111 is performed to display the recognition result of the slice type for confirmation by the user. If the user finds it wrong, step S112 is performed: the system switches to manual input mode, the user corrects the current slice type, and the process proceeds to selecting the shape segmentation model corresponding to the slice type (step S12); if the result is confirmed as correct, the process proceeds directly to step S12.
After step S14 is performed (i.e., the region of interest in the ultrasound image is segmented according to the selected shape segmentation model to obtain the contour of the region of interest), step S151 is performed: a validity check is carried out on the obtained contour, and the user is prompted as to whether the contour is incorrect. If the contour segmentation is invalid, step S152 is performed: the system switches to manual segmentation mode, and the user manually inputs the contour of the region of interest on the ultrasound image; the contour is then displayed in the ultrasound image and its position and shape are marked (step S16). Otherwise, the process proceeds directly to step S16.
Correspondingly, the present invention also provides a system for identifying the contour of a target of interest in an ultrasound image. FIG. 13 shows a schematic structural diagram of one embodiment of such a system. In this embodiment, the system includes:
a slice type identification module 11, configured to identify the slice type of the target object in the ultrasound image, the slice type including standard slices of the target object in medical anatomy or ultrasound imaging;
a shape segmentation model selection module 12, configured to select a corresponding shape segmentation model according to the slice type;
a contour obtaining module 13, configured to segment the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest in the ultrasound image; and
a display marking module 14, configured to display the contour in the ultrasound image, mark the position and shape of the contour, and display the slice type.
The system further includes:
a contour checking module 16, configured to perform a validity check on the obtained contour and prompt the user as to whether the contour is incorrect; and
a manual mode switching module 17, configured to switch to the manual segmentation mode when the contour checking module indicates that the contour is incorrect.
Specifically, the contour checking module performs the validity check on the obtained contour based on at least one of the following factors:
checking the validity of the contour based on the positional relationship between the region of interest and other tissues in the anatomical structure;
checking the validity of the contour based on parameter indices of the region of interest in the anatomical structure;
checking the validity of the contour based on contour variability indices of the region of interest during the segmentation of the region of interest in the ultrasound image; and
segmenting the region of interest in multiple frames of ultrasound images of the same slice type with the same shape segmentation model and checking the validity of the contour based on a consistency judgment of the segmentation results across the frames.
The system further includes:
a correspondence processing module 15, configured to acquire, and store, the correspondence between the different slice types of the same target object and the shape segmentation models corresponding to the same region of interest;
the shape segmentation model selection module 12 obtains the corresponding shape segmentation model from the stored correspondence according to the slice type.
As shown in FIG. 14, a schematic structural diagram of one embodiment of the correspondence processing module 15 in FIG. 13 is shown. The correspondence processing module 15 includes:
a training marking module, configured to mark the contour curve of the region of interest on each training image in a training image set of the same target object;
a discretization module 150, configured to discretize the contour curve of the region of interest into landmark points describing the shape of the region of interest;
a landmark point obtaining module 152, configured to obtain the landmark points describing the shape of the region of interest on each training image; and
a shape segmentation model building module 153, configured to build different shape segmentation models for different slice types according to the landmark points on each training image and the corresponding slice type.
Referring also to FIG. 15, a schematic structural diagram of the shape segmentation model building module 153 in FIG. 13 is shown. The shape segmentation model building module 153 includes:
a first building module 154, configured to arrange all the landmark points in order in the same coordinate system to obtain a point distribution model of the contour curve of the region of interest for the current slice type; perform feature analysis and feature extraction on the point distribution model to obtain an average shape and feature components describing the contour of the region of interest; and obtain a shape segmentation model for image segmentation from the average shape and feature components, thereby acquiring the correspondence between slice types and shape segmentation models of the region of interest; or
a second building module 152, configured to input, based on a deep learning method, the landmark points on each training image of the training image sets corresponding to different slice types into multiple machine learning models for sample training, obtain the network parameters of the machine learning models, and acquire the correspondence between different slice types and the machine learning models from the known network parameters, thereby obtaining the correspondence between slice types and shape segmentation models of the region of interest.
As shown in FIG. 16, a schematic structural diagram of the slice type identification module 11 in FIG. 13 is shown. The slice type identification module 11 includes:
a normalization processing module 110, configured to acquire a single frame of the ultrasound image to be segmented, or several frames of ultrasound images to be segmented from a cardiac cine file, and then normalize the ultrasound images to be segmented;
a feature space mapping module 111, configured to map the normalized ultrasound image to be segmented into a feature space, the feature space being constructed by extracting features from the training set images;
a comparison determining module 112, configured to compare the projection of the ultrasound image to be segmented in the feature space with the projections of the training images in the feature space and determine the slice type of the ultrasound image to be segmented;
a recognition result confirmation module 113, configured to display the recognition result of the slice type for confirmation by the user; and
a switching module 114, configured to switch to a mode in which the user can modify the slice type upon receiving an instruction indicating that the user considers the identified slice type of the ultrasound image to be segmented to be wrong.
As shown in FIG. 17, a schematic structural diagram of the normalization processing module 110 in FIG. 16 is shown. The normalization processing module 110 includes:
a position identification module 115, configured to identify the position of a specific target in the ultrasound image to be segmented;
a rotation processing module 116, configured to rotate the ultrasound image to be segmented according to the position of the specific target, so that the long-axis direction of the main chamber of the region of interest in the image is vertical;
a translation processing module 117, configured to translate the ultrasound image to be segmented, adjusting the position of the main chamber to the center of the image; and
a unified processing module 118, configured to unify the gray-level mean and variance of the ultrasound image to be segmented.
如图18所示,示出了本发明图13中轮廓获得模块13的一个实施例的结构示意图;在该实施例中,上述轮廓获得模块13包括:
初始曲线生成模块130,用于根据选定的形状分割模型,在待分割的超声图像中生成一个描述上述感兴趣区域的轮廓位置和形状的初始曲线;
特征点寻找模块131,用于在上述初始曲线附近寻找至少两个图像灰度值变化最大的点;
加权系数计算单元132,用于根据上述特征点寻找模块131获得的图像灰度值变化最大的点与初始曲线的位置关系,计算心内膜模型中的平移系数、旋转系数、缩放系数和特征分量的加权系数;
曲线调整单元134,用于根据上述加权系数计算单元所计算得到的加权系数更新上述初始曲线的形状和位置,获得一个新的曲线;
曲线判断处理单元135,用于判断上述新的曲线在曲线附近图像灰度值变化最大的点之间的匹配程度是否满足预设条件,若满足预设条件,则确定上述新的曲线为上述感兴趣区域在上述超声图像中的轮廓;若不满足预设条件,则将上述新的曲线作为初始曲线,并通知上述特征点寻找模块131。
FIG. 19 is a schematic structural diagram of another embodiment of the contour obtaining module 13 in FIG. 13. In this embodiment, the contour obtaining module 13 includes:
a convolution processing unit 136, configured to input the ultrasound image into a convolutional neural network (CNN) and convolve the ultrasound image to be segmented layer by layer with convolution kernels to extract features; and
an estimation processing unit 137, configured to estimate, according to the selected shape segmentation model, the shape and position of the contour of the region of interest in the ultrasound image from the features extracted by the convolution processing unit.
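The data flow of units 136 and 137 (convolutional feature extraction followed by estimation of the contour's shape and position) can be sketched with a minimal forward pass. The parameters below are random and untrained; the sketch shows only the structure, under the assumption that the estimation head regresses landmark coordinates. Function names are hypothetical.

```python
import numpy as np

def conv2d(image, kernels):
    """Valid-mode convolution of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    h, w = image.shape
    out = np.empty((len(kernels), h - kh + 1, w - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * ker)
    return out

def predict_contour(image, kernels, weights, bias):
    """Convolution extracts features; a regression head maps the pooled
    features to landmark coordinates describing the contour."""
    feat = np.maximum(conv2d(image, kernels), 0.0)        # conv + ReLU
    pooled = feat.reshape(len(kernels), -1).mean(axis=1)  # global average pool
    return (weights @ pooled + bias).reshape(-1, 2)       # (n_landmarks, 2)
```

A deployed network would stack several such layers and learn its parameters from the landmark-annotated training set described earlier.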
For more details, refer to the foregoing description of FIG. 1 to FIG. 12, which is not repeated here.
Based on the system structure shown in FIG. 1, an embodiment of the present invention further provides a system for identifying the contour of a region of interest in an ultrasound image, including:
a probe;
a transmitting circuit, configured to transmit ultrasound beams to a target object;
a receiving circuit and a beamforming module, configured to obtain ultrasound echo signals;
an image processing module, configured to obtain an ultrasound image from the ultrasound echo signals, identify the view type of the target object in the ultrasound image, select a corresponding shape segmentation model according to the view type, and segment the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest in the ultrasound image; and
a display, configured to display the ultrasound image and the contour, mark the position and shape of the contour, and display the view type.
For the correspondence between the above components or modules, refer to the description of FIG. 1. The image processing module performs the steps of FIG. 2, which are not repeated here; for detailed explanations of each step, refer to the foregoing description. The image processing module mentioned herein may consist of one processor or multiple processors.
In one embodiment of the present invention, the target object is the heart, and the region of interest is the cardiac endocardium.
In one embodiment of the present invention, the system further includes a storage module configured to store the correspondences between different view types of the same target object and the shape segmentation models corresponding to the same region of interest. The storage module here may be one memory chip or a collection of multiple memory chips.
In one embodiment of the present invention, the display prompts the user as to whether the contour is erroneous, and the system further includes an operation control module configured to receive control commands input by the user;
when the display prompts the user that the contour is erroneous, the image processing module switches to a manual image segmentation mode in which the user can trace the contour of the region of interest on the ultrasound image through the operation control module.
In one embodiment of the present invention, the display shows the identification result of the view type for user confirmation, and the system further includes:
an operation control module configured to receive control commands input by the user;
when the control command input by the user indicates that the identification result is confirmed to be wrong, the image processing module switches to a manual input mode, and modifies and displays the current view type according to the control command input by the user.
In one embodiment of the present invention, the shape segmentation model is: a mean shape and feature components describing the contour of the region of interest, obtained by performing feature analysis and feature extraction on the contour curve of the region of interest based on view images of known view types; or at least one machine learning model obtained by a deep learning method based on a training image set with known view types and contour curves of the region of interest.
In one embodiment of the present invention, after the region of interest in the ultrasound image is segmented according to the selected shape segmentation model to obtain the contour of the region of interest in the ultrasound image, the image processing module further performs a validity check on the obtained contour and prompts the user on the display as to whether the contour is erroneous.
In one embodiment of the present invention, the process by which the image processing module checks the validity of the obtained contour includes at least one of the following:
checking the validity of the contour based on the positional relationship between the region of interest and other tissues in the anatomical structure;
checking the validity of the contour based on parameter indices of the region of interest in the anatomical structure;
checking the validity of the contour based on a contour variability index of the region of interest obtained while the region of interest in the ultrasound image is being segmented; and
segmenting the regions of interest in multiple frames of ultrasound images of the same view type with the same shape segmentation model, and checking the validity of the contour based on a consistency judgment of the segmentation results for the multiple frames of ultrasound images.
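The last two checks (a contour variability index, and consistency across frames of the same view type) lend themselves to simple numerical sketches. The metrics and thresholds below are illustrative assumptions; the patent does not fix how variability or consistency is quantified.

```python
import numpy as np

def contour_variability(candidate_contours):
    """Mean per-landmark spread across candidate contours of one frame;
    a large value flags an unstable (likely invalid) segmentation."""
    stack = np.asarray(candidate_contours, dtype=float)  # (n_candidates, n_points, 2)
    return float(np.linalg.norm(stack.std(axis=0), axis=-1).mean())

def frames_consistent(frame_contours, tol):
    """Consistency check across frames of one view type segmented with the
    same shape model: neighboring frames' contours should stay close."""
    stack = np.asarray(frame_contours, dtype=float)      # (n_frames, n_points, 2)
    step = np.linalg.norm(np.diff(stack, axis=0), axis=-1).mean(axis=1)
    return bool(np.all(step <= tol))
```

In practice `tol` would be tied to the expected frame-to-frame endocardial motion for the acquisition's frame rate.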
In one embodiment of the present invention, the image processing module identifies the view type of the target object in the ultrasound image as follows:
acquiring a single frame of ultrasound image to be segmented, or several frames of ultrasound images to be segmented from a cardiac cine file, and then normalizing the ultrasound images to be segmented;
mapping the normalized ultrasound image to be segmented into a feature space, the feature space being constructed by extracting features from the training set images; and
comparing the projection of the ultrasound image to be segmented in the feature space with the projections of the training images in the feature space to determine the view type of the ultrasound image to be segmented.
With the method and system for identifying the contour of a region of interest in an ultrasound image provided by the present invention, the view type of the target object (e.g., the heart) in the ultrasound image is first identified and classified, a corresponding shape segmentation model is then selected for that view type, and the region of interest (e.g., the endocardium) in the ultrasound image is automatically segmented to obtain its contour. In practical applications, the method and system can effectively segment and identify the endocardium despite the differences in its shape and position across different cardiac view images, improving segmentation accuracy.
In addition, both when the view type is determined and when the region of interest is segmented, the user may be asked to confirm the result and may switch to a manual modification mode, which can further improve segmentation accuracy.
A person of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the present invention shall not be considered limited to these descriptions. A person of ordinary skill in the art to which the present invention belongs may make several simple deductions or substitutions without departing from the concept of the present invention, and these shall all be regarded as falling within the protection scope of the present invention.

Claims (28)

  1. A method for identifying a contour of a region of interest in an ultrasound image, comprising:
    identifying a view type of a target object in an ultrasound image;
    selecting a corresponding shape segmentation model according to the view type; and
    segmenting a region of interest in the ultrasound image according to the selected shape segmentation model to obtain a contour of the region of interest.
  2. The method for identifying a contour of a region of interest in an ultrasound image according to claim 1, further comprising:
    displaying the contour in the ultrasound image, marking the position and shape of the contour, and displaying the view type.
  3. The method for identifying a contour of a region of interest in an ultrasound image according to claim 1, wherein before the step of selecting a corresponding shape segmentation model according to the view type, the method further comprises:
    acquiring and storing correspondences between different view types of a same target object and shape segmentation models corresponding to a same region of interest;
    wherein in the step of selecting a corresponding shape segmentation model according to the view type, the corresponding shape segmentation model is obtained from the correspondences according to the view type.
  4. The method for identifying a contour of a region of interest in an ultrasound image according to claim 3, wherein the step of acquiring correspondences between different view types of a same target object and shape segmentation models corresponding to a same region of interest comprises:
    marking a contour curve of the region of interest on each training image in a training image set of the same target object;
    discretizing the contour curve of the region of interest into landmark points describing the shape of the region of interest;
    obtaining the landmark points describing the shape of the region of interest on each training image; and
    constructing different shape segmentation models for different view types according to the landmark points on each training image and the corresponding view type.
  5. The method for identifying a contour of a region of interest in an ultrasound image according to claim 1, wherein the shape segmentation model is:
    a mean shape and feature components describing the contour of the region of interest, obtained by performing feature analysis and feature extraction on the contour curve of the region of interest based on view images of known view types; or
    at least one machine learning model obtained by a deep learning method based on a training image set with known view types and contour curves of the region of interest.
  6. The method for identifying a contour of a region of interest in an ultrasound image according to claim 1, wherein after segmenting the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest, the method further comprises:
    performing a validity check on the obtained contour and prompting a user as to whether the contour is erroneous.
  7. The method for identifying a contour of a region of interest in an ultrasound image according to claim 6, wherein the step of performing a validity check on the obtained contour comprises at least one of the following:
    checking the validity of the contour based on a positional relationship between the region of interest and other tissues in the anatomical structure;
    checking the validity of the contour based on parameter indices of the region of interest in the anatomical structure;
    checking the validity of the contour based on a contour variability index of the region of interest obtained while the region of interest in the ultrasound image is being segmented; and
    segmenting the regions of interest in multiple frames of ultrasound images of a same view type with a same shape segmentation model, and checking the validity of the contour based on a consistency judgment of the segmentation results for the multiple frames of ultrasound images.
  8. The method for identifying a contour of a region of interest in an ultrasound image according to claim 6, wherein when the user is prompted that the contour is erroneous, the method switches to a manual image segmentation mode.
  9. The method for identifying a contour of a region of interest in an ultrasound image according to claim 6, wherein the view type comprises a standard view of the target object in medical anatomy or ultrasound imaging.
  10. The method for identifying a contour of a region of interest in an ultrasound image according to claim 1, wherein the step of identifying the view type of the target object in the ultrasound image comprises:
    acquiring a single frame of ultrasound image to be segmented, or several frames of ultrasound images to be segmented from a cardiac cine file, and then normalizing the ultrasound images to be segmented;
    mapping the normalized ultrasound image to be segmented into a feature space, the feature space being constructed by extracting features from training set images; and
    comparing the projection of the ultrasound image to be segmented in the feature space with the projections of the training images in the feature space to determine the view type of the ultrasound image to be segmented.
  11. The method for identifying a contour of a region of interest in an ultrasound image according to claim 10, wherein normalizing the ultrasound image to be segmented comprises:
    identifying a position of a specific target in the ultrasound image to be segmented;
    rotating the ultrasound image to be segmented according to the position of the specific target so that a long-axis direction of a main chamber of the region of interest in the ultrasound image to be segmented is vertical;
    translating the ultrasound image to be segmented so that the position of the main chamber in the ultrasound image to be segmented is adjusted to the center of the image; and
    unifying a gray-level mean and variance of the ultrasound image to be segmented.
  12. The method for identifying a contour of a region of interest in an ultrasound image according to claim 1, wherein after identifying the view type of the target object in the ultrasound image, the method further comprises:
    displaying an identification result of the view type for user confirmation; and
    upon receiving an instruction indicating that the user considers the view type of the current ultrasound image to be segmented to have been determined incorrectly, switching to a mode in which the user can modify the view type.
  13. The method for identifying a contour of a region of interest in an ultrasound image according to claim 1, wherein the step of segmenting the region of interest in the ultrasound image according to the selected shape segmentation model to obtain the contour of the region of interest comprises:
    generating, according to the selected shape segmentation model, an initial curve describing a contour position and shape of the region of interest in the ultrasound image to be segmented;
    searching near the initial curve for at least two points where the image gray-level change is largest, calculating a translation coefficient, a rotation coefficient, a scaling coefficient, and weighting coefficients of the feature components in the shape segmentation model according to the positional relationship between those points and the initial curve, and updating the shape and position of the initial curve according to the calculated weighting coefficients to obtain a new curve; and
    judging whether a degree of matching between the new curve and the points of largest image gray-level change near the curve satisfies a preset condition; if the preset condition is satisfied, determining the new curve to be the contour of the region of interest in the ultrasound image; if not, taking the new curve as the initial curve and returning to the preceding step of obtaining a new curve from the initial curve.
  14. The method for identifying a contour of a region of interest in an ultrasound image according to claim 1, wherein the target object is the heart, and the region of interest is the cardiac endocardium.
  15. The method for identifying a contour of a region of interest in an ultrasound image according to claim 1, wherein the steps of identifying the view type of the target object in the ultrasound image and selecting the corresponding shape segmentation model according to the view type further comprise:
    displaying at least one image segmentation mode for user selection;
    receiving a selection instruction by which the user selects one of the image segmentation modes, and determining the image segmentation mode;
    identifying the view type of the target object in the ultrasound image; and
    selecting the corresponding shape segmentation model according to the view type and the image segmentation mode.
  16. The method for identifying a contour of a region of interest in an ultrasound image according to claim 1, wherein before the step of selecting the corresponding shape segmentation model according to the view type and the image segmentation mode, the method further comprises:
    acquiring and storing mapping relationships among different view types of a same target object, the image segmentation modes employed by the shape segmentation models, and the shape segmentation models corresponding to a same region of interest;
    wherein in the step of selecting the corresponding shape segmentation model according to the view type and the image segmentation mode, the corresponding shape segmentation model is obtained from the mapping relationships according to the view type and the image segmentation mode.
  17. A system for identifying a contour of a region of interest in an ultrasound image, comprising:
    a view type identification module, configured to identify a view type of a target object in an ultrasound image;
    a shape segmentation model selection module, configured to select a corresponding shape segmentation model according to the view type; and
    a contour obtaining module, configured to segment a region of interest in the ultrasound image according to the selected shape segmentation model to obtain a contour of the region of interest.
  18. The system for identifying a contour of a region of interest in an ultrasound image according to claim 17, further comprising:
    a display and marking module, configured to display the contour in the ultrasound image, mark the position and shape of the contour, and display the view type.
  19. The system for identifying a contour of a region of interest in an ultrasound image according to claim 17, further comprising:
    a correspondence processing module, configured to acquire and store correspondences between different view types of a same target object and shape segmentation models corresponding to a same region of interest;
    wherein the shape segmentation model selection module obtains the corresponding shape segmentation model from the correspondences according to the view type.
  20. A system for identifying a contour of a region of interest in an ultrasound image, comprising:
    a probe;
    a transmitting circuit, configured to transmit ultrasound beams to a target object;
    a receiving circuit and a beamforming module, configured to obtain ultrasound echo signals;
    an image processing module, configured to obtain an ultrasound image from the ultrasound echo signals, identify a view type of the target object in the ultrasound image, select a corresponding shape segmentation model according to the view type, and segment a region of interest in the ultrasound image according to the selected shape segmentation model to obtain a contour of the region of interest; and
    a display, configured to display the ultrasound image and the contour, mark the position and shape of the contour, and display the view type.
  21. The system for identifying a contour of a region of interest in an ultrasound image according to claim 20, wherein the target object is the heart, and the region of interest is the cardiac endocardium.
  22. The system for identifying a contour of a region of interest in an ultrasound image according to claim 20, further comprising:
    a storage module, configured to store correspondences between different view types of a same target object and shape segmentation models corresponding to a same region of interest.
  23. The system for identifying a contour of a region of interest in an ultrasound image according to claim 20, wherein the display prompts a user as to whether the contour is erroneous, and the system further comprises:
    an operation control module, configured to receive control commands input by the user;
    wherein when the display prompts the user that the contour is erroneous, the image processing module switches to a manual image segmentation mode in which the user can trace the contour of the region of interest on the ultrasound image through the operation control module.
  24. The system for identifying a contour of a region of interest in an ultrasound image according to claim 20, wherein the display shows an identification result of the view type for user confirmation, and the system further comprises:
    an operation control module, configured to receive control commands input by the user;
    wherein when the control command input by the user indicates that the identification result is confirmed to be wrong, the image processing module switches to a manual input mode, and modifies and displays the current view type according to the control command input by the user.
  25. The system for identifying a contour of a region of interest in an ultrasound image according to claim 20, wherein the shape segmentation model is:
    a mean shape and feature components describing the contour of the region of interest, obtained by performing feature analysis and feature extraction on the contour curve of the region of interest based on view images of known view types; or
    at least one machine learning model obtained by a deep learning method based on a training image set with known view types and contour curves of the region of interest.
  26. The system for identifying a contour of a region of interest in an ultrasound image according to claim 20, wherein after the region of interest in the ultrasound image is segmented according to the selected shape segmentation model to obtain the contour of the region of interest, the image processing module further performs a validity check on the obtained contour and prompts the user on the display as to whether the contour is erroneous.
  27. The system for identifying a contour of a region of interest in an ultrasound image according to claim 26, wherein the process by which the image processing module checks the validity of the obtained contour comprises at least one of the following:
    checking the validity of the contour based on a positional relationship between the region of interest and other tissues in the anatomical structure;
    checking the validity of the contour based on parameter indices of the region of interest in the anatomical structure;
    checking the validity of the contour based on a contour variability index of the region of interest obtained while the region of interest in the ultrasound image is being segmented; and
    segmenting the regions of interest in multiple frames of ultrasound images of a same view type with a same shape segmentation model, and checking the validity of the contour based on a consistency judgment of the segmentation results for the multiple frames of ultrasound images.
  28. The system for identifying a contour of a region of interest in an ultrasound image according to claim 20, wherein the image processing module identifies the view type of the target object in the ultrasound image by:
    acquiring a single frame of ultrasound image to be segmented, or several frames of ultrasound images to be segmented from a cardiac cine file, and then normalizing the ultrasound images to be segmented;
    mapping the normalized ultrasound image to be segmented into a feature space, the feature space being constructed by extracting features from training set images; and
    comparing the projection of the ultrasound image to be segmented in the feature space with the projections of the training images in the feature space to determine the view type of the ultrasound image to be segmented.
PCT/CN2016/081384 2016-05-09 2016-05-09 Method and system for identifying the contour of a region of interest in an ultrasound image WO2017193251A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/081384 WO2017193251A1 (zh) 2016-05-09 2016-05-09 Method and system for identifying the contour of a region of interest in an ultrasound image
CN201680082172.5A CN108701354B (zh) 2016-05-09 2016-05-09 Method and system for identifying the contour of a region of interest in an ultrasound image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/081384 WO2017193251A1 (zh) 2016-05-09 2016-05-09 Method and system for identifying the contour of a region of interest in an ultrasound image

Publications (1)

Publication Number Publication Date
WO2017193251A1 (zh) 2017-11-16

Family

ID=60266025

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/081384 WO2017193251A1 (zh) 2016-05-09 2016-05-09 识别超声图像中感兴趣区域轮廓的方法及系统

Country Status (2)

Country Link
CN (1) CN108701354B (zh)
WO (1) WO2017193251A1 (zh)






Also Published As

Publication number Publication date
CN108701354B (zh) 2022-05-06
CN108701354A (zh) 2018-10-23


Legal Events

Date Code Title Description
NENP: Non-entry into the national phase; ref country code: DE
121: EP: the EPO has been informed by WIPO that EP was designated in this application; ref document number: 16901207; country of ref document: EP; kind code of ref document: A1
122: EP: PCT application non-entry in European phase; ref document number: 16901207; country of ref document: EP; kind code of ref document: A1