CN116568223A - Ultrasonic imaging method and ultrasonic imaging system for fetal skull - Google Patents


Info

Publication number
CN116568223A
CN116568223A (Application No. CN202080107457.6A)
Authority
CN
China
Prior art keywords
dimensional
skull
region
ultrasonic
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080107457.6A
Other languages
Chinese (zh)
Inventor
张明
邹耀贤
林穆清
贺豪杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd filed Critical Shenzhen Mindray Bio Medical Electronics Co Ltd
Publication of CN116568223A publication Critical patent/CN116568223A/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08Detecting organic movements or changes, e.g. tumours, cysts, swellings

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

An ultrasound imaging method and an ultrasound imaging system for a fetal skull. The method comprises: the processor (116) controls the ultrasonic probe to emit ultrasonic waves to the cranium of the fetus to be tested and receives the echoes of the ultrasonic waves so as to obtain echo signals of the ultrasonic waves (S210); the processor (116) obtains three-dimensional ultrasound data of the cranium of the fetus to be tested based on the echo signals (S220); the processor (116) determines a target position based on the three-dimensional ultrasound data, the target position being a position that places a skull region in the three-dimensional ultrasound data at a rendering angle (S230); the processor (116) rotates the three-dimensional ultrasound data to the target position (S240); the processor (116) determines a skull region in the three-dimensional ultrasound data (S250); and the processor (116) renders the skull region in the rotated three-dimensional ultrasound data to obtain a rendered image and controls the display (118) to display the rendered image (S260). The method enables automatic imaging of the fetal skull and improves the efficiency and accuracy of fetal skull ultrasound examination.

Description

Ultrasonic imaging method and ultrasonic imaging system for fetal skull
Technical Field
The present application relates to the field of ultrasound imaging technology, and more particularly to an ultrasound imaging method and an ultrasound imaging system for fetal skull.
Background
The development of the fetal skull is commonly assessed clinically by ultrasound imaging. Accurately evaluating the bony structure of the fetal skull is of great significance for choosing the delivery mode of the pregnant woman and for the clinical management of abnormal conditions. However, the fetal skull has a complex shape, thin bones, and pronounced curvature changes, making overall imaging difficult; conventional two-dimensional ultrasound can rarely display the complete structure of the fetal fontanels and cranial sutures, and the boundaries between cranial sutures are often hard to distinguish, so the diagnosis based on two-dimensional ultrasound is relatively rough and its conclusions are not accurate enough.
Three-dimensional ultrasound can intuitively display the morphology of the fetal skull and vividly express the anatomical relationships between the cranial bones, so that the suture and fontanel structures at the top of the fetal head are clear and easy to distinguish, compensating for the limited spatial expression of two-dimensional ultrasound. However, three-dimensional imaging of the top of the head is more difficult than frontal or lateral imaging and often requires a slight rotation from a lateral or posterior view to observe structures of interest such as the sagittal suture. The general workflow of a three-dimensional ultrasound examination is as follows: the clinician first frames a suitable region of interest for three-dimensional imaging, then manually rotates the acquired three-dimensional data to observe the sagittal suture and fontanels, and finally sets a suitable rendering mode to display the bony structure of the skull. This workflow is complicated and time-consuming.
Disclosure of Invention
This summary introduces a selection of concepts in simplified form that are further described in the detailed description. It is not intended to identify the key or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
A first aspect of embodiments of the present application provides a method of ultrasound imaging of a fetal skull, the method comprising:
the processor controls the ultrasonic probe to emit ultrasonic waves to the cranium of the fetus to be tested and receives the echo of the ultrasonic waves so as to obtain echo signals of the ultrasonic waves;
the processor obtains three-dimensional ultrasonic data of the cranium of the fetus to be tested based on the echo signals of the ultrasonic waves;
the processor determines a target position based on the three-dimensional ultrasonic data, wherein the target position is a position for enabling a skull region in the three-dimensional ultrasonic data to be under a rendering angle;
the processor rotates the three-dimensional ultrasound data to the target position based on the target position;
the processor determining a skull region in the three-dimensional ultrasound data;
and the processor renders the skull region in the rotated three-dimensional ultrasonic data to obtain a rendered image, and controls a display to display the rendered image.
A second aspect of embodiments of the present application provides a method of ultrasound imaging of a fetal skull, the method comprising:
the processor obtains three-dimensional ultrasonic data of the cranium of the fetus to be tested;
the processor determining a target position based on the three-dimensional ultrasound data;
the processor rotates the three-dimensional ultrasound data to the target position based on the target position;
the processor determining a skull region in the three-dimensional ultrasound data;
and the processor renders the skull region in the rotated three-dimensional ultrasonic data to obtain a rendered image, and controls a display to display the rendered image.
A third aspect of embodiments of the present application provides an ultrasound imaging method of a fetal skull, the method comprising:
acquiring three-dimensional ultrasonic data of the cranium of a fetus to be tested;
determining a skull region in the three-dimensional ultrasound data based on skull image features of the fetus;
rendering the skull region to obtain a rendered image;
and displaying the rendered image.
A fourth aspect of embodiments of the present application provides an ultrasound imaging system, comprising:
an ultrasonic probe;
the transmitting circuit is used for exciting the ultrasonic probe to transmit ultrasonic waves to the cranium of the fetus to be tested;
a receiving circuit for controlling the ultrasonic probe to receive the echo of the ultrasonic wave so as to obtain an echo signal of the ultrasonic wave;
a processor for performing the steps of the ultrasound imaging method of a fetal skull according to the first aspect of the embodiments of the present invention;
and the display is used for displaying the rendered image obtained by the processor.
A fifth aspect of embodiments of the present application provides an ultrasound imaging system comprising:
an ultrasonic probe;
the transmitting circuit is used for exciting the ultrasonic probe to transmit ultrasonic waves to the cranium of the fetus to be tested;
the receiving circuit is used for controlling the ultrasonic probe to receive the ultrasonic wave echo so as to obtain an ultrasonic echo signal;
a processor for performing the steps of the ultrasound imaging method of a fetal skull according to the second aspect of the embodiments of the present invention;
and the display is used for displaying the rendered image obtained by the processor.
A sixth aspect of embodiments of the present application provides an ultrasound imaging system, comprising:
an ultrasonic probe;
the transmitting circuit is used for exciting the ultrasonic probe to transmit ultrasonic waves to the cranium of the fetus to be tested;
a receiving circuit for controlling the ultrasonic probe to receive the echo of the ultrasonic wave so as to obtain an echo signal of the ultrasonic wave;
a processor for performing the steps of the method for ultrasound imaging of fetal skull according to the third aspect of the embodiments of the present invention;
and the display is used for displaying the rendered image obtained by the processor.
According to the ultrasonic imaging method and the ultrasonic imaging system of the fetal skull, automatic imaging of the fetal skull can be achieved, manual operation of a user is greatly reduced, and efficiency and accuracy of ultrasonic inspection of the fetal skull are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
In the drawings:
FIG. 1 shows a schematic block diagram of an ultrasound imaging system according to an embodiment of the present application;
FIG. 2 shows a schematic flow chart of a method of ultrasound imaging of a fetal skull according to an embodiment of the invention;
FIG. 3 shows a schematic diagram of characteristic structures of a fetal skull and a corresponding rendered image according to an embodiment of the invention;
FIG. 4 shows a schematic diagram of determining a region of interest based on CMPR according to one embodiment of the present invention;
FIG. 5 shows a schematic flow chart of a method of ultrasound imaging of a fetal skull according to another embodiment of the invention;
fig. 6 shows a schematic flow chart of a method of ultrasound imaging of a fetal skull according to yet another embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, exemplary embodiments according to the present application will be described in detail below with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein. Based on the embodiments of the present application described herein, all other embodiments that may be made by one skilled in the art without the exercise of inventive faculty are intended to fall within the scope of protection of the present application.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced without one or more of these details. In other instances, some features well known in the art have not been described in order to avoid obscuring the present application.
It should be understood that the present application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
For a thorough understanding of the present application, detailed structures will be presented in the following description in order to illustrate the technical solutions presented herein. Alternative embodiments of the present application are described in detail below, however, the present application may have other implementations in addition to these detailed descriptions.
Next, an ultrasound imaging system according to an embodiment of the present application is described first with reference to fig. 1, fig. 1 showing a schematic block diagram of an ultrasound imaging system 100 according to an embodiment of the present application.
As shown in fig. 1, the ultrasound imaging system 100 includes an ultrasound probe 110, transmit circuitry 112, receive circuitry 114, a processor 116, and a display 118. Further, the ultrasound imaging system may also include a transmit/receive selection switch 120 and a beamforming module 122, and the transmit circuit 112 and the receive circuit 114 may be connected to the ultrasound probe 110 through the transmit/receive selection switch 120.
The ultrasonic probe 110 includes a plurality of transducer elements, which may be arranged in a row to form a linear array, arranged in a two-dimensional matrix to form an area array, or arranged as a convex array. The transducer elements transmit ultrasonic waves in response to excitation electrical signals and convert received ultrasonic waves into electrical signals; each element can thus convert between electrical pulse signals and ultrasonic waves, both transmitting ultrasound toward the tissue of the target region of the object under test and receiving ultrasound echoes reflected by the tissue. During ultrasonic detection, transmit and receive sequences control which transducer elements are used to transmit ultrasound and which are used to receive, or control the elements to alternate between transmitting and receiving in time slots. The elements participating in transmission may be excited by electrical signals simultaneously so as to transmit ultrasound simultaneously; alternatively, they may be excited by several electrical signals at certain time intervals so as to transmit ultrasound continuously at those intervals.
During ultrasound imaging, the transmit circuit 112 sends delay-focused transmit pulses to the ultrasound probe 110 through the transmit/receive selection switch 120. Excited by the transmit pulses, the ultrasound probe 110 emits an ultrasound beam toward the tissue of the target region of the object under test, receives, after a certain delay, the ultrasound echoes carrying tissue information reflected from that tissue, and converts the echoes back into electrical signals. The receiving circuit 114 receives the electrical signals converted by the ultrasonic probe 110, obtains ultrasonic echo signals, and sends them to the beamforming module 122, which performs focusing delay, weighting, channel summation and other processing on the echo data before sending it to the processor 116. The processor 116 performs signal detection, signal enhancement, data conversion, logarithmic compression and other processing on the ultrasonic echo signals to form an ultrasound image. The ultrasound images obtained by the processor 116 may be displayed on the display 118 or stored in the memory 124.
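The focusing delay, weighting, and channel summation described above can be illustrated with a minimal delay-and-sum sketch; the per-element delays, weights, and sample data below are hypothetical, for illustration only, and do not represent the device's actual implementation:

```python
# Minimal delay-and-sum beamforming sketch (illustrative only).
# Each channel's echo samples are shifted by a per-element focusing
# delay, weighted (apodized), and summed into one beamformed line.

def delay_and_sum(channel_data, delays, weights):
    """channel_data: list of per-element sample lists;
    delays: integer sample shift per element;
    weights: apodization weight per element."""
    n_samples = len(channel_data[0])
    beamformed = [0.0] * n_samples
    for samples, d, w in zip(channel_data, delays, weights):
        for i in range(n_samples):
            j = i + d  # apply the focusing delay for this element
            if 0 <= j < len(samples):
                beamformed[i] += w * samples[j]
    return beamformed

# Two-channel toy example: the delays realign the same echo so that
# the channels add coherently at sample index 2.
ch0 = [0, 0, 1, 0, 0]
ch1 = [0, 0, 0, 1, 0]  # same echo arriving one sample later
out = delay_and_sum([ch0, ch1], delays=[0, 1], weights=[0.5, 0.5])
```

After realignment the echo energy concentrates at one sample, which is the point of coherent summation across channels.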
Alternatively, the processor 116 may be implemented as software, hardware, firmware, or any combination thereof, and may use single or multiple application specific integrated circuits (Application Specific Integrated Circuit, ASIC), single or multiple general purpose integrated circuits, single or multiple microprocessors, single or multiple programmable logic devices, or any combination of the foregoing circuits and/or devices, or other suitable circuits or devices. Also, the processor 116 may control other components in the ultrasound imaging system 100 to perform the respective steps of the methods in the various embodiments in this specification.
The display 118 is connected with the processor 116, and the display 118 may be a touch display screen, a liquid crystal display screen, or the like; alternatively, the display 118 may be a stand-alone display such as a liquid crystal display, television, or the like that is independent of the ultrasound imaging system 100; alternatively, the display 118 may be a display screen of an electronic device such as a smart phone, tablet, or the like. Wherein the number of displays 118 may be one or more. For example, the display 118 may include a main screen for primarily displaying ultrasound images and a touch screen for primarily human-machine interaction.
The display 118 may display the ultrasound image obtained by the processor 116. In addition, while displaying the ultrasound image, the display 118 may provide a graphical interface for human-computer interaction; one or more controlled objects are provided on the graphical interface, and the user can input operation instructions through a human-machine interaction device to control these objects and thereby perform corresponding control operations. For example, icons may be displayed on the graphical interface that can be operated with the human-machine interaction device to perform specific functions, such as drawing a region-of-interest box on the ultrasound image.
Optionally, the ultrasound imaging system 100 may further include other human-machine interaction devices in addition to the display 118, which are coupled to the processor 116, for example, the processor 116 may be coupled to the human-machine interaction device through an external input/output port, which may be a wireless communication module, a wired communication module, or a combination of both. The external input/output ports may also be implemented based on USB, bus protocols such as CAN, and/or wired network protocols, among others.
The man-machine interaction device may include an input device for detecting input information of a user, and the input information may be, for example, a control instruction for an ultrasonic wave transmission/reception timing, an operation input instruction for drawing a point, a line, a frame, or the like on an ultrasonic image, or may further include other instruction types. The input device may include one or more of a keyboard, mouse, scroll wheel, trackball, mobile input device (e.g., a mobile device with a touch display, a cell phone, etc.), multi-function knob, etc. The human-machine interaction means may also comprise an output device such as a printer.
The ultrasound imaging system 100 may also include a memory 124 for storing instructions executed by the processor, received ultrasound echoes, ultrasound images, and so forth. The memory may be a flash memory card, solid-state memory, a hard disk, or the like, and may be volatile and/or non-volatile, removable and/or non-removable.
It should be understood that the components included in the ultrasound imaging system 100 shown in fig. 1 are illustrative only and may include more or fewer components. The present application is not limited thereto.
Next, an ultrasound imaging method of a fetal skull according to an embodiment of the present application will be described with reference to fig. 2. Fig. 2 is a schematic flow chart of an ultrasound imaging method 200 of a fetal skull in accordance with an embodiment of the present application.
As shown in fig. 2, an ultrasound imaging method 200 of a fetal skull in one embodiment of the present application includes the steps of:
in step S210, the processor controls the ultrasonic probe to emit ultrasonic waves to the cranium of the fetus to be tested, and receives the echoes of the ultrasonic waves to obtain echo signals of the ultrasonic waves;
in step S220, the processor obtains three-dimensional ultrasound data of the cranium of the fetus to be tested based on the echo signals of the ultrasound waves;
in step S230, the processor determines a target position based on the three-dimensional ultrasound data, the target position being a position that places a skull region in the three-dimensional ultrasound data at a rendering angle;
at step S240, the processor rotates the three-dimensional ultrasound data to the target position based on the target position;
In step S250, the processor determines a skull region in the three-dimensional ultrasound data;
in step S260, the processor renders the skull region in the rotated three-dimensional ultrasound data to obtain a rendered image, and controls a display to display the rendered image.
The ultrasonic imaging method 200 of the fetal skull can automatically image the fetal skull, greatly reduces manual operation of a user, and improves efficiency and accuracy of fetal skull examination.
Illustratively, in step S210, an ultrasound scan may be performed with the ultrasound imaging system 100 shown in fig. 1. The user moves the ultrasonic probe 110 to a suitable position and angle to perform a three-dimensional ultrasonic scan of the cranium of the fetus under test; the scan may be directed at the sagittal plane, coronal plane, or transverse plane of the fetal cranium. During the scan, the transmit circuit 112 sends a set of delay-focused transmit pulses to the ultrasound probe 110 to excite it to transmit ultrasound along a two-dimensional scan plane toward the fetal cranium. The receiving circuit 114 controls the ultrasonic probe 110 to receive the ultrasonic echoes reflected by the fetal cranium and convert them into electrical signals; the beamforming module 122 applies the corresponding delays and weighted summation to the ultrasonic echo signals obtained over multiple transmissions and receptions to achieve beamforming, and then sends them to the processor 116 for subsequent signal processing.
In step S220, the processor of the ultrasound imaging system obtains three-dimensional ultrasound data of the cranium of the fetus under test based on the received echo signals. Illustratively, with continued reference to fig. 1, the processor 116 may combine, according to their three-dimensional spatial relationship, the ultrasonic echo signals scanned by the ultrasound probe 110 over a series of scan planes, thereby achieving a scan of the fetal cranium in three-dimensional space and reconstructing the three-dimensional ultrasound data. Finally, after some or all of the image post-processing steps such as denoising, smoothing, and enhancement, the three-dimensional ultrasound data of the fetal cranium is obtained.
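The smoothing step among the post-processing operations can be illustrated with a simple mean (box) filter over a small volume. This is a minimal sketch only; an actual system would use optimized filtering rather than this nested-list implementation:

```python
# Minimal 3x3x3 mean-filter sketch for smoothing a reconstructed
# volume. The volume is a nested list indexed as volume[z][y][x].

def mean_filter_3d(volume):
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    out = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                acc, cnt = 0.0, 0
                for dz in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            zz, yy, xx = z + dz, y + dy, x + dx
                            if 0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx:
                                acc += volume[zz][yy][xx]
                                cnt += 1
                # Average over the in-bounds neighborhood only.
                out[z][y][x] = acc / cnt
    return out

# A single bright (noisy) voxel is spread out by the filter.
vol = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
vol[1][1][1] = 27.0
smoothed = mean_filter_3d(vol)
```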
In step S230, the processor determines a target position based on the three-dimensional ultrasound data, the target position being a position that places the skull region in the three-dimensional ultrasound data at a rendering angle. After the target position is determined, in step S240 the processor rotates the three-dimensional ultrasound data to the target position determined in step S230; that is, automatic alignment of the three-dimensional ultrasound data is achieved by the processor. Rotating the three-dimensional ultrasound data to the target position brings the skull region to the rendering angle, so that a rendered image containing the skull region can ultimately be obtained without the user rotating the data manually, improving the efficiency and accuracy of fetal skull ultrasound examination.
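Rotating the volume to the target position amounts to applying a rotation to each voxel coordinate. A minimal sketch of rotating a single point about the z-axis follows (the actual system rotates the full 3D data set; this only illustrates the coordinate transform):

```python
import math

def rotate_z(point, angle_deg):
    """Rotate a 3D point (x, y, z) about the z-axis by angle_deg."""
    a = math.radians(angle_deg)
    x, y, z = point
    # Standard 2D rotation applied in the x-y plane; z is unchanged.
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

# Rotating (1, 0, 0) by 90 degrees about z moves it onto the y-axis.
p = rotate_z((1.0, 0.0, 0.0), 90.0)
```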
The target position may be a position that places the region of a feature structure of interest of the skull in the three-dimensional ultrasound data at a rendering angle. Feature structures of interest may include cranial sutures, fontanels, and other structures that the user needs to examine carefully; they may also be other features of the fetal skull to be observed. Since different features of interest may be located at different orientations of the fetal skull, there may be a plurality of target positions, e.g. the front, side, or top of the fetal skull, and different target positions may correspond to different features of interest. For example, a plurality of candidate target positions may be preset for the user to select from; the processor determines the target position according to the received selection instruction and automatically performs ultrasonic imaging of the skull of the fetus under test at that target position, so that the user can observe the feature structure of interest corresponding to it.
As an alternative implementation, determining the target position may include: detecting regions of target feature structures in the three-dimensional ultrasound data, and determining the angle through which the three-dimensional ultrasound data is to be rotated either from the position of a detected target feature structure's region or from the relative positional relationship between the regions of at least two target feature structures. The target feature structure comprises at least one of: the midline of the brain, thalamus, corpus callosum, eyeball, brainstem, cerebellum, fontanel, cranial suture, and skull; the target feature structure may also be the fetal cranium or another landmark structure of the skull.
A conventional target detection method or a machine learning method may be adopted to detect the target feature structure in the three-dimensional ultrasound data. When detecting the target feature structure, detection may be performed on a plurality of two-dimensional sections of the three-dimensional ultrasound data, and the per-section detection results combined to obtain a three-dimensional detection result of the target feature structure in the three-dimensional ultrasound data; alternatively, detection may be performed directly in three dimensions on the three-dimensional ultrasound data to obtain the three-dimensional detection result.
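Combining per-section 2D detections into one 3D result, as described above, can be sketched as taking the union of the 2D boxes across slices. The box tuple format `(z, x0, y0, x1, y1)` here is a hypothetical convention chosen for illustration:

```python
def merge_slice_detections(boxes_2d):
    """Merge per-slice 2D boxes (z, x0, y0, x1, y1) of one structure
    into a single 3D bounding box (x0, y0, z0, x1, y1, z1)."""
    x0 = min(b[1] for b in boxes_2d)
    y0 = min(b[2] for b in boxes_2d)
    z0 = min(b[0] for b in boxes_2d)
    x1 = max(b[3] for b in boxes_2d)
    y1 = max(b[4] for b in boxes_2d)
    z1 = max(b[0] for b in boxes_2d)
    return (x0, y0, z0, x1, y1, z1)

# Detections of the same structure on three adjacent slices.
boxes = [(10, 5, 5, 20, 18), (11, 4, 6, 21, 19), (12, 5, 5, 20, 18)]
box3d = merge_slice_detections(boxes)
```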
For example, a conventional target detection method may include three steps: region selection, feature extraction, and classification. Specifically, region selection refers to selecting candidate target regions, for example with a sliding window; feature extraction refers to extracting features from a candidate target region, such as SIFT (scale-invariant feature transform) or HOG (histogram of oriented gradients) features. Classification refers to classifying the candidate target region with a classifier, such as KNN (k-nearest neighbors), SVM (support vector machine), or random forest, to determine whether the current candidate region contains the target feature structure. Conventional target detection methods may also include pixel clustering, edge segmentation, graph cut, or threshold-based image segmentation algorithms, and the like.
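The three-step conventional pipeline (region selection, feature extraction, classification) can be sketched as follows; here a mean-intensity feature stands in for SIFT/HOG and a threshold stands in for the SVM/KNN classifier, purely for illustration:

```python
def detect_sliding_window(image, win, stride, classify):
    """Slide a win x win window over a 2D image; return top-left
    corners of windows the classifier accepts as candidates."""
    hits = []
    h, w = len(image), len(image[0])
    for y in range(0, h - win + 1, stride):       # region selection
        for x in range(0, w - win + 1, stride):
            patch = [row[x:x + win] for row in image[y:y + win]]
            # Feature extraction: mean intensity (stand-in for SIFT/HOG).
            feat = sum(sum(r) for r in patch) / (win * win)
            # Classification: threshold stand-in for an SVM/KNN classifier.
            if classify(feat):
                hits.append((x, y))
    return hits

# Bright 2x2 blob in a dark 4x4 image.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
found = detect_sliding_window(img, win=2, stride=1, classify=lambda f: f > 5)
```

Only the window fully covering the blob passes the classifier, giving a single candidate region at (1, 1).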
Detecting a target feature structure in the three-dimensional ultrasound data with a machine learning method requires constructing in advance a three-dimensional ultrasound database for each target feature structure, in which each set of three-dimensional ultrasound data is annotated with the position of the corresponding target feature structure; an optimal mapping function from the three-dimensional ultrasound data to the target feature structure is then learned from this database. The machine learning method may include the following approaches, which may be used alone or in combination.
The first alternative machine learning method is based on a sliding window. Specifically, features are first extracted from the region inside the sliding window; the extracted features may be traditional features such as PCA (principal component analysis), LDA (linear discriminant analysis), Haar features, or textures, or may be features extracted by a deep neural network. A trained classifier is then used to classify the region and determine whether the current window contains the target feature structure.
The second alternative machine learning method is a bounding-box-based deep learning method. First, a network is constructed by stacking convolutional layers and fully connected layers, and feature learning and parameter regression are performed through this network on the constructed three-dimensional ultrasound database. Training samples from the database are fed into the pre-built network, and the network's loss function is optimized during training until the network converges; in this process, the network learns how to identify the position of the target feature structure in the three-dimensional ultrasound data. The model may be trained directly on the three-dimensional ultrasound data, or the three-dimensional data may be decomposed into a plurality of two-dimensional sections that are trained separately, with the per-section results combined into the training result for the three-dimensional data.
After the network is trained, for three-dimensional ultrasound data input into the network, the bounding box of the corresponding target feature structure can be regressed directly, together with the category of the target feature structure contained in the bounding box. Suitable network structures include, but are not limited to, R-CNN, Fast R-CNN, SSD, YOLO, and the like.
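A detail worth making concrete: when training or evaluating such a bounding-box network, the regressed box is commonly compared against the annotated box with a 3D intersection-over-union score. The helper below is an illustrative sketch of that metric, not code from the patent; boxes are axis-aligned and given as corner coordinates:

```python
import numpy as np

def iou_3d(a, b):
    """3D intersection-over-union of two axis-aligned boxes given as
    (z0, y0, x0, z1, y1, x1). Useful for matching a regressed bounding
    box against the annotated target feature structure."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    lo = np.maximum(a[:3], b[:3])          # intersection lower corner
    hi = np.minimum(a[3:], b[3:])          # intersection upper corner
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter)

# Identical boxes overlap perfectly; disjoint boxes score 0.
print(iou_3d((0, 0, 0, 4, 4, 4), (0, 0, 0, 4, 4, 4)))  # 1.0
print(iou_3d((0, 0, 0, 2, 2, 2), (2, 2, 2, 4, 4, 4)))  # 0.0
```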
The third alternative machine learning method is an end-to-end semantic segmentation network based on deep learning. Its structure is similar to that of the bounding-box based deep learning method, except that the semantic segmentation network removes the last fully connected layer of the network and adds up-sampling or deconvolution layers so that the input and output sizes are the same, thereby directly obtaining the target feature structure in the input three-dimensional ultrasound data and its corresponding category. Illustratively, suitable network structures for the semantic segmentation network include, but are not limited to, FCN, U-Net, Mask R-CNN, and the like.
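The key property of the segmentation route, that the output has the same spatial size as the input and directly labels the target voxels, can be illustrated with a toy per-voxel argmax over class scores (the class ids and score values here are invented for the example):

```python
import numpy as np

# A semantic segmentation network outputs per-voxel class scores with the
# same spatial size as the input; the predicted label volume is the
# per-voxel argmax. Class ids (0 = background, 1 = target) are
# illustrative, not the patent's labelling.
scores = np.zeros((2, 4, 4, 4))           # (classes, depth, height, width)
scores[0] = 0.6                           # background score everywhere
scores[1, 1:3, 1:3, 1:3] = 0.9            # target scores in a small cube

labels = scores.argmax(axis=0)            # label volume, same size as input
target_voxels = np.argwhere(labels == 1)  # voxel coordinates of the target
print(labels.shape, len(target_voxels))   # (4, 4, 4) 8
```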
After the target feature structure in the three-dimensional ultrasound data is detected, the angle by which the three-dimensional ultrasound data of the fetal cranium needs to be rotated can be calculated indirectly from the target feature structure. The angle may be determined from the position of the region of at least one target feature structure; for example, the rotation required to bring the three-dimensional ultrasound data to the target orientation may be determined indirectly by calculating the rotation required to move the target feature structure from its current position to the target position. Alternatively, the angle may be determined from the relative positional relationship between the regions of at least two target feature structures; for example, it may be determined based on the symmetry between target feature structures.
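As a minimal illustration of the symmetry-based variant: if two symmetric landmarks have been detected, the in-plane angle needed to level the line through them can be computed with `atan2`. The landmark coordinates and the choice of horizontal as the target direction are assumptions of this sketch, not the patent's method:

```python
import math

def in_plane_rotation(p_left, p_right):
    """Angle (radians) by which to rotate so that the line through two
    symmetric landmarks becomes horizontal in the x-y plane. The landmark
    names are illustrative; the document only requires that the angle be
    derivable from the positions of detected feature structures."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return -math.atan2(dy, dx)

# Two landmarks lying at 45 degrees need a -45 degree in-plane rotation.
angle = in_plane_rotation((0.0, 0.0), (1.0, 1.0))
print(round(math.degrees(angle), 1))  # -45.0
```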
Then, the three-dimensional ultrasound data acquired in step S220 is rotated to the target orientation according to the angle determined from the target feature structure. Because the data are three-dimensional, each of the three dimensions corresponds to its own rotation angle; rotating each dimension by its corresponding angle yields the rotated three-dimensional ultrasound data.
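The per-dimension rotation can be sketched as composing three axis rotations into one matrix; the composition order (Rz·Ry·Rx) is a convention chosen for this illustration, not mandated by the document:

```python
import numpy as np

def rot3(ax, ay, az):
    """Rotation matrix built from one angle (radians) per dimension,
    applied as Rz @ Ry @ Rx; the order is a convention of this sketch."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return rz @ ry @ rx

# Rotating the x unit vector by 90 degrees about z yields the y axis.
v = rot3(0.0, 0.0, np.pi / 2) @ np.array([1.0, 0.0, 0.0])
print(np.round(v, 6))  # [0. 1. 0.]
```

Applying the matrix to every voxel coordinate (with interpolation back onto the grid) produces the rotated volume.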
As another alternative implementation of rotating the three-dimensional ultrasound data to the target orientation, a trained machine learning model may be used to directly regress the angle by which the three-dimensional ultrasound data needs to be rotated, and the data is then rotated to the target orientation according to that angle.
When a machine learning model is used to determine the angle by which the three-dimensional ultrasound data needs to be rotated, a three-dimensional ultrasound database of the fetal cranium needs to be constructed in advance for training the model. This database contains three-dimensional ultrasound data of at least one fetal cranium together with a corresponding calibration result, where the calibration result is the angle by which that data needs to be rotated; the trained model can then regress the rotation angle directly. The machine learning model may employ a deep learning network, including but not limited to VGG, ResNet, DenseNet, DPN, and the like.
Then, the three-dimensional ultrasound data acquired in step S220 is rotated to the target orientation according to the angle output by the machine learning model. Similarly, the model may output one rotation angle per dimension, and rotating each of the three dimensions by its corresponding angle yields the rotated three-dimensional ultrasound data.
Then, in step S250, the processor determines a skull region in the three-dimensional ultrasound data of the fetal cranium, thereby determining the final imaging display range of the data. Rendering only the skull improves the accuracy of ultrasound imaging of the fetal skull and prevents other tissue regions from degrading the rendering of the skull region. Specifically, the processor may determine the skull region in the three-dimensional ultrasound data before rotation and then rotate it to obtain the rotated skull region, or it may determine the skull region in the already-rotated three-dimensional ultrasound data.
Illustratively, depending on the shape of the skull region and the way it is formed, the skull region may be determined by the following methods:
A first alternative method of determining the skull region comprises: segmenting the boundary of the skull region in the three-dimensional ultrasound data and taking the region enclosed by that boundary as the skull region. Specifically, a conventional object detection method or a machine learning method similar to those in step S230 may be employed to automatically segment the skull region from the three-dimensional ultrasound data, with the segmentation result taken as the determined skull region.
A second alternative method of determining the skull region comprises: detecting a region of interest containing the skull region in the three-dimensional ultrasound data, and determining the skull region within that region of interest. This method automatically detects the position and size of the skull region so that a suitable region of interest can be set. Specifically, the position of the skull region may first be identified using a conventional object detection method or a machine learning method similar to those in step S230, and its size may then be calculated from the identification result. Finally, a suitable bounding box, for example a maximum circumscribed trapezoidal box, is set as the region of interest according to the position and size of the skull region, and the skull region is determined within it.
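A hypothetical sketch of this method's last step: once the skull's position is known (here, represented as a detection mask), a region of interest can be set from the tight bounding box of the detected voxels plus a margin. A simple axis-aligned box is used in this sketch rather than the circumscribed box described above:

```python
import numpy as np

def roi_from_mask(mask, margin=2):
    """Tight bounding box around the nonzero voxels of a detection mask,
    expanded by a margin and clipped to the volume; a simple stand-in for
    setting the region of interest from the identified skull position
    and size."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, mask.shape)
    return tuple(int(v) for v in lo), tuple(int(v) for v in hi)

# Toy detection mask with a small block of "skull" voxels.
mask = np.zeros((20, 20, 20), dtype=bool)
mask[5:10, 6:11, 7:12] = True
lo, hi = roi_from_mask(mask, margin=2)
print(lo, hi)  # (3, 4, 5) (12, 13, 14)
```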
A third alternative method of determining the skull region comprises: determining the skull region by setting a CMPR (Curved Multi-Planar Rendering) reference line. Specifically, a target two-dimensional section containing the skull region is extracted from the three-dimensional ultrasound data; a CMPR reference line is drawn on the target two-dimensional section along the skull region, and a two-dimensional region of interest is determined in the target two-dimensional section according to this reference line; a three-dimensional region is then selected by extending the two-dimensional region of interest in the direction perpendicular to the target two-dimensional section; and the skull region is determined in the selected three-dimensional region. The CMPR reference line is a curve drawn along the skull region on a target two-dimensional section of the three-dimensional ultrasound data; CMPR imaging is performed by selecting a suitable thickness in the direction perpendicular to the target two-dimensional section, and during CMPR imaging the three-dimensional ultrasound data within that thickness is straightened along the direction of the reference line. Referring to fig. 3, a CMPR reference line is shown on the left side of fig. 3, and a rendered image based on that reference line is shown on the right.
Illustratively, extracting the target two-dimensional section containing the skull region from the three-dimensional ultrasound data includes: intercepting at least one two-dimensional section at random from the three-dimensional ultrasound data, or intercepting at least one two-dimensional section along a preset direction; and determining, among the at least one two-dimensional section, a section containing the skull region as the target two-dimensional section.
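A much-simplified, hypothetical sketch of the straightening idea behind CMPR: for each column of a 2D section, the pixels within a fixed thickness of the reference curve are stacked, flattening a curved band into a rectangle (true CMPR samples along the curve's normals rather than along axis-aligned columns):

```python
import numpy as np

def straighten(image, center_rows, half_thickness):
    """Straighten a curved band: for each column, stack the pixels within
    `half_thickness` rows of the reference curve. A simplified,
    axis-aligned stand-in for CMPR straightening."""
    h, w = image.shape
    t = half_thickness
    out = np.zeros((2 * t + 1, w), dtype=image.dtype)
    for x in range(w):
        c = int(center_rows[x])
        rows = np.clip(np.arange(c - t, c + t + 1), 0, h - 1)
        out[:, x] = image[rows, x]
    return out

# A bright curved band following the reference curve becomes a flat
# horizontal band after straightening.
img = np.zeros((12, 6))
curve = np.array([2, 3, 4, 5, 6, 7])    # reference row per column
for x in range(6):
    img[curve[x], x] = 1.0
flat = straighten(img, curve, half_thickness=1)
print(flat[1])  # [1. 1. 1. 1. 1. 1.] — the band lies on the center row
```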
In step S260, the processor renders the skull region in the rotated three-dimensional ultrasound data to obtain a rendered image, and controls the display to display it. Because of occlusion by surface-layer and middle-layer tissue structures, the bony structure of the fetal skull cannot be observed intuitively by directly imaging the three-dimensional ultrasound data; rendering the skull region presents the bony structure clearly, making it convenient for the user to examine the development of the skull. Rendering the skull region may mean rendering only the skull region, for example removing the three-dimensional ultrasound data outside the skull region and rendering what remains. Alternatively, it may mean rendering a region containing the skull region while distinguishing the skull region from other regions through the rendering settings, so as to highlight the bony structure of the skull.
Illustratively, methods of rendering the skull region generally include volume rendering and surface rendering.
The volume rendering method is mainly a ray-casting algorithm. The algorithm casts a plurality of rays through the three-dimensional ultrasound data along the line-of-sight direction; each ray advances in fixed steps, sampling the three-dimensional ultrasound data along its path. The color and opacity of each sampling point are calculated and accumulated along the ray path, and the accumulated color value is finally mapped to a pixel of the 2D image, yielding a VR (Volume Rendering) image. The three-dimensional rendering mode adopted for volume rendering includes any one of the following:
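The per-ray accumulation step can be sketched as standard front-to-back compositing; the color and opacity values below are invented inputs, and the transfer function that would produce them from sample intensities is omitted:

```python
def composite_ray(colors, opacities):
    """Front-to-back accumulation of color and opacity along one ray,
    the core step of the ray-casting volume rendering described above."""
    acc_c, acc_a = 0.0, 0.0
    for c, a in zip(colors, opacities):
        acc_c += (1.0 - acc_a) * a * c   # light surviving to this sample
        acc_a += (1.0 - acc_a) * a
        if acc_a >= 1.0 - 1e-6:          # early termination: ray saturated
            break
    return acc_c, acc_a

# A fully opaque first sample hides everything behind it.
print(composite_ray([0.8, 0.1], [1.0, 1.0]))  # (0.8, 1.0)
# Two half-opaque samples: 0.5*0.8 + 0.5*0.5*0.2 = 0.45
print(composite_ray([0.8, 0.2], [0.5, 0.5]))
```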
a. a surface imaging mode (Surface mode), which mainly displays object surface information;
b. a maximum echo mode (Max mode), which mainly displays the maximum-value information inside the object;
c. a minimum echo mode (Min mode), which mainly displays the minimum-value information inside the object;
d. an X-Ray mode (X-Ray mode), which mainly displays internal structural information of the object;
e. a shadow imaging mode (Volume Rendering with Global Illumination mode), which displays object surface information based on a global illumination model and can simulate realistic skin texture and shadow effects;
f. a profile mode (Silhouette mode), which displays the internal and external profile information of the object through a semi-transparent effect.
Alternatively, several of the above three-dimensional rendering modes may be combined with each other.
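Three of these modes reduce to simple per-ray reduction operators, which can be illustrated directly on a toy volume (the viewing direction is taken as the first axis purely for simplicity):

```python
import numpy as np

# Max, Min and X-Ray modes reduce the volume along the viewing direction
# with different operators.
vol = np.arange(8.0).reshape(2, 2, 2)   # toy 2x2x2 volume

max_mode = vol.max(axis=0)     # maximum echo mode: brightest sample wins
min_mode = vol.min(axis=0)     # minimum echo mode: darkest sample wins
xray_mode = vol.mean(axis=0)   # X-Ray mode: average intensity along the ray
```

The surface and shadow modes additionally require opacity compositing and an illumination model, so they do not reduce to a single operator.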
Surface rendering methods fall mainly into two types: slice-contour based reconstruction (Delaunay) and iso-surface extraction within voxels (Marching Cubes). Taking Marching Cubes as an example, a triangular mesh model is built by extracting iso-surface (i.e. surface contour) information of the tissue or organ in the three-dimensional ultrasound data, namely the normal vectors and vertex coordinates of the triangular patches; three-dimensional rendering is then performed in combination with an illumination model comprising ambient light, scattered light, specular highlights and the like, where different light source parameters (type, direction, position and angle) affect the effect of the illumination model to different degrees, so that a rendered image is obtained.
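Two small pieces of this pipeline can be made concrete: the per-patch normal extracted for the triangle mesh, and a minimal ambient-plus-diffuse shading term (specular highlights and light-source types are omitted in this sketch, and the parameter values are arbitrary):

```python
import numpy as np

def triangle_normal(v0, v1, v2):
    """Unit normal of a triangular patch, as used by surface rendering
    when shading a Marching Cubes mesh with an illumination model."""
    n = np.cross(np.subtract(v1, v0), np.subtract(v2, v0))
    return n / np.linalg.norm(n)

def lambert(normal, light_dir, ambient=0.1, diffuse=0.9):
    """Minimal ambient + diffuse shading; specular terms and light types
    are omitted in this sketch."""
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    return ambient + diffuse * max(0.0, float(np.dot(normal, l)))

# A triangle in the x-y plane faces +z; lit head-on it gets full shading.
n = triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(n)                      # [0. 0. 1.]
print(lambert(n, (0, 0, 1)))  # 1.0
```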
Further, after obtaining the rendered image, the processor may also control the display to display identifiers characterizing the feature structures corresponding to different regions in the rendered image, for the user's reference. The displayed identifiers may be the names of the feature structures corresponding to the different regions, such as the parietal bone, the bregma and the like; they may also be symbols or graphics capable of characterizing the different feature structures.
In one embodiment, the processor may generate the identifiers of the feature structures corresponding to different regions in the rendered image according to the correspondence between a feature structure map of the fetal skull and the rendered image. Referring to fig. 4, a feature structure map of the fetal skull is shown on the left and a rendered image on the right. The identifier of the feature structure corresponding to each region is displayed in the feature structure map, and the processor can generate the identifier of the same feature structure at the corresponding position in the rendered image according to the identifiers in the map. In this embodiment, the processor need not delineate the specific area of each feature structure in the rendered image.
In another embodiment, the processor may first determine the regions of different feature structures in the rendered image, and generate the identifier of the feature structure corresponding to each region from those regions.
Illustratively, determining the regions corresponding to different feature structures in the rendered image includes: extracting image features of the rendered image; and classifying the image features to divide the rendered image into regions corresponding to different feature structures. Specifically, feature extraction and classification may be performed using a conventional object detection method or a machine learning method similar to those in step S230 to determine the regions corresponding to different feature structures in the rendered image.
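The classification step can be sketched with a toy nearest-centroid classifier over per-region feature vectors; the feature values, centroids and bone names are invented for illustration, whereas the actual system would use a trained classifier or network as described above:

```python
import numpy as np

def classify_regions(features, centroids):
    """Assign each region's feature vector to the nearest class centroid,
    a minimal stand-in for the trained classifier that maps extracted
    image features to feature-structure categories. The class names
    used below are illustrative labels only."""
    names = list(centroids)
    c = np.array([centroids[k] for k in names])
    out = []
    for f in features:
        d = np.linalg.norm(c - np.asarray(f, float), axis=1)
        out.append(names[int(d.argmin())])
    return out

# Invented 2D feature vectors for two rendered-image regions.
centroids = {"parietal bone": [0.8, 0.2], "frontal bone": [0.2, 0.9]}
labels = classify_regions([[0.75, 0.25], [0.1, 1.0]], centroids)
print(labels)  # ['parietal bone', 'frontal bone']
```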
In some embodiments, after determining the regions corresponding to different feature structures in the rendered image, the processor may control the display to display these regions distinguishably. Optionally, this embodiment may be implemented on its own, that is, the processor only controls the display to distinguish the regions corresponding to different feature structures so that the user can tell them apart, without displaying the corresponding identifiers; it may also be combined with the embodiments described above, that is, the processor may control the display to display the identifiers of the feature structures corresponding to different regions while displaying those regions distinguishably.
Illustratively, displaying the regions corresponding to different feature structures distinguishably includes displaying them in different forms in the rendered image. For example, the processor may control the display to render the regions corresponding to different feature structures in different colors, transparencies, brightnesses, and the like. The processor may also control the display to draw the boundaries of the different regions in different forms to distinguish them.
In some embodiments, when the processor receives a selection instruction for a region corresponding to a feature structure in the rendered image, it controls the display to highlight the selected region in the rendered image. Forms of highlighting include, but are not limited to, brightening, blinking, magnifying, and the like. The selection instruction may be received via the identifiers corresponding to the different feature structures, that is, when the user selects the identifier of a feature structure, the region corresponding to that identifier is highlighted; it may also be received via the rendered image itself, that is, when the user is detected selecting a position in the rendered image, the region to which that position belongs is highlighted. Alternatively, the selection instruction may be received via a control, outside the rendered image, that has a mapping relationship with the different regions of the rendered image.
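The position-based variant of the selection instruction can be sketched with a label mask that maps each rendered-image pixel to its region, so a selection position immediately yields the region to highlight (the label values are illustrative):

```python
import numpy as np

def region_at(label_mask, y, x):
    """Map a selection position in the rendered image to the region label
    under it, so that region can be highlighted."""
    return int(label_mask[y, x])

# Toy label mask: each pixel stores the id of the region it belongs to.
label_mask = np.zeros((4, 4), dtype=int)
label_mask[0:2, 0:2] = 1          # region 1 occupies the top-left corner
label_mask[2:4, 2:4] = 2          # region 2 the bottom-right corner

print(region_at(label_mask, 1, 1))  # 1
print(region_at(label_mask, 3, 3))  # 2
highlight = (label_mask == region_at(label_mask, 1, 1))  # pixels to highlight
```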
In summary, the ultrasound imaging method 200 of the fetal skull according to the embodiment of the present application can realize automatic imaging of the fetal skull, greatly reduce the manual operations of doctors, and improve the efficiency and accuracy of fetal skull ultrasound examination.
The embodiment of the application also provides an ultrasonic imaging system for implementing the ultrasonic imaging method 200 of the fetal skull. The ultrasound imaging system includes an ultrasound probe, a transmit circuit, a receive circuit, a processor, and a display. The transmitting circuit is used for exciting the ultrasonic probe to transmit ultrasonic waves to the cranium of the fetus to be tested; the receiving circuit is used for controlling the ultrasonic probe to receive the ultrasonic wave echo so as to obtain an ultrasonic echo signal; the processor is configured to perform the steps of the ultrasound imaging method 200 of the fetal skull as described above, and specifically includes: the processor controls the ultrasonic probe to emit ultrasonic waves to the cranium of the fetus to be tested and receives the echo of the ultrasonic waves so as to obtain echo signals of the ultrasonic waves; the processor obtains three-dimensional ultrasonic data of the cranium of the fetus to be tested based on the echo signals of the ultrasonic waves; the processor determines a target position based on the three-dimensional ultrasonic data, wherein the target position is a position for enabling a skull region in the three-dimensional ultrasonic data to be under a rendering angle; the processor rotates the three-dimensional ultrasound data to the target position based on the target position; the processor determining a skull region in the three-dimensional ultrasound data; and the processor renders the skull region in the rotated three-dimensional ultrasonic data to obtain a rendered image, and controls a display to display the rendered image.
Referring back to fig. 1, the ultrasound imaging system may be implemented as the ultrasound imaging system 100 shown in fig. 1, the ultrasound imaging system 100 may include an ultrasound probe 110, a transmitting circuit 112, a receiving circuit 114, a processor 116, and a display 118, and optionally, the ultrasound imaging system 100 may further include a transmit/receive selection switch 120 and a beam forming module 122, where the transmitting circuit 112 and the receiving circuit 114 may be connected to the ultrasound probe 110 through the transmit/receive selection switch 120, and the related descriptions of the respective components may be referred to the related descriptions above and are not repeated herein.
Only the main functions of the components of the ultrasound imaging system are described above; for further details, see the description of the ultrasound imaging method 200 of the fetal skull. The ultrasound imaging system can image the fetal skull automatically, greatly reducing the manual operations of doctors and improving the efficiency and accuracy of fetal skull ultrasound examination.
Next, an ultrasound imaging method of a fetal skull according to another embodiment of the present application will be described with reference to fig. 5. Fig. 5 is a schematic flow chart of an ultrasound imaging method 500 of a fetal skull in accordance with an embodiment of the present application.
As shown in fig. 5, the ultrasound imaging method 500 of the fetal skull comprises the steps of:
In step S510, the processor obtains three-dimensional ultrasound data of the fetal cranium to be tested;
In step S520, the processor determines a target orientation based on the three-dimensional ultrasound data;
In step S530, the processor rotates the three-dimensional ultrasound data to the target orientation;
In step S540, the processor determines a skull region in the three-dimensional ultrasound data;
In step S550, the processor renders the skull region in the rotated three-dimensional ultrasound data to obtain a rendered image, and controls a display to display the rendered image.
The main difference between the ultrasound imaging method 500 of the fetal skull of the present embodiment and the ultrasound imaging method 200 above is that the method 500 does not limit the specific manner in which the processor obtains the three-dimensional ultrasound data of the fetal cranium to be tested. For example, the processor may perform a three-dimensional ultrasound scan in real time, using a method as described in the ultrasound imaging method 200, to obtain the three-dimensional ultrasound data of the fetal cranium to be tested; alternatively, the processor may extract pre-acquired three-dimensional ultrasound data of the fetal cranium to be tested from the memory, and so on. Otherwise, the ultrasound imaging method 500 of the fetal skull according to the embodiment of the present application is substantially similar to the ultrasound imaging method 200 described with reference to fig. 2, and for brevity the common details are not repeated here.
The embodiment of the application also provides an ultrasonic imaging system for implementing the ultrasonic imaging method 500 of the fetal skull. The ultrasound imaging system includes an ultrasound probe, a transmit circuit, a receive circuit, a processor, and a display. The transmitting circuit is used for exciting the ultrasonic probe to transmit ultrasonic waves to the cranium of the fetus to be tested; the receiving circuit is used for controlling the ultrasonic probe to receive the ultrasonic wave echo so as to obtain an ultrasonic echo signal; the processor is configured to perform the steps of the fetal skull ultrasound imaging method 500 as described above, and specifically includes: the processor obtains three-dimensional ultrasonic data of the cranium of the fetus to be tested; the processor determining a target position based on the three-dimensional ultrasound data; the processor rotates the three-dimensional ultrasound data to a target orientation; the processor determines a skull region in the three-dimensional ultrasound data; the processor renders the skull region in the rotated three-dimensional ultrasonic data to obtain a rendered image, and controls the display to display the rendered image; the display is used for displaying the rendered image obtained by the processor.
Referring back to fig. 1, the ultrasound imaging system may be implemented as the ultrasound imaging system 100 shown in fig. 1, the ultrasound imaging system 100 may include an ultrasound probe 110, a transmitting circuit 112, a receiving circuit 114, a processor 116, and a display 118, and optionally, the ultrasound imaging system 100 may further include a transmit/receive selection switch 120 and a beam forming module 122, where the transmitting circuit 112 and the receiving circuit 114 may be connected to the ultrasound probe 110 through the transmit/receive selection switch 120, and the related descriptions of the respective components may be referred to the related descriptions above and will not be repeated herein.
Only the main functions of the components of the ultrasound imaging system are described above, see the relevant description above for more details. The ultrasonic imaging method 500 and the ultrasonic imaging system for the fetal skull can automatically image the fetal skull, greatly reduce manual operation of doctors, and improve the efficiency and the accuracy of ultrasonic inspection of the fetal skull.
Next, an ultrasound imaging method according to another embodiment of the present application will be described with reference to fig. 6. Fig. 6 is a schematic flow chart of an ultrasound imaging method 600 of a fetal skull in accordance with an embodiment of the present application.
As shown in fig. 6, an ultrasound imaging method 600 of one embodiment of the present application includes the steps of:
in step S610, three-dimensional ultrasound data of the cranium of the fetus to be tested is obtained;
in step S620, a skull region is determined in the three-dimensional ultrasound data based on the skull image features of the fetus;
in step S630, rendering the skull region to obtain a rendered image;
in step S640, the rendered image is displayed.
The main difference between the ultrasound imaging method 600 of the fetal skull according to the embodiment of the present application and the ultrasound imaging methods 200 and 500 described with reference to fig. 2 and fig. 5 is that the ultrasound imaging method 600 does not rotate the three-dimensional ultrasound data. In some cases, rendered imaging of the skull can be achieved without rotating the three-dimensional ultrasound data. Alternatively, the orientation of the three-dimensional ultrasound data may be adjusted by the user adjusting the probe angle during the ultrasound scan, or by the user rotating the data manually after it is acquired.
In one embodiment, determining the skull region in the three-dimensional ultrasound data comprises: segmenting the boundary of the skull region in the three-dimensional ultrasound data and taking the region enclosed by that boundary as the skull region; alternatively, detecting a region of interest containing the skull region in the three-dimensional ultrasound data and determining the skull region within that region of interest. For specific details of this approach, refer to the relevant description in step S250 of the ultrasound imaging method 200 of the fetal skull.
In another embodiment, determining the skull region in the three-dimensional ultrasound data comprises: extracting a target two-dimensional section containing the skull region from the three-dimensional ultrasound data; drawing a curved multi-planar rendering reference line along the skull region on the target two-dimensional section, and determining a two-dimensional region of interest in the target two-dimensional section according to the reference line; selecting a three-dimensional region by extending the two-dimensional region of interest in the direction perpendicular to the target two-dimensional section; and determining the skull region in the selected three-dimensional region. For specific details of this approach, refer to the relevant description in step S250 of the ultrasound imaging method 200 of the fetal skull.
In step S640, identifiers characterizing the feature structures corresponding to different regions in the rendered image may also be displayed. The displayed identifiers may be the names of the feature structures corresponding to the different regions, or symbols or graphics capable of characterizing the different feature structures.
For example, the identifiers of the feature structures corresponding to different regions in the rendered image may be generated according to the correspondence between a feature structure map of the fetal skull and the rendered image. The identifier of the feature structure corresponding to each region is displayed in the feature structure map, and the identifier of the same feature structure can be generated at the corresponding position in the rendered image according to the identifiers in the map.
Alternatively, the regions of different feature structures in the rendered image may be determined, and the identifier of the feature structure corresponding to each region generated from those regions. For example, image features of the rendered image may be extracted and classified to divide the rendered image into regions corresponding to different feature structures. Specifically, feature extraction and classification may be performed using a conventional object detection method or a machine learning method to determine the regions corresponding to different feature structures in the rendered image.
After the regions corresponding to different feature structures in the rendered image are determined, they may be displayed distinguishably. For example, the regions may be displayed in different colors, transparencies, brightnesses, and the like. Alternatively, the boundaries of the different regions may be drawn in different forms to distinguish them.
In some embodiments, when a selection instruction is received for a region corresponding to a feature structure in the rendered image, the selected region may be highlighted. Forms of highlighting include, but are not limited to, brightening, blinking, magnifying, and the like. The selection instruction may be received via the identifiers corresponding to the different feature structures, that is, when the user selects the identifier of a feature structure, the region corresponding to that identifier is highlighted; it may also be received via the rendered image itself, that is, when the user is detected selecting a position in the rendered image, the region to which that position belongs is highlighted. Alternatively, the selection instruction may be received via a control, outside the rendered image, that has a mapping relationship with the different regions of the rendered image.
The ultrasonic imaging method 600 of the embodiment of the application can realize automatic imaging of the fetal skull, greatly reduce manual operation of doctors and improve the efficiency and accuracy of fetal skull ultrasonic examination.
The embodiment of the application also provides an ultrasound imaging system for implementing the ultrasound imaging method 600. The ultrasound imaging system includes an ultrasound probe, a transmitting circuit, a receiving circuit, a processor, and a display. The transmitting circuit is used for exciting the ultrasound probe to transmit ultrasound waves to the fetal cranium to be tested; the receiving circuit is used for controlling the ultrasound probe to receive the echoes of the ultrasound waves to obtain ultrasound echo signals; the processor is configured to perform the steps of the ultrasound imaging method 600 as described above, specifically including: obtaining three-dimensional ultrasound data of the fetal cranium to be tested; determining a skull region in the three-dimensional ultrasound data based on skull image features of the fetus; rendering the skull region to obtain a rendered image; and controlling the display to display the rendered image.
Referring back to fig. 1, the ultrasound imaging system may be implemented as the ultrasound imaging system 100 shown in fig. 1, the ultrasound imaging system 100 may include an ultrasound probe 110, a transmitting circuit 112, a receiving circuit 114, a processor 116, and a display 118, and optionally, the ultrasound imaging system 100 may further include a transmit/receive selection switch 120 and a beam forming module 122, where the transmitting circuit 112 and the receiving circuit 114 may be connected to the ultrasound probe 110 through the transmit/receive selection switch 120, and the related descriptions of the respective components may be referred to the related descriptions above and are not repeated herein.
Only the main functions of the various components of the ultrasound imaging system 100 are described above; for further details, see the relevant description above.
Furthermore, according to an embodiment of the present application, there is also provided a computer storage medium on which program instructions are stored. The program instructions, when executed by a computer or a processor, are used to perform the respective steps of the fetal skull ultrasound imaging method 200, the fetal skull ultrasound imaging method 500, or the fetal skull ultrasound imaging method 600 of the embodiments of the present application. The storage medium may include, for example, a memory card of a smart phone, a memory component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the foregoing storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
Furthermore, according to an embodiment of the present application, there is also provided a computer program, which may be stored on a cloud or local storage medium. The computer program, when executed by a computer or processor, is used to perform the respective steps of the ultrasound imaging method of a fetal skull of an embodiment of the present application.
Based on the above description, the ultrasonic imaging method and the ultrasonic imaging system of the fetal skull according to the embodiments of the present application can realize automatic imaging of the fetal skull, greatly reduce manual operation of doctors, and improve efficiency and accuracy of fetal skull ultrasonic inspection.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that, in the description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof in order to streamline the application and aid in understanding one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are intended to be within the scope of the present application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules according to embodiments of the present application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application may also be embodied as device programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The foregoing is merely illustrative of specific embodiments of the present application and the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes or substitutions are intended to be covered by the scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (23)

  1. A method of ultrasound imaging of a fetal skull, the method comprising:
    The processor controls the ultrasonic probe to emit ultrasonic waves to the cranium of the fetus to be tested and receives the echo of the ultrasonic waves so as to obtain echo signals of the ultrasonic waves;
    the processor obtains three-dimensional ultrasonic data of the cranium of the fetus to be tested based on the echo signals of the ultrasonic waves;
    the processor determines a target position based on the three-dimensional ultrasonic data, wherein the target position is a position at which a skull region in the three-dimensional ultrasonic data faces a rendering direction;
    the processor rotates the three-dimensional ultrasound data to the target position based on the target position;
    the processor determining a skull region in the three-dimensional ultrasound data;
    and the processor renders the skull region in the rotated three-dimensional ultrasonic data to obtain a rendered image, and controls a display to display the rendered image.
  2. The method as recited in claim 1, further comprising: the processor controls the display to display an identification characterizing a feature structure corresponding to different regions in the rendered image.
  3. The method as recited in claim 2, further comprising:
    the processor generates identifications of the feature structures corresponding to different regions in the rendered image according to the correspondence between a feature structure diagram of the fetal skull and the rendered image; or
    The processor determines areas of different feature structures in the rendered image, and generates an identification of the feature structure corresponding to the area of each feature structure according to the areas of the different feature structures.
  4. The method as recited in claim 1, further comprising: and the processor determines the areas corresponding to different characteristic structures in the rendered image and controls the display to display the areas corresponding to the different characteristic structures in a distinguishing mode.
  5. The method of claim 4, wherein the distinguishing the regions corresponding to the different features comprises: and displaying the areas corresponding to the different feature structures as different colors in the rendered image.
  6. The method of any of claims 3-5, wherein determining regions of the rendered image corresponding to different features comprises:
    extracting image features of the rendered image;
    and classifying the image features to divide areas corresponding to different feature structures in the rendered image.
  7. The method as recited in claim 4, further comprising: and when the processor receives a selection instruction of the area corresponding to different characteristic structures in the rendered image, controlling the display to highlight the area selected by the selection instruction in the rendered image.
  8. The method of claim 1, wherein determining a target position based on the three-dimensional ultrasound data comprises:
    detecting a region of a target feature in the three-dimensional ultrasound data, the target feature comprising at least one of: midline of the brain, thalamus, corpus callosum, eyeball, brainstem, cerebellum, fontanelle, cranial suture, and skull;
    and determining the angle by which the three-dimensional ultrasonic data is to be rotated according to the position of the region of the at least one target feature, or determining the angle by which the three-dimensional ultrasonic data is to be rotated according to the relative positional relationship between the regions of at least two target features.
  9. The method of claim 1, wherein determining a target position based on the three-dimensional ultrasound data comprises:
    regressing, by using a trained machine learning model, the angle by which the three-dimensional ultrasonic data is to be rotated.
  10. The method of claim 1, wherein determining a skull region in the three-dimensional ultrasound data comprises:
    dividing the boundary of a skull region in the three-dimensional ultrasonic data, and taking a region formed by the boundary of the skull region as the skull region; or
    A region of interest comprising a skull region is detected in the three-dimensional ultrasound data, the skull region being determined in the region of interest.
  11. The method of claim 1, wherein determining a skull region in the three-dimensional ultrasound data comprises:
    extracting a target two-dimensional section containing a skull region from the three-dimensional ultrasonic data;
    drawing a curved multi-planar rendering reference line along the skull region on the target two-dimensional section, and determining a two-dimensional region of interest in the target two-dimensional section according to the curved multi-planar rendering reference line;
    selecting, in a direction perpendicular to the target two-dimensional section, a three-dimensional region corresponding to the two-dimensional region of interest;
    determining the skull region in the selected three-dimensional region.
  12. The method of claim 11, wherein the extracting a target two-dimensional slice containing a skull region in the three-dimensional ultrasound data comprises:
    extracting at least one two-dimensional section from the three-dimensional ultrasonic data at random, or extracting at least one two-dimensional section along a preset direction;
    a two-dimensional section including the skull region is determined in the at least one two-dimensional section as the target two-dimensional section.
  13. A method of ultrasound imaging of a fetal skull, the method comprising:
    the processor obtains three-dimensional ultrasonic data of the cranium of the fetus to be tested;
    the processor determining a target position based on the three-dimensional ultrasound data;
    the processor rotates the three-dimensional ultrasound data to the target position based on the target position;
    the processor determining a skull region in the three-dimensional ultrasound data;
    and the processor renders the skull region in the rotated three-dimensional ultrasonic data to obtain a rendered image, and controls a display to display the rendered image.
  14. A method of ultrasound imaging, the method comprising:
    acquiring three-dimensional ultrasonic data of the cranium of a fetus to be tested;
    determining a skull region in the three-dimensional ultrasound data based on skull image features of the fetus;
    rendering the skull region to obtain a rendered image;
    and displaying the rendered image.
  15. The method as recited in claim 14, further comprising:
    and displaying the identification representing the characteristic structure corresponding to different areas in the rendered image.
  16. The method as recited in claim 15, further comprising:
    generating identifications of the feature structures corresponding to different regions in the rendered image according to the correspondence between a feature structure diagram of the fetal skull and the rendered image; or
    and determining areas of different feature structures in the rendered image, and generating the identification of the feature structure corresponding to the area of each feature structure according to the areas of the different feature structures.
  17. The method as recited in claim 14, further comprising:
    and determining the areas corresponding to different characteristic structures in the rendered image, and distinguishing and displaying the areas corresponding to the different characteristic structures.
  18. The method as recited in claim 17, further comprising:
    when a selection instruction of the areas corresponding to different feature structures in the rendered image is received, highlighting the areas selected by the selection instruction in the rendered image.
  19. The method of claim 14, wherein determining a skull region in the three-dimensional ultrasound data comprises:
    dividing the boundary of a skull region in the three-dimensional ultrasonic data, and taking a region formed by the boundary of the skull region as the skull region; or
    a region of interest comprising a skull region is detected in the three-dimensional ultrasound data, the skull region being determined in the region of interest.
  20. The method of claim 14, wherein determining a skull region in the three-dimensional ultrasound data comprises:
    extracting a target two-dimensional section containing a skull region from the three-dimensional ultrasonic data;
    drawing a curved multi-planar rendering reference line along the skull region on the target two-dimensional section, and determining a two-dimensional region of interest in the target two-dimensional section according to the curved multi-planar rendering reference line;
    selecting, in a direction perpendicular to the target two-dimensional section, a three-dimensional region corresponding to the two-dimensional region of interest;
    determining the skull region in the selected three-dimensional region.
  21. An ultrasound imaging system, comprising:
    an ultrasonic probe;
    the transmitting circuit is used for exciting the ultrasonic probe to transmit ultrasonic waves to the cranium of the fetus to be tested;
    a receiving circuit for controlling the ultrasonic probe to receive the echo of the ultrasonic wave so as to obtain an echo signal of the ultrasonic wave;
    a processor for performing the steps of the ultrasound imaging method of a fetal skull of any of claims 1-12;
    and the display is used for displaying the rendered image obtained by the processor.
  22. An ultrasound imaging system, comprising:
    An ultrasonic probe;
    the transmitting circuit is used for exciting the ultrasonic probe to transmit ultrasonic waves to the cranium of the fetus to be tested;
    a receiving circuit for controlling the ultrasonic probe to receive the echo of the ultrasonic wave so as to obtain an echo signal of the ultrasonic wave;
    a processor for performing the steps of the ultrasound imaging method of the fetal skull of claim 13;
    and the display is used for displaying the rendered image obtained by the processor.
  23. An ultrasound imaging system, comprising:
    an ultrasonic probe;
    the transmitting circuit is used for exciting the ultrasonic probe to transmit ultrasonic waves to the cranium of the fetus to be tested;
    a receiving circuit for controlling the ultrasonic probe to receive the echo of the ultrasonic wave so as to obtain an echo signal of the ultrasonic wave;
    a processor for performing the steps of the ultrasound imaging method of a fetal skull of any of claims 14-20;
    and the display is used for displaying the rendered image obtained by the processor.
CN202080107457.6A 2020-12-25 2020-12-25 Ultrasonic imaging method and ultrasonic imaging system for fetal skull Pending CN116568223A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/139558 WO2022134049A1 (en) 2020-12-25 2020-12-25 Ultrasonic imaging method and ultrasonic imaging system for fetal skull

Publications (1)

Publication Number Publication Date
CN116568223A true CN116568223A (en) 2023-08-08

Family

Family ID: 82157275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080107457.6A Pending CN116568223A (en) 2020-12-25 2020-12-25 Ultrasonic imaging method and ultrasonic imaging system for fetal skull

Country Status (2)

Country Link
CN (1) CN116568223A (en)
WO (1) WO2022134049A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070249935A1 (en) * 2006-04-20 2007-10-25 General Electric Company System and method for automatically obtaining ultrasound image planes based on patient specific information
JP6532893B2 (en) * 2014-05-09 2019-06-19 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Imaging system and method for positioning a 3D ultrasound volume in a desired direction
CN104545919B (en) * 2014-12-31 2017-05-10 中国科学院深圳先进技术研究院 Ultrasonic transcranial focusing method
WO2016176863A1 (en) * 2015-05-07 2016-11-10 深圳迈瑞生物医疗电子股份有限公司 Three-dimensional ultrasound imaging method and device
KR20200094465A (en) * 2019-01-30 2020-08-07 삼성메디슨 주식회사 Ultrasound imaging apparatus and method for ultrasound imaging

Also Published As

Publication number Publication date
WO2022134049A1 (en) 2022-06-30

Similar Documents

Publication Publication Date Title
CN107798682B (en) Image segmentation system, method, apparatus and computer-readable storage medium
US9277902B2 (en) Method and system for lesion detection in ultrasound images
US9773325B2 (en) Medical imaging data processing apparatus and method
CN112672691B (en) Ultrasonic imaging method and equipment
KR20130023735A (en) Method and apparatus for generating organ medel image
US20230210501A1 (en) Ultrasound contrast imaging method and device and storage medium
WO2024093911A1 (en) Ultrasonic imaging method and ultrasonic device
CN112998755A (en) Method for automatic measurement of anatomical structures and ultrasound imaging system
CN116171131A (en) Ultrasonic imaging method and ultrasonic imaging system for early pregnancy fetus
US11484286B2 (en) Ultrasound evaluation of anatomical features
CN116568223A (en) Ultrasonic imaging method and ultrasonic imaging system for fetal skull
CN111383323B (en) Ultrasonic imaging method and system and ultrasonic image processing method and system
CN113229850A (en) Ultrasonic pelvic floor imaging method and ultrasonic imaging system
CN115813433A (en) Follicle measuring method based on two-dimensional ultrasonic imaging and ultrasonic imaging system
CN114699106A (en) Ultrasonic image processing method and equipment
CN116322521A (en) Ultrasonic imaging method and ultrasonic imaging system for midnight pregnancy fetus
CN111403007A (en) Ultrasonic imaging optimization method, ultrasonic imaging system and computer-readable storage medium
CN114503166A (en) Method and system for measuring three-dimensional volume data, medical instrument, and storage medium
CN113974688B (en) Ultrasonic imaging method and ultrasonic imaging system
CN116327237A (en) Ultrasonic imaging system and method, ultrasonic image processing system and method
CN115778435A (en) Ultrasonic imaging method and ultrasonic imaging system for fetal face
CN114642451A (en) Ultrasonic imaging device
CN117982169A (en) Method for determining endometrium thickness and ultrasonic equipment
CN116211350A (en) Ultrasound contrast imaging method and ultrasound imaging system
CN118056538A (en) Ultrasonic image display method and device, ultrasonic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination