CN111368586A - Ultrasonic imaging method and system


Info

Publication number
CN111368586A
Authority
CN
China
Prior art keywords: sagittal plane, determining, head, dimensional, plane
Legal status: Granted
Application number: CN201811591966.1A
Other languages: Chinese (zh)
Other versions: CN111368586B (en)
Inventor
朱磊
邹耀贤
林穆清
何琨
Current Assignee: Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee: Shenzhen Mindray Bio Medical Electronics Co Ltd
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority to CN201811591966.1A
Publication of CN111368586A
Application granted
Publication of CN111368586B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B8/0808 Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of the brain
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5207 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Abstract

The application discloses an ultrasonic imaging method and system. The ultrasonic imaging method comprises the following steps: acquiring three-dimensional volume data of the head of a subject; determining a three-dimensional target contour of the subject's head from the three-dimensional volume data; and determining the median sagittal plane of the subject's head from the three-dimensional target contour. According to the embodiments of the application, the three-dimensional target contour of the subject's head is detected from the acquired three-dimensional volume data, and the median sagittal plane of the subject's head is detected from the three-dimensional target contour, so that a user can make a diagnosis directly from the median sagittal plane, which shortens diagnosis time and improves the accuracy of the diagnosis result.

Description

Ultrasonic imaging method and system
Technical Field
The application relates to the technical field of medical ultrasonic imaging, in particular to an ultrasonic imaging method and system.
Background
Ultrasonic instruments are commonly used by doctors to observe the internal tissue structures of the human body: the doctor places an ultrasonic probe on the skin surface over a body part to obtain an ultrasound image of that part. Because it is safe, convenient, non-invasive, and inexpensive, ultrasound has become one of the main aids in medical diagnosis.
Obstetrics is one of the fields in which ultrasonic diagnosis is most widely applied. In this field, ultrasound avoids the effects that X-rays and similar radiation would have on the mother and fetus, and its application value is clearly superior to that of other imaging examination equipment. Ultrasound can not only observe and measure fetal morphology, but also obtain various kinds of physiological and pathological information, such as on fetal respiration and the urinary system, so as to evaluate the health and development of the fetus.
In ultrasonic diagnosis, it is usually necessary to obtain standard sections of various tissue structures to determine clinically whether these tissue structures are abnormal. In fetal nervous system examination, the fetal craniocerebral median sagittal plane is a very important section and is a key section for diagnosing corpus callosum abnormalities and Dandy-Walker syndrome. However, the fetal median sagittal plane is difficult to obtain under conventional two-dimensional ultrasound, and even when it can be obtained, the examination takes a long time. Many doctors can therefore only make an indirect, non-visual diagnosis through other sections (such as the cerebellar section or the thalamic section), and misdiagnosis and missed diagnosis easily occur.
In recent years, with the widespread clinical application of three-dimensional ultrasound, the image resolution of three-dimensional ultrasound has also been improving. The volume data acquired by three-dimensional ultrasound contains all the standard sections of a given tissue structure that a doctor requires, including the median sagittal plane, cerebellar section, thalamic section, lateral ventricle section, coronal planes, lateral sagittal planes, and the like. However, the doctor needs an understanding of three-dimensional space to adjust the 3D volume data to the required standard section through geometric transformations such as manual rotation and translation, and to perform the corresponding measurement or diagnosis on that section. Most sonographers, however, have little training in three-dimensional spatial manipulation and find it difficult, or at least very time-consuming, to manually adjust the volume data to the required section. This consumes a great deal of time, and the standardness of the resulting section varies from person to person, so that measurement or diagnosis results may vary as well.
Disclosure of Invention
The technical problem to be solved by the present application is to provide an ultrasound imaging method and system capable of performing three-dimensional imaging of the fetal brain and displaying the median sagittal section of the fetal brain.
The first aspect of the present application provides an ultrasound imaging method, comprising:
acquiring three-dimensional volume data of the head of a subject;
determining a three-dimensional target contour of the subject's head from the three-dimensional volume data of the subject's head;
determining the median sagittal plane of the subject's head from the three-dimensional target contour.
A second aspect of the present application provides an ultrasound imaging method comprising:
acquiring three-dimensional volume data of the head of a subject;
determining the positions of at least one pair of symmetrical tissue structures in the subject's head from the three-dimensional volume data of the subject's head;
determining the median sagittal plane of the subject's head from the positions of the at least one pair of symmetrical tissue structures.
An ultrasound imaging system provided in a third aspect of the present application includes:
a probe for acquiring three-dimensional volume data of the head of a subject;
a processor connected to the probe for determining a three-dimensional target contour of the subject's head from the three-dimensional volume data of the subject's head, and for determining the median sagittal plane of the subject's head from the three-dimensional target contour.
An ultrasound imaging system provided in a fourth aspect of the present application includes:
a probe for acquiring three-dimensional volume data of the head of a subject;
a processor connected to the probe for determining the positions of at least one pair of symmetrical tissue structures in the subject's head from the three-dimensional volume data of the subject's head, and for determining the median sagittal plane of the subject's head from the positions of the at least one pair of symmetrical tissue structures.
Compared with the prior art, the embodiments of the present application provide an ultrasonic imaging method and system in which the three-dimensional target contour of the subject's head, or the positions of at least one pair of symmetrical tissue structures in the subject's head, are detected from the acquired three-dimensional volume data, and the median sagittal plane of the subject's head is then detected from the three-dimensional target contour or from the positions of the pair of symmetrical tissue structures. A user can thus make a diagnosis directly from the median sagittal plane, which reduces diagnosis time, improves the accuracy of diagnosis results, and helps improve the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a block diagram schematic diagram of an ultrasound imaging system in one embodiment of the present application.
Fig. 2 is a flow chart of steps of an ultrasound imaging method in one embodiment of the present application.
Fig. 3 is a schematic diagram of volume data in an embodiment of the present application.
FIG. 4 is a schematic view of the position of the median sagittal plane in an embodiment of the present application.
FIG. 5 is a schematic sectional view taken along the sagittal plane S in FIG. 4.
FIG. 6 is a flowchart of the steps of one embodiment of step 202 in FIG. 2.
Fig. 7 is a schematic cross-sectional view of a two-dimensional cut at plane L in fig. 5.
FIG. 8 is a flow chart of the steps of a three-dimensional object contour determination method in one embodiment of the present application.
FIG. 9 is a flow chart of the steps of a three-dimensional object contour determination method in one embodiment of the present application.
FIG. 10 is a flow chart of steps in one embodiment of step 204 of FIG. 2.
FIG. 11 is a flow chart of steps in one embodiment of step 204 of FIG. 2.
FIG. 12 is a flow chart of steps in one embodiment of step 204 of FIG. 2.
FIG. 13 is a schematic illustration of a predetermined number of sagittal plane locations in an embodiment of the present application.
FIG. 14 is a flow chart of the steps of an ultrasound imaging method in one embodiment of the present application.
FIG. 15 is a flowchart of steps in one embodiment of step 506 in FIG. 14.
FIG. 16 is a schematic view of mid-sagittal plane detection in one embodiment of the present application.
Fig. 17 is a flow chart of steps of an ultrasound imaging method in one embodiment of the present application.
FIG. 18 is a flowchart of the steps of one embodiment of step 602 in FIG. 17.
FIG. 19 is a flowchart of the steps of one embodiment of step 602 in FIG. 17.
FIG. 20 is a flowchart of the steps of one embodiment of step 602 in FIG. 17.
FIG. 21 is a flowchart of the steps in one embodiment of step 604 in FIG. 17.
Fig. 22 is a schematic diagram of alignment positions in an embodiment of the present application.
Fig. 23 is a schematic view of the center point of the position connecting line in fig. 22.
FIG. 24 is a flowchart of steps in one embodiment of step 604 in FIG. 17.
FIG. 25 is a schematic representation of a midsagittal plane of a symmetric tissue structure in an embodiment of the present application.
FIG. 26 is a flowchart of steps in one embodiment of step 604 in FIG. 17.
FIG. 27 is a schematic representation of a midsagittal plane of a symmetric tissue structure in an embodiment of the present application.
FIG. 28 is a flowchart of steps in one embodiment of step 604 in FIG. 17.
FIG. 29 is a schematic view of a midsagittal plane of a symmetric tissue structure in an embodiment of the present application.
FIG. 30 is a flow chart of the steps of an ultrasound imaging method in one embodiment of the present application.
Fig. 31 is a block diagram schematic of an ultrasound imaging system in an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. The described embodiments are only some embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
It should be noted that for simplicity of description, the following method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts, as some steps may occur in other orders or concurrently depending on the application.
Referring to fig. 1, a block diagram of an ultrasound imaging system 10 in one embodiment of the present application is shown. The ultrasonic imaging system 10 includes a probe 100, a transmitting circuit 102 connected to the probe 100, a receiving circuit 104 connected to the probe 100, a beam forming module 106, a signal processing module 108, a three-dimensional imaging module 110, and a display 112, wherein the receiving circuit 104, the beam forming module 106, the signal processing module 108, the three-dimensional imaging module 110, and the display 112 are electrically connected in sequence.
In this embodiment, the ultrasound imaging system 10 acquires three-dimensional volume data of a subject and performs a detection operation on the acquired data to obtain the median sagittal plane of the fetal head; the median sagittal plane may also be displayed so that a doctor can perform measurement or diagnosis based on it.
Referring to fig. 2, a flowchart of steps of an ultrasound imaging method according to an embodiment of the present application is shown. The ultrasonic imaging method comprises the following steps:
step 200, three-dimensional volume data of the head of the subject is acquired.
In this embodiment, the transmitting circuit 102 sends a set of delay-focused pulses to the probe 100, and the probe 100 transmits ultrasonic waves to a tissue structure of the subject (e.g., the fetal head). After a certain delay, the probe 100 receives ultrasonic echoes carrying tissue information reflected from the subject's tissue structure and converts the echoes back into electrical signals. The receiving circuit 104 receives these electrical signals, processes them into ultrasonic echo signals, and transmits them to the beam forming module 106. The beam forming module 106 performs processing such as focusing delay, weighting, and channel summation on the ultrasonic echo signals and sends the processed signals to the signal processing module 108. The echo signals processed by the signal processing module 108 are sent to the three-dimensional imaging module 110, which reconstructs them into visual information such as a three-dimensional image of the subject's tissue structure; this visual information can be displayed on the display 112.
In this embodiment, after the probe 100 completes one scanning cycle, the signal processed by the signal processing module 108 may be a volume of three-dimensional data in polar coordinates. The three-dimensional imaging module 110 reconstructs this polar-coordinate volume data into three-dimensional volume data in rectangular coordinates, so that the ultrasound imaging system 10 obtains a volume of three-dimensional data in rectangular coordinates. The three-dimensional imaging module 110 may also process the rectangular-coordinate volume data with a visualization algorithm to generate visual information and display it on the display 112.
Referring to fig. 3, a schematic diagram of volume data in an embodiment of the present application is shown. In the present embodiment, a volume of volume data may be formed by F image frames of size W × H, each image frame containing a plurality of pixel points, where W is the width of the image frame and H is its height; the width direction of the image frame is the positive X direction, the height direction is the positive Y direction, and the direction along which the image frames are stacked is the positive Z direction.
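As an illustrative, non-limiting sketch of this layout (assuming the volume is held in a NumPy array; the axis ordering and sizes below are assumptions for illustration, not part of the disclosure):

```python
import numpy as np

# Hypothetical volume: F image frames, each H pixels high and W pixels wide.
W, H, F = 200, 160, 120
volume = np.zeros((F, H, W), dtype=np.uint8)  # axes: (Z, Y, X)

# A single voxel at x (width direction), y (height direction), z (frame index):
x, y, z = 50, 40, 30
voxel = volume[z, y, x]

# One image frame, i.e. an X-Y plane at a fixed Z:
frame = volume[z, :, :]  # shape (H, W)
```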
Please refer to FIG. 4, which is a schematic diagram of the position of the median sagittal plane in an embodiment of the present application. In this embodiment, the subject's head 220 may include a plurality of sagittal planes, each being a longitudinal plane that divides the body into left and right portions; the sagittal plane that divides the body into two symmetrical left and right halves is the median sagittal plane. The plane S in FIG. 4 represents the position of the median sagittal plane of the subject's head 220.
Referring also to FIG. 5, a sectional view taken along the sagittal plane S of FIG. 4 is shown. The median sagittal plane of the subject's head 220 contains important information about the fetal corpus callosum, cerebellar vermis, and cavum septi pellucidi, and tissue structures of the fetus such as the cisterna magna, thalamus, and fourth ventricle can also be observed on it. Detecting and displaying the median sagittal section of the fetal head therefore provides a doctor with a great deal of important key information and makes it convenient to observe the condition of the fetus. For this reason, the median sagittal plane of the subject's head needs to be determined when acquiring three-dimensional volume data of the subject's head.
Step 202, determining a three-dimensional target contour of the subject's head from the three-dimensional volume data of the subject's head.
In this embodiment, the three-dimensional volume data obtained after the probe 100 transmits ultrasonic waves to the subject may also include tissue structures such as the subject's trunk, hands, and feet in addition to the head. In order to obtain the median sagittal plane of the subject's head, the three-dimensional target contour of the head can first be determined from the three-dimensional volume data of the subject's head 220, and the median sagittal plane of the subject's head can then be detected from that three-dimensional target contour.
Referring again to fig. 4, the subject's head 220 may include tissue structures such as the skull 221, intracranial tissue structures 232 located within the skull 221, the orbit 230, ear 224, nose 228, mouth 226, and cheek 234. In one embodiment, the three-dimensional target contour of the subject's head 220 may include the region corresponding to the exterior surface of the fetal head 220, i.e., a region including tissue structures such as the skull 221, orbit 230, ear 224, nose 228, mouth 226, and cheek 234 of the subject's head 220. In another embodiment, the three-dimensional target contour of the subject's head 220 may comprise the region corresponding to the intracranial tissue structures 232 of the fetal head 220.
In one embodiment, the three-dimensional volume data corresponding to the subject's head 220 may be represented as corresponding pixel points; therefore, the three-dimensional volume data of the subject's head 220 may be segmented by an image segmentation algorithm to detect the three-dimensional target contour of the head 220. It is to be understood that image segmentation algorithms include, but are not limited to, edge-based segmentation algorithms, region-growing-based segmentation algorithms, model-based segmentation algorithms, and the like. These image segmentation algorithms are common techniques and are not described in detail here.
In one embodiment, the three-dimensional volume data of the subject's head 220 may be displayed on the display 112, and the physician may mark a region of interest containing the three-dimensional target contour in the displayed image of the subject's head. The image segmentation algorithm may then segment the three-dimensional volume data of the subject's head 220 based on the pixels in the marked region of interest, which reduces the amount of data and improves processing speed and accuracy. For example, the user (or physician) may specify the region of interest with one or more of a target frame, point, or line drawn in the image of the subject's head displayed on the display 112, with the three-dimensional imaging module 110 receiving the user's drawing operation.
Referring also to FIG. 6, a flowchart illustrating the steps of one embodiment of step 202 in FIG. 2 is shown, including:
and 310, controlling the three-dimensional data of the head of the tested body to perform slicing operation, and generating a preset number of two-dimensional sections.
In this embodiment, a preset number of two-dimensional sections may be generated by slicing the three-dimensional volume data of the subject's head 220 in different directions or at different positions, including but not limited to slicing in the sagittal and coronal directions. FIG. 7 shows a cross-sectional view of the two-dimensional section at plane L in FIG. 5. A two-dimensional section obtained by slicing the three-dimensional volume data of the subject's head 220 may include tissue structures such as the skull 221 and/or the intracranial tissue structures 232 of the subject's head 220.
In step 312, a two-dimensional target contour in each two-dimensional slice is determined.
In this embodiment, one or more two-dimensional target contours corresponding to the three-dimensional target contour may be segmented using an image segmentation algorithm, where the algorithm includes, but is not limited to, the Snake, Graph Cut, Level Set, and Random Walker algorithms.
For example, for the Level Set segmentation algorithm, a high-dimensional level set function ψ(x, t) can be constructed so as to segment a two-dimensional target contour through surface evolution of the high-dimensional level set. The two-dimensional target contour is set to be the zero level set of the high-dimensional level set function ψ(x, t); that is, the one or more two-dimensional target contours corresponding to the three-dimensional target contour in each two-dimensional section are computed from the zero level set, and the three-dimensional target contour can then be fitted from the determined two-dimensional target contours.
The surface evolution of the high-dimensional level set function ψ(x, t) is governed by:

dψ(x, t)/dt = F |∇ψ(x, t)|, with ψ(x, 0) = ±D

where ∇ is the gradient operator (a linear operator), |∇ψ(x, t)| is the magnitude of the gradient, dψ(x, t)/dt denotes the derivative of ψ(x, t) with respect to t, x is a position vector, t denotes time, and D denotes the distance from x to the initial evolving surface: D takes a positive value when the point x lies outside the initial evolving surface and a negative value when x lies inside it. F denotes the velocity function driving the surface evolution of the high-dimensional level set function ψ(x, t).
Driven by the velocity function F, the high-dimensional level set function ψ(x, t) gradually propagates toward the boundary of the two-dimensional target contour, the surface evolving as it goes, and stops once the boundary is reached, completing the surface evolution. The zero level set of ψ(x, t) is then computed to obtain the two-dimensional target contour Γ(x, t) corresponding to the three-dimensional target contour in the two-dimensional section. The mathematical expression of the velocity function F is:
F = αP(x) + βk(x)

where P(x) is a propagation term, k(x) is a curvature term, and α and β are the weights of the propagation term and the curvature term, respectively; the propagation term is proportional to the intensity of the feature image.
In the present embodiment, the feature image is obtained by mapping the grayscale values within a threshold region specified in the two-dimensional section into the range 0 to 1, and the grayscale values outside the threshold region into the range -1 to 0. The pixels within the threshold range are taken as foreground and the rest as background: if the value of a pixel point in the two-dimensional section is within the threshold range, P(x) takes a positive value; otherwise it takes a negative value. The propagation term P(x) therefore makes the surface expand in the foreground and contract in the background until it converges to the boundary of the target image block. The curvature term k(x) is computed as a mean curvature and controls the smoothness of the evolving surface; the evolving surface here is the zero level set embedded in the high-dimensional level set function, i.e., the two-dimensional target contour surface. Driven by the propagation term and the curvature term, the velocity function makes the contour surface evolve at a roughly constant speed in regions of uniform gray level, slowing down near the boundary until the evolution stops there. After the evolution stops, the evolving surface of the high-dimensional level set has reached the target boundary of the embedded zero level set. The evolved two-dimensional target contour surface can then be obtained by computing the zero level set, whose mathematical expression is:
Γ(x, t) = {x : ψ(x, t) = 0}
thus, from Γ (x, t), one or more target image blocks corresponding to the three-dimensional target contour in each two-dimensional slice may be obtained.
Step 314, fitting the three-dimensional target contour from the two-dimensional target contours of the two-dimensional sections.
In this embodiment, the pixel points contained in each two-dimensional target contour have corresponding three-dimensional coordinate values, so pixel points can be selected from all the two-dimensional target contours and used to fit the three-dimensional target contour, i.e., the three-dimensional target contour of the subject's head 220 can be fitted.
In one embodiment, the three-dimensional target contour of subject's head 220 in step 202 may also be determined from a learning model.
Referring to fig. 8, a flowchart illustrating steps of a method for determining a three-dimensional object contour according to an embodiment of the present application is shown.
Step 320, determining, for each pixel point in the three-dimensional volume data, a pixel point set composed of that pixel point and the other pixel points located in the neighborhood around it.
In this embodiment, the pixel point set may be represented as a set of pixel points within a preset step range centered on the target pixel point, that is, the target pixel point and other pixel points located in the neighborhood around the target pixel point form a pixel point set, where the pixel point set may be a two-dimensional pixel point set or a three-dimensional pixel point set.
For example, when the three-dimensional volume data is sliced, a plurality of two-dimensional slices can be obtained, and each two-dimensional slice comprises a plurality of pixel points. And after a target pixel point in the two-dimensional tangent plane is determined, taking the target pixel point as a center, and taking a set of pixel points in a preset step length range as a two-dimensional pixel point set. For example, when the pixel point of the preset step range is 1, the two-dimensional pixel point set may include a set of pixel points in a 3 × 3 planar area centered on the target pixel point; when the pixel point within the preset step range is 2, the two-dimensional pixel point set comprises a set of pixel points in a 5 × 5 plane area with the target pixel point as the center. In other embodiments, the values of the pixels within the preset step range may be other values, so as to perform corresponding setting according to specific situations.
For three-dimensional volume data, when the preset step range is 1 pixel, the three-dimensional pixel point set may include the set of pixel points in a 3 × 3 × 3 volumetric region centered on the target pixel point; when the preset step range is 2 pixels, the set includes the pixel points in a 5 × 5 × 5 volumetric region centered on the target pixel point. In other embodiments, the preset step range may take other values, to be determined according to the specific situation.
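A minimal sketch of gathering such a three-dimensional pixel point set (assuming a NumPy volume; border handling is omitted for brevity):

```python
import numpy as np

def pixel_point_set_3d(volume, cx, cy, cz, step=1):
    """Return the (2*step+1)**3 cube of voxels centered on (cx, cy, cz).

    step=1 gives a 3 x 3 x 3 set and step=2 a 5 x 5 x 5 set, matching the
    preset step ranges described above. Voxels near the volume border are
    not handled in this sketch.
    """
    return volume[cz - step:cz + step + 1,
                  cy - step:cy + step + 1,
                  cx - step:cx + step + 1]
```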
Step 322, performing feature extraction on each pixel point set to obtain the feature point corresponding to each pixel point set.
When the pixel point sets corresponding to the three-dimensional volume data have been obtained, feature extraction can be performed on each set according to a feature extraction algorithm to obtain the feature point corresponding to each set. In this embodiment, feature extraction algorithms include, but are not limited to, PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), Haar features, neural networks, and texture features. Performing feature extraction on the pixel point sets increases the speed of three-dimensional target contour detection.
Step 324, matching the feature points contained in the current pixel point set against the standard feature points corresponding to the subject's head.
The ultrasound imaging system 10 may store standard feature points corresponding to a standard subject head. In an embodiment, the standard subject head comprises a plurality of pixel points; the ultrasound imaging system 10 may likewise partition them into a plurality of standard pixel point sets and perform feature extraction on these sets to obtain the standard feature points corresponding to each standard set. A pixel point set taken from those corresponding to the three-dimensional volume data of the subject's head 220 then serves as the current pixel point set, and its feature points are matched against the standard feature points of the standard subject head to determine whether the current pixel point set belongs to the three-dimensional target contour.
Step 326, when the current pixel point set matches the standard feature points, determining that the pixel points it contains belong to the three-dimensional target contour.
In an embodiment, when the similarity between the feature points of the current pixel point set and the standard feature points of the standard subject head exceeds a preset threshold, the pixel points contained in the current pixel point set are determined to belong to the three-dimensional target contour.
In one embodiment, the ultrasound imaging system 10 may store a number of subject head samples that have been learned by a learning model, where learning models include, but are not limited to, classifiers such as KNN (K-Nearest Neighbor), SVM (Support Vector Machine), random forests, and neural networks. Then, when the pixel points of the current pixel point set have been determined to belong to the three-dimensional target contour, the classifier can additionally check the current pixel point set, in order to improve detection accuracy, by judging whether it belongs to the contour of the subject's head. If the current pixel point set belongs to the contour of the subject's head, it is retained; if not, it is discarded.
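A sketch of this matching-plus-classifier step (the cosine-similarity measure, the 0.9 threshold, and the classifier interface are illustrative assumptions; the embodiment only requires some similarity measure against the stored standard feature points, optionally followed by a classifier check):

```python
import numpy as np

def matches_standard(feature, standard_features, threshold=0.9):
    """True if `feature` is similar enough to any stored standard feature."""
    for std in standard_features:
        cos_sim = np.dot(feature, std) / (
            np.linalg.norm(feature) * np.linalg.norm(std) + 1e-12)
        if cos_sim > threshold:
            return True
    return False

def belongs_to_contour(feature, standard_features, classifier=None):
    """Similarity check, optionally confirmed by a trained classifier
    (e.g. a KNN or SVM with a scikit-learn-style predict method)."""
    if not matches_standard(feature, standard_features):
        return False
    if classifier is not None:
        return bool(classifier.predict(feature[None, :])[0])
    return True
```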
Referring to fig. 9, a flowchart illustrating steps of a method for determining a three-dimensional object contour according to an embodiment of the present application is shown.
Step 330, detecting the three-dimensional volume data according to a learning model to determine a region of interest in the three-dimensional volume data.
In this embodiment, feature learning may be performed on a number of subject head samples using deep learning models including, but not limited to, FCN (Fully Convolutional Networks), U-Net, and Mask R-CNN (a region-based convolutional neural network), so that after the subject's three-dimensional volume data is fed into the learning model as test data, a region of interest is determined in the three-dimensional volume data; this region of interest corresponds to the volume data of the three-dimensional target contour.
Step 332, determining the pixel points in the region of interest as the three-dimensional target contour.
The pixel points in the region of interest have corresponding three-dimensional coordinate values, so the pixel points in the determined region of interest can be taken directly as the three-dimensional target contour.
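A sketch of this step, assuming a trained segmentation network is available (the `model.predict` interface and the 0.5 threshold are placeholders for illustration, not part of the disclosure):

```python
import numpy as np

def contour_from_learning_model(volume, model, threshold=0.5):
    """Predict a per-voxel probability map and keep the region of interest."""
    # `model` is assumed to be a trained segmentation network (e.g. a U-Net)
    # taking a (1, F, H, W, 1) batch and returning per-voxel probabilities.
    prob = model.predict(volume[None, ..., None])[0, ..., 0]
    roi_mask = prob > threshold
    # The voxels inside the region of interest are taken directly as the
    # three-dimensional target contour; return their (z, y, x) coordinates.
    return np.argwhere(roi_mask)
```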
Step 204, determining the median sagittal plane of the subject's head from the three-dimensional target contour.
Since the midsagittal plane of the head of the subject is located on the determined three-dimensional target contour, the midsagittal plane of the head of the subject can be determined based on the characteristics of the midsagittal plane, the characteristics of the three-dimensional target contour, and the like.
Referring to fig. 10, a flowchart illustrating steps in one embodiment of step 204 in fig. 2 is shown, which includes the following steps:
and step 420, determining a transformation matrix between the three-dimensional target contour and the three-dimensional standard contour of the head of the tested body according to pixel point registration.
There is a definite correspondence between the determined three-dimensional target contour and the three-dimensional standard contour of the subject's head: the determined three-dimensional target contour may be aligned with the standard contour through rotation, translation, scaling, and the like, or the three-dimensional standard contour may likewise be aligned with the determined three-dimensional target contour, so that after alignment the three-dimensional target contour substantially matches the three-dimensional standard contour in spatial position and size. Alignment between the determined three-dimensional target contour and the standard contour of the subject's head can therefore be achieved by registering pixel points.
For example, let the pixel points contained in the determined three-dimensional target contour be denoted p_t and the pixel points contained in the three-dimensional standard contour of the subject's head be denoted p_s. The registration between the determined three-dimensional target contour and the three-dimensional standard contour may then be expressed as:

p_t = T p_s

where T is a 4 × 4 transformation matrix containing rotation, translation, and scaling information.
In this embodiment, pixel point registration algorithms include, but are not limited to, the LORAX method, the 4-point method, Super4PCS, and the DO (Discriminative Optimization) method; these registration algorithms follow the prior art and are not described again here.
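The role of the 4 × 4 matrix T can be illustrated in homogeneous coordinates (a minimal sketch; the composition order of rotation, scaling, and translation shown here is one common convention and is assumed for illustration):

```python
import numpy as np

def make_transform(rotation, scale, translation):
    """Build a 4 x 4 matrix T encoding rotation, scaling, and translation."""
    T = np.eye(4)
    T[:3, :3] = rotation @ np.diag(scale)  # 3x3 rotation composed with scaling
    T[:3, 3] = translation
    return T

def apply_transform(T, points):
    """Map points p_s to p_t = T p_s; `points` is an (N, 3) array."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

# The inverse operation used in step 424 below simply applies
# np.linalg.inv(T) to the pixel points of the standard median sagittal plane.
```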
Step 422, a standard median sagittal plane in the three-dimensional standard contour is obtained.
In this embodiment, the three-dimensional standard contour is determined based on a standard head of the subject, and thus, a standard median sagittal plane located in the standard head of the subject can be determined manually or automatically.
Step 424, performing an inverse transformation operation on the standard median sagittal plane according to the transformation matrix to obtain the target position of the median sagittal plane in the three-dimensional target contour.
After the transformation matrix has been obtained by the pixel point registration method, an inverse transformation operation can be performed on the standard median sagittal plane. In this embodiment, the standard median sagittal plane comprises a plurality of pixel points; these pixel points are inversely transformed according to the transformation matrix, including but not limited to rotation, translation, and scaling, to obtain the inversely transformed standard median sagittal plane, which is the target position of the median sagittal plane in the three-dimensional target contour.
Step 426, determining the pixel points located at the target position in the three-dimensional target contour as the median sagittal plane of the subject's head.
After the target position in the three-dimensional target contour has been determined, the pixel points located at that position can be taken as the median sagittal plane of the subject's head 220, and the pixel points of the determined median sagittal plane can be displayed as an image on the display 112, so that a doctor can observe the condition of the fetus from the displayed median sagittal plane.
In one embodiment, when determining the pixel points located at the target position in the three-dimensional target contour, the three-dimensional imaging module 110 may perform interpolation on those pixel points, where the interpolation includes, but is not limited to, nearest-neighbor interpolation, linear interpolation, and Lanczos interpolation. In one embodiment, the target position corresponds to the median sagittal plane of the subject's head, so the pixel points at the target position are in-plane pixel points; when interpolating the median sagittal plane, two-dimensional interpolation can be performed on the pixel points in the plane of the target position. In another embodiment, three-dimensional interpolation can be performed on the pixel points in the plane of the target position.
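A sketch of resampling the plane at the target position with (tri)linear interpolation, using SciPy's map_coordinates with order 1 (the parameterization of the plane by an origin point and two in-plane unit vectors, given as NumPy arrays, is an assumption for illustration):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_plane(volume, origin, u, v, width, height):
    """Resample a 2-D image of shape (height, width) from `volume`.

    `origin` is a corner point of the plane and `u`, `v` are orthogonal
    unit vectors spanning it, all given in (z, y, x) voxel coordinates.
    """
    jj, ii = np.meshgrid(np.arange(width), np.arange(height))
    pts = (origin[:, None, None]
           + u[:, None, None] * ii[None]
           + v[:, None, None] * jj[None])  # shape (3, height, width)
    # order=1 performs (tri)linear interpolation of the voxels around
    # each sample point; order=0 would give nearest-neighbor instead.
    return map_coordinates(volume.astype(np.float32), pts, order=1)
```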
Referring to fig. 11, a flowchart illustrating steps in one embodiment of step 204 in fig. 2 is shown, which includes the following steps:
step 430, acquiring a standard tangent plane profile corresponding to a standard median sagittal plane in the three-dimensional standard profile corresponding to the head of the tested body.
In this embodiment, the median sagittal plane in the three-dimensional target contour of the subject's head may be detected based on the characteristics of a standard median sagittal plane, which may be generated from median sagittal sections of other fetal heads obtained previously. There may be one or more standard median sagittal planes, and they may have different sizes so as to match subject heads of different sizes.
Step 432, determining the target position corresponding to the standard tangent plane contour with the highest similarity value in the three-dimensional target contour.
In this embodiment, the three-dimensional target contour may be controlled to perform slicing processing, so as to generate a preset number of candidate tangent plane contours. All slices that are spaced apart by a certain interval (or step size) in one or more specific directions within a certain range in the three-dimensional target profile may be selected as candidate slice profiles. Here, the "fixed range" may be an angular range with respect to one or more lines and/or planes in the three-dimensional target profile, or may be a range of distances with respect to one or more points, lines and/or planes in the three-dimensional target profile; by "in one or more directions" is meant that the normal to the tangent plane is in the one or more directions; the interval or step size may be a distance interval or step size or an angle interval or step size.
In this embodiment, all the sections within the whole range of the three-dimensional target contour may be selected at a certain interval or step in one or more directions. Alternatively, the candidate tangent plane contours may be selected based on a priori knowledge, removing candidates in which the median sagittal plane clearly cannot lie. For example, since the median sagittal plane of the fetal head is a longitudinal plane located in the middle of the fetal head (i.e., a plane running from the parietal region to the neck), a group of longitudinal planes in the three-dimensional target contour may be selected as the candidate tangent plane contours according to the approximate orientation of the fetus in the contour; for instance, the longitudinal planes roughly at the middle of the head (e.g., all longitudinal planes at a specific step or spacing within a specific region around the middle of the head) may be selected as the candidates. Alternatively, user input indicating the possible range of the median sagittal section may be received, and the sections within the range indicated by the user selected as candidate tangent plane contours. It is also possible simply to select all sections a certain step apart in the three-dimensional target contour, i.e., to traverse the full range of the three-dimensional volume data with a certain step.
After the slicing of the three-dimensional target contour is finished, the candidate tangent plane contours can be matched against the standard tangent plane contour, the similarity between each candidate and the standard contour determined, and a similarity value obtained for each candidate tangent plane contour.
In this embodiment, the similarity value of a candidate tangent plane contour may be the sum of the absolute differences between the gray values of its pixel points and those of the standard tangent plane contour:

E = Σ_{x∈Ω} |I_L(x) − I_R(x)|

where E is the similarity value, Ω is the image space of the candidate tangent plane contour, I_L is the data value of a pixel point within the candidate tangent plane contour, and I_R is the data value of the corresponding pixel point in the standard tangent plane contour, the correspondence being between pixel points occupying the same position in the candidate and standard contours.
In one embodiment, the similarity value of the candidate tangent plane contour may instead be a correlation coefficient between the candidate and standard tangent plane contours, for example:

E = Σ_{x∈Ω} I_L(x) I_R(x) / sqrt( Σ_{x∈Ω} I_L(x)² · Σ_{x∈Ω} I_R(x)² )

where E is the similarity value, Ω is the image space of the candidate tangent plane contour, I_L is the data value of a pixel point within the candidate tangent plane contour, and I_R is the data value of the corresponding pixel point in the standard tangent plane contour.
Therefore, once the similarity value of each candidate tangent plane contour has been determined, the position of the candidate with the highest similarity value among the preset number of candidate tangent plane contours can be selected as the target position.
In one embodiment, when there are a plurality of standard tangent plane contours, each candidate tangent plane contour may be matched against each standard tangent plane contour to determine the corresponding similarity values. For example, denote the standard tangent plane contours as {a_1, a_2, a_3, …, a_n} and the preset number of candidate tangent plane contours obtained by slicing the three-dimensional target contour as {b_1, b_2, b_3, …, b_m}. The similarity values then form the set {E(a_i, b_j) : i = 1…n, j = 1…m}, and the candidate tangent plane contour corresponding to the maximum value among a_1b_1 through a_nb_m gives the position of the median sagittal plane in the three-dimensional target contour.
Step 434, determining a pixel point located on the target position in the three-dimensional target contour as a median sagittal plane of the head of the subject.
Once the target position in the three-dimensional target contour has been determined, the pixel points located at that position constitute the median sagittal plane of the subject's head.
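A sketch of the selection loop (assuming each candidate tangent plane contour has already been resampled onto the same grid as the standard contours; the correlation measure mirrors the second similarity definition above):

```python
import numpy as np

def correlation(candidate, standard):
    """Correlation-style similarity between two equally shaped images."""
    num = np.sum(candidate * standard)
    den = np.sqrt(np.sum(candidate ** 2) * np.sum(standard ** 2)) + 1e-12
    return num / den

def best_candidate(candidates, standards):
    """Index of the candidate contour most similar to any standard contour."""
    scores = [max(correlation(c, s) for s in standards) for c in candidates]
    return int(np.argmax(scores))
```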
Referring to fig. 12, a flowchart illustrating steps in one embodiment of step 204 in fig. 2 is shown, which includes the following steps:
step 440, determining the positions of a preset number of sagittal planes in the three-dimensional target contour.
In this embodiment, as can be seen from the contour characteristics of the subject's head 220, the three-dimensional target contour is symmetrical on the two sides of the median sagittal plane. The three-dimensional target contour can therefore be sliced in the sagittal direction to obtain a preset number of sagittal planes, where the sagittal direction is the direction that divides the three-dimensional target contour into left and right parts.
Please refer to FIG. 13, which shows the positions of a preset number of sagittal planes in an embodiment of the present application. When the subject's head 220 is divided in the sagittal direction, two candidate sagittal planes may be included: a first sagittal plane S and a second sagittal plane T. In other embodiments, the number of sagittal planes sliced from the subject's head 220 in the sagittal direction is not limited to two but may be any other number, including but not limited to 3, 4, or 100.
Step 442, determining a first side contour and a second side contour of the three-dimensional target contour according to the position of the sagittal plane.
When the three-dimensional target contour is sliced in the sagittal direction, each sagittal plane divides the three-dimensional target contour into a first side contour 240 located on the left side of the sagittal plane and a second side contour 242 located on the right side of the sagittal plane.
Step 444, calculating a symmetry index of the first side contour and the second side contour with respect to the sagittal plane.
If a sagittal plane is the median sagittal plane, the first side contour 240 and the second side contour 242 on either side of it are symmetrical about that sagittal plane. A symmetry index of the first side contour 240 and the second side contour 242 with respect to the sagittal plane can therefore be calculated to determine whether the sagittal plane is the median sagittal plane.
In this embodiment, the position symmetric to each pixel point on the first side contour 240 with respect to the sagittal plane can be determined, and it can then be checked whether a second pixel point of the second side contour 242 exists at or near that position. When such a second pixel point exists near the mirrored position, the symmetry index of the sagittal plane is increased by a preset value. Here, "near" a pixel position may be taken to mean within a preset number of pixels of that position, e.g., 1, 2, or 5 pixels.
For example, suppose the three-dimensional target contour corresponds to the outer-surface region of the subject's head 220 and consider the sagittal plane S. If S is the median sagittal plane of the three-dimensional target contour, let H' be the position symmetric, with respect to S, to the first pixel point at point H on the first side contour 240; it must then be determined whether a second pixel point of the second side contour 242 exists at H'. As shown in FIG. 13, there is a second pixel point of the second side contour 242 at H', so it can be concluded that a second pixel point exists near the mirrored position of this first pixel point. Likewise, let G' be the position symmetric, with respect to S, to the first pixel point at point G on the first side contour 240; it must be determined whether a second pixel point of the second side contour 242 exists at G'. As FIG. 13 shows, there is a second pixel point of the second side contour 242 at G' as well.
In this embodiment, it can likewise be checked, for the other first pixel points on the first side contour 240, whether second pixel points of the second side contour 242 exist at their positions mirrored about the sagittal plane S, and the number of first pixel points on the first side contour 240 whose mirrored positions fall on or near the second side contour 242 can be used as the symmetry index of the sagittal plane S. For example, the first pixel points at points G and H both have matching second pixel points on the second side contour 242, so the symmetry index of the sagittal plane S is increased by 2.
For the sagittal plane T, let G'' be the position symmetric, with respect to T, to the first pixel point at point G on the first side contour 240; it must be determined whether a second pixel point of the second side contour 242 exists at or near G''. As shown in FIG. 13, no second pixel point of the second side contour 242 lies at or near G'', so this first pixel point has no symmetric counterpart with respect to T, and the symmetry check continues with the other first pixel points on the first side contour 240.
Similarly, the symmetry index of a sagittal plane can also be determined from whether first pixel points of the first side contour 240 exist at the positions of second pixel points on the second side contour 242 mirrored about the sagittal plane; the method is the same as above and is not repeated here.
In other embodiments, when determining the mid-sagittal plane of the three-dimensional target profile corresponding to the intracranial tissue structure 232 of the head of the subject, the symmetry index of each sagittal plane may also be determined according to the above-mentioned method, and thus, the description thereof is omitted.
In other embodiments, the symmetry index for each sagittal plane may be determined according to other methods. The application is not limited thereto.
Step 446, determining one of the predetermined number of sagittal planes as the median sagittal plane according to the symmetry index.
In this embodiment, the sagittal plane having the largest symmetry index among the preset number of sagittal planes may be determined to be the median sagittal plane. For example, the sagittal plane S in FIG. 13 has a greater symmetry index than the sagittal plane T, so the sagittal plane S can be taken as the median sagittal plane of the three-dimensional target contour.
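A sketch of the symmetry-index computation (representing the contour as a set of 3-D points and a candidate sagittal plane by a point p0 and a unit normal n; the tolerance radius is an assumed parameter):

```python
import numpy as np
from scipy.spatial import cKDTree

def symmetry_index(contour_pts, p0, n, tol=2.0):
    """Count contour points whose mirror image across the plane (p0, n)
    also lies on (or within `tol` voxels of) the contour."""
    n = n / np.linalg.norm(n)
    signed = (contour_pts - p0) @ n                 # signed distance to plane
    mirrored = contour_pts - 2.0 * signed[:, None] * n[None, :]
    dists, _ = cKDTree(contour_pts).query(mirrored)
    return int(np.sum(dists <= tol))

# The candidate plane with the largest index is then taken as the median
# sagittal plane, e.g.:
#   best_p0, best_n = max(planes, key=lambda pl: symmetry_index(pts, *pl))
```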
The ultrasonic imaging method described above detects the three-dimensional target contour of the subject's head from the three-dimensional volume data and detects the median sagittal plane from the features of that contour, so that a doctor can conveniently diagnose the fetus from the median sagittal plane.
Referring to fig. 14, a flowchart illustrating steps of an ultrasound imaging method according to an embodiment of the present application is shown. The ultrasonic imaging method comprises the following steps:
step 500, three-dimensional volume data of a subject's head is acquired.
Step 500 in this embodiment is the same as step 200 in the above embodiment, and therefore, the description thereof is omitted.
Step 502 determines a three-dimensional target profile of the subject's head from the three-dimensional volume data of the subject's head.
Step 502 in this embodiment is the same as step 202 in the above embodiments, and therefore, the description thereof is omitted.
Step 504, determining a median sagittal plane of the subject's head from the three-dimensional target contour.
Step 504 in this embodiment is the same as step 204 in the above embodiments, and therefore, the description thereof is omitted.
Step 506, performing a local correction operation on the median sagittal plane according to the characteristics of the median sagittal plane to obtain a locally corrected target median sagittal plane.
Since the median sagittal plane in the three-dimensional target contour is obtained from the spatial structure of the subject's head, the resulting median sagittal plane may be only an approximate location. A local correction operation may therefore be performed on the median sagittal plane obtained above to detect a more precise target median sagittal plane, which may be displayed on the display 112 for the physician's reference during diagnosis.
In one embodiment, the median sagittal plane contains the falx cerebri, which appears hyperechoic (as a high echo), while sections parallel to the falx produce lower echoes than the median sagittal plane. Echo intensity can be determined from the gray values of the pixel points in the volume data; that is, different gray levels of the pixel points in the volume data represent different echo intensities. Therefore, after the target position of the median sagittal plane in the three-dimensional target contour is obtained, high-echo plane detection, such as plane detection on gray-scale features, can be performed near the target position to obtain a more accurate target median sagittal plane. The plane detection method includes, but is not limited to, the Hough transform, the randomized Hough transform, the Radon transform, the RANSAC algorithm, and the like.
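As one concrete possibility among the detectors listed above, the following sketch refits the plane with a RANSAC-style search over hyperechoic voxels near the coarse position. The volume layout (axes ordered z, y, x), the neighborhood mask, and the gray threshold are assumptions made for illustration only.

```python
import numpy as np

def ransac_plane(points, n_iter=500, dist_tol=1.0, rng=None):
    """Fit a plane to 3-D points with RANSAC; returns (normal, d) for n·p + d = 0."""
    rng = rng if rng is not None else np.random.default_rng()
    best_inliers, best_model = 0, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ p0
        inliers = np.sum(np.abs(points @ normal + d) < dist_tol)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model

def refine_mid_sagittal(volume, near_coarse_mask, gray_thresh):
    """Refit the falx plane from hyperechoic voxels near the coarse plane position."""
    zs, ys, xs = np.nonzero((volume > gray_thresh) & near_coarse_mask)
    points = np.column_stack([xs, ys, zs]).astype(float)
    return ransac_plane(points)
```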
In one embodiment, the target median sagittal plane can also be determined based on symmetry, since the sagittal planes on either side of the median sagittal plane are symmetric to each other.
Referring now to FIG. 15, a flowchart illustrating steps in one embodiment of step 506 of FIG. 14 is shown, including:
Step 520, determining a first sagittal plane located at a predetermined step length in a first direction of the median sagittal plane.
Step 522, determining a second sagittal plane located at the predetermined step length in a second direction of the median sagittal plane, wherein the first direction is opposite to the second direction, and the first sagittal plane and the second sagittal plane are both parallel to the median sagittal plane.
Please refer to FIG. 16, which is a diagram illustrating the mid-sagittal plane detection in one embodiment of the present application.
In this embodiment, for the median sagittal plane S of the three-dimensional target contour of the subject's head 220, the first sagittal plane S1 is determined at a predetermined step length d in a first direction (e.g., to the left) of the median sagittal plane S, and the second sagittal plane S2 is determined at the same predetermined step length d in a second direction (e.g., to the right) of the median sagittal plane S. To assess the symmetry of the median sagittal plane, the first sagittal plane S1 and the second sagittal plane S2 are both parallel to the median sagittal plane S and are each a predetermined step length d away from it.
Step 524, calculating a symmetry index of the first sagittal plane or the second sagittal plane with respect to the median sagittal plane.
In this embodiment, the position symmetric about the median sagittal plane S of each first pixel point on the first sagittal plane S1 can be determined, and it can be determined whether a second pixel point on the second sagittal plane S2 exists in the vicinity of that position. When such a second pixel point exists, the symmetry index of the median sagittal plane S is increased by a preset value. Here, the vicinity of a pixel point position may be taken as a neighborhood of a preset number of pixel points (e.g., 1, 2, or 5) centered on that position.
For example, let K' be the position of the first pixel point at point K on the first sagittal plane S1 after reflection about the median sagittal plane S; it is then necessary to determine whether a second pixel point of the second sagittal plane S2 exists at the position K'. As shown in fig. 16, a second pixel point of the second sagittal plane S2 exists at or near the position K', so it can be determined that this first pixel point on the first sagittal plane S1 has a matching second pixel point on the second sagittal plane S2 at its position symmetric about the median sagittal plane S, and the symmetry index of the median sagittal plane S can be increased by a preset value. The same determination can be made for the other first pixel points on the first sagittal plane S1, and the number of first pixel points on the first sagittal plane S1 whose symmetric positions fall on or near the second sagittal plane S2 can be taken as the symmetry index of the median sagittal plane S. For example, the first pixel point at point K has a matching second pixel point on the second sagittal plane S2, so the symmetry index of the median sagittal plane S can be increased by 1. If a first pixel point on the first sagittal plane S1 has no matching second pixel point on the second sagittal plane S2, the determination simply continues with the other first pixel points.
Similarly, the symmetry index of the median sagittal plane can also be determined in the reverse direction, i.e., according to whether a first pixel point of the first sagittal plane S1 exists at the position symmetric, about the median sagittal plane S, to each second pixel point on the second sagittal plane S2. The method of determining the symmetry index is the same as above and is not repeated here.
In other embodiments, the symmetry index for each sagittal plane may be determined according to other methods. The application is not limited thereto.
Step 526, determining the target median sagittal plane from the symmetry index.
In this embodiment, since different values of the preset step length yield different calculated symmetry indexes for the median sagittal plane S, each preset step length can be assigned a corresponding preset range, so that the symmetry index of the target median sagittal plane for that preset step length should fall within that range. It can then be judged whether the symmetry index of the median sagittal plane lies within the preset range corresponding to the preset step length, in order to decide whether the median sagittal plane needs to be corrected or can be determined to be the target median sagittal plane.
For example, when the symmetry index of the median sagittal plane S is not within the preset range corresponding to the preset step length, the accuracy of the median sagittal plane S is insufficient, so the position of the median sagittal plane can be re-selected or corrected to obtain a more accurate median sagittal plane.
When the symmetry index of the median sagittal plane S lies within the preset range corresponding to the preset step length, the median sagittal plane S is determined to be the target median sagittal plane.
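A minimal sketch of this acceptance test follows. It assumes the volume is indexed so that axis 0 runs left-right (so volume[i] is one sagittal slice), that bright pixels are found by a simple gray threshold, and that the preset range [lo, hi] for the given step is supplied by the caller; all names and parameters are illustrative.

```python
import numpy as np

def plane_pair_symmetry(volume, mid_idx, step, gray_thresh, tol=1):
    """Symmetry index of the two sagittal slices at mid_idx ± step: the number of
    bright pixels in one slice with a bright pixel at (or near) the same
    position in the other slice."""
    s1 = volume[mid_idx - step] > gray_thresh   # first sagittal plane S1
    s2 = volume[mid_idx + step] > gray_thresh   # second sagittal plane S2
    score = 0
    for r, c in zip(*np.nonzero(s1)):
        r0, c0 = max(r - tol, 0), max(c - tol, 0)
        if s2[r0:r + tol + 1, c0:c + tol + 1].any():
            score += 1
    return score

def accept_mid_sagittal(volume, mid_idx, step, gray_thresh, lo, hi):
    """True if the symmetry index for this step lies in its preset range [lo, hi];
    False means the plane position should be re-selected or corrected."""
    return lo <= plane_pair_symmetry(volume, mid_idx, step, gray_thresh) <= hi
```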
The ultrasonic imaging method described above obtains a more accurate target median sagittal plane by locally correcting the median sagittal plane obtained earlier; after the target median sagittal plane is determined, it can be displayed on the display 112 for the physician's reference in diagnosing the subject.
Referring to fig. 17, a flowchart illustrating steps of an ultrasound imaging method according to an embodiment of the present application is shown. The ultrasonic imaging method comprises the following steps:
step 600, three-dimensional volume data of the head of the measured body is obtained.
Step 600 in this embodiment is the same as step 200 in the above embodiments, and therefore, is not described again.
At step 602, the position of at least one pair of symmetrical anatomical structures in the head of the subject is determined from the three-dimensional volume data of the subject's head.
Referring to fig. 4, the orbits 230, ears 224, nose 228, mouth 226, cheeks 234, intracranial lateral ventricles, cerebellar hemispheres, etc. of the subject's head 220 are all symmetric about the median sagittal plane. Thus, the median sagittal plane of the subject's head 220 may be determined from the locations of at least one pair of such symmetric tissue structures.
In this embodiment, a number of annotated images may be stored within the ultrasound imaging system 10, and a learning model may be constructed by performing feature learning on these annotated images. The image annotation may be a region-of-interest frame containing the target, a mask (Mask) that accurately segments the target, and the category of the tissue structure corresponding to each region-of-interest frame or mask.
In this way, when detecting the tissue structures contained in the three-dimensional volume data of the subject's head 220, the three-dimensional volume data can be input into the constructed learning model as test data; the learning model identifies the regions of interest in the three-dimensional volume data, and the identified regions of interest can be classified to determine the category of the tissue structure corresponding to each region of interest.
Referring now to FIG. 18, a flowchart illustrating steps of one embodiment of step 602 of FIG. 17 is shown, including:
Step 620, obtaining the pixel points of the three-dimensional volume data within a sliding window region.
Step 622, performing feature extraction on the pixel points within the sliding window region.
In this embodiment, the three-dimensional volume data comprises a number of pixel points and may be sliced along a preset direction to generate a number of two-dimensional slices, and feature extraction may be performed on each two-dimensional slice through a preset sliding window. As the sliding window moves to different positions, feature points at those positions are extracted. In other embodiments, the preset sliding window may itself be three-dimensional, so that volume data inside the three-dimensional sliding window can be selected directly for feature extraction. The feature extraction algorithm includes, but is not limited to, PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), Haar features, texture features, neural networks, and the like. Performing feature extraction through a preset sliding window can improve the speed of tissue structure detection.
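The following sketch illustrates the sliding-window matching idea on a single two-dimensional slice. It stands in a normalized gray-level histogram for the feature extractor and cosine similarity for the matcher; the window size, stride, and similarity threshold are invented values, not values from the patent.

```python
import numpy as np

def window_feature(patch, bins=16):
    """A stand-in feature: the normalised grey-level histogram of the patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def slide_and_match(slice_2d, ref_feature, win=32, stride=16, sim_thresh=0.9):
    """Slide a win×win window over the slice; return the top-left corners of
    windows whose feature matches the reference tissue-structure feature."""
    hits = []
    h, w = slice_2d.shape
    for top in range(0, h - win + 1, stride):
        for left in range(0, w - win + 1, stride):
            f = window_feature(slice_2d[top:top + win, left:left + win])
            denom = np.linalg.norm(f) * np.linalg.norm(ref_feature) + 1e-9
            if f @ ref_feature / denom > sim_thresh:   # cosine similarity
                hits.append((top, left))               # region-of-interest candidate
    return hits
```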
Step 624, matching the feature points extracted from the sliding window region against the feature points of a reference tissue structure, and determining whether the pixel points within the sliding window region contain a region of interest, wherein the region of interest contains at least one pair of symmetric tissue structures.
The ultrasound imaging system 10 may store characteristic points of a reference tissue structure (including, but not limited to, reference tissue structures of the orbit, ear, nose, mouth, cheek, lateral ventricle of the cranium, cerebellar hemisphere, etc.). In this way, when the corresponding feature point is extracted from the three-dimensional volume data, it can be matched with the feature point of the reference tissue structure to determine whether the current sliding window region includes the region of interest.
Step 626, when the pixel points in the sliding window area contain the region of interest, determining the position of the region of interest.
In an embodiment, when the similarity between the feature points contained in the current sliding window region and the feature points of the reference tissue structure exceeds a preset threshold, it may be determined that the current sliding window region contains a region of interest. Since the pixel points within the current sliding window region all have corresponding three-dimensional coordinate values, the position of the region of interest can be obtained once it is determined that the current sliding window region contains one. The locations of at least one pair of symmetric tissue structures are obtained by determining the regions of interest within each two-dimensional slice.
In an embodiment, since the annotation of an image further includes the category of the tissue structure corresponding to each region-of-interest frame or mask, after a region of interest is determined, the category of the tissue structure corresponding to that region of interest may also be determined through a learning model, where the learning model includes, but is not limited to, classifiers such as KNN (K-Nearest Neighbor), SVM (Support Vector Machine), random forests, neural networks, and the like. After determining the tissue structure categories within the regions of interest, the location of each pair of symmetric tissue structures may be determined, including but not limited to the locations of the symmetric orbits 230, ears 224, nose 228, mouth 226, cheeks 234, intracranial lateral ventricles, cerebellar hemispheres, and the like.
In one embodiment, the location of at least one pair of symmetric tissue structures in the three-dimensional volumetric data may be determined directly from the learning model.
Referring now to FIG. 19, a flowchart illustrating steps of one embodiment of step 602 of FIG. 17 is shown, including:
step 630, obtaining a region of interest in the three-dimensional volume data according to the first learning model.
In this embodiment, feature learning may be performed on samples of a number of subjects' heads based on deep learning, using learning models including but not limited to R-CNN, Fast R-CNN, SSD, YOLO, and the like, so that after the three-dimensional volume data of the subject is input into the learning model as test data, a region of interest in the three-dimensional volume data is determined; the region of interest may correspond to the volume data of a three-dimensional target contour.
Step 632, determining a prediction region corresponding to the region of interest.
When the region of interest is determined, since its localization accuracy for the tissue structure may be low, a regression process (e.g., bounding-box regression) may be performed on the region of interest to obtain a corresponding prediction region.
Step 634, the location of the prediction region is determined.
After the prediction region is determined, it can be classified through a fully connected layer to obtain the category of the tissue structure corresponding to the prediction region. When the tissue structure corresponding to the prediction region belongs to a symmetric tissue structure, the position of the prediction region can also be determined. The tissue structures of the prediction regions include, but are not limited to, symmetric tissue structures such as the orbits, ears, nose, mouth, cheeks, intracranial lateral ventricles, cerebellar hemispheres, and the like.
Referring now to FIG. 20, a flowchart illustrating steps of one embodiment of step 602 of FIG. 17 is shown, including:
and step 640, performing detection processing on the three-dimensional volume data according to a second learning model to determine a region of interest in the three-dimensional volume data.
In this embodiment, feature learning may be performed on a number of samples of subjects' heads based on deep learning, using learning models including but not limited to FCN, U-Net, Mask R-CNN, and the like, so that after the three-dimensional volume data of the subject is input into the learning model as test data, a region of interest in the three-dimensional volume data is determined; the region of interest may correspond to the volume data of a three-dimensional target contour.
Step 642, determining the position corresponding to the region of interest.
After the region of interest is determined, an upsampling operation may be performed on the region of interest to directly determine the location and type of the region of interest. The tissue structure of the region of interest includes, but is not limited to, symmetric tissue structures such as orbit, ear, nose, mouth, cheek, intracranial lateral ventricle, cerebellar hemisphere, etc.
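As an illustration of how detector output might be turned into the positions of symmetric structure pairs, the sketch below groups hypothetical (category, center) detections by category, keeps the categories detected exactly twice, and orders each pair left-to-right; the detection model itself is treated as a black box and all coordinates are invented.

```python
import numpy as np

def pair_symmetric_structures(detections):
    """Group (category, center_xyz) detections into left/right pairs, keeping
    only categories detected exactly twice and ordering each pair along x."""
    by_class = {}
    for name, center in detections:
        by_class.setdefault(name, []).append(np.asarray(center, dtype=float))
    pairs = {}
    for name, centers in by_class.items():
        if len(centers) == 2:                        # exactly one left/right pair
            left, right = sorted(centers, key=lambda c: c[0])
            pairs[name] = (left, right)
    return pairs

# e.g. two ear and two mouth-corner detections (all coordinates invented)
pairs = pair_symmetric_structures([
    ("ear", (10.0, 42.0, 37.0)), ("ear", (86.0, 44.0, 39.0)),
    ("mouth_corner", (30.0, 20.0, 35.0)), ("mouth_corner", (66.0, 20.0, 35.0)),
])
```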
At step 604, the midsagittal plane of the subject's head is determined based on the locations of the at least one pair of symmetric tissue structures.
Referring now to FIG. 21, a flowchart illustrating steps in one embodiment of step 604 of FIG. 17 is shown, including:
step 650, determining a first alignment position of a first tissue structure of the at least one pair of symmetric tissue structures.
At step 652, a second alignment position of a second tissue structure of the at least one pair of symmetric tissue structures is determined.
In this embodiment, each pair of symmetric tissue structures includes a first tissue structure and a second tissue structure; thus, when determining the position of at least one pair of symmetric tissue structures, a first alignment position of the first tissue structure and a second alignment position of the second tissue structure in each symmetric pair can be determined.
Please refer to fig. 22, which is a schematic diagram illustrating an alignment position in an embodiment of the present application.
In one embodiment, the alignment position in a symmetric tissue structure is the central position of each tissue structure; for example, the alignment position of an ear is the center of the ear, and the two ears 224 respectively provide a first alignment position A in the first tissue structure and a second alignment position B in the second tissue structure.
In one embodiment, the alignment position in a symmetric tissue structure can be a non-central position: for example, the alignment positions of the mouth 226 may be the mouth corners on its two sides, including the first alignment position D and the second alignment position E, and the alignment positions of the orbits 230 may be the inner eye corners near the two sides of the nose, such as the first alignment position I and the second alignment position J. In other embodiments, the alignment positions in a symmetric tissue structure may be located at the centroids of the first tissue structure and the second tissue structure, where the centroid of a tissue structure may be calculated according to a centroid algorithm, which is not described here.
Step 654, determining the center point of the position connecting line between the first alignment position and the second alignment position.
Please refer to fig. 23, which is a schematic diagram of the center points of the position connecting lines in fig. 22. The center point of the connecting line AB between the first alignment position A and the second alignment position B of the symmetric ears 224 is C, and the center point of the connecting line DE between the first alignment position D and the second alignment position E of the symmetric mouth corners 226 is F.
Step 656, determining a plane perpendicular to the position connecting line and passing through the central point as the median sagittal plane.
In this embodiment, with the subject's head 220 facing the viewer as shown in fig. 22, the plane perpendicular to the connecting line AB and passing through the center point C and the plane perpendicular to the connecting line DE and passing through the center point F can both be the plane S; thus, the plane S can be determined to be the median sagittal plane.
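In code, this construction reduces to two lines of linear algebra: the plane normal is the direction of the connecting line, and the plane passes through its midpoint. The sketch below returns the plane as (normal, d) with normal·p + d = 0; the coordinates in the usage example are invented.

```python
import numpy as np

def mid_sagittal_from_pair(p_first, p_second):
    """Plane through the midpoint of two alignment positions, perpendicular to
    the line joining them; returned as (normal, d) with normal·p + d = 0."""
    a = np.asarray(p_first, dtype=float)
    b = np.asarray(p_second, dtype=float)
    normal = (b - a) / np.linalg.norm(b - a)   # direction of the connecting line
    center = (a + b) / 2.0                     # center point of the connecting line
    return normal, -normal @ center

# e.g. the two ear alignment positions A and B of Fig. 22 (coordinates invented)
normal, d = mid_sagittal_from_pair([10.0, 42.0, 37.0], [86.0, 44.0, 39.0])
```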
In one embodiment, there may be two or more pairs of symmetric anatomical structures, and thus, when determining the midsagittal plane of the subject's head 220 based on the locations of the two or more pairs of symmetric anatomical structures, the locations of the two or more pairs of symmetric anatomical structures may be combined to determine the midsagittal plane.
Referring now to FIG. 24, a flowchart illustrating steps in one embodiment of step 604 of FIG. 17 is shown, including:
step 660, the midsagittal plane of each pair of symmetric tissue structures is obtained.
Referring to FIG. 25, a schematic diagram of the median sagittal planes of symmetric tissue structures in one embodiment of the present application is shown. In fig. 25, the symmetric ears 224 correspond to a median sagittal plane Y perpendicular to the connecting line AB and passing through the center point C, and the symmetric mouth corners 226 correspond to a median sagittal plane X perpendicular to the connecting line DE and passing through the center point F.
Step 662, determining the average plane of the median sagittal planes corresponding to the two or more pairs of symmetric tissue structures as the median sagittal plane of the subject's head.
In this embodiment, when there are two pairs of symmetric tissue structures, the average plane of the median sagittal planes corresponding to the two pairs may be taken as the bisecting plane located between those two median sagittal planes, where the bisecting plane is the plane equidistant from the two planes.
For example, for the midsagittal plane X, the mathematical equation for the plane in which it lies can be expressed as:
$A_1X + B_1Y + C_1Z + D_1 = 0$

where $A_1$, $B_1$, $C_1$, and $D_1$ are all constants.
For the midsagittal plane Y, the mathematical equation for the plane in which it lies can be expressed as:
$A_2X + B_2Y + C_2Z + D_2 = 0$

where $A_2$, $B_2$, $C_2$, and $D_2$ are all constants.
The mean planes corresponding to the median sagittal plane X and the median sagittal plane Y can be expressed as:
$A_3X + B_3Y + C_3Z + D_3 = 0$

where $A_3$, $B_3$, $C_3$, and $D_3$ are all constants.
Assuming a point P(X, Y, Z) lying on the mean plane, the distance from point P to the median sagittal plane X is equal to the distance from point P to the median sagittal plane Y, according to the geometric characteristics of the mean plane. Therefore:

$$\frac{|A_1X + B_1Y + C_1Z + D_1|}{\sqrt{A_1^2 + B_1^2 + C_1^2}} = \frac{|A_2X + B_2Y + C_2Z + D_2|}{\sqrt{A_2^2 + B_2^2 + C_2^2}}$$
Therefore, the bisecting plane S of the median sagittal planes corresponding to the two pairs of symmetric tissue structures can be determined from the above formula, and this bisecting plane S is the median sagittal plane.
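A small sketch of this bisecting-plane computation follows. It assumes each median sagittal plane is given in the form n·p + d = 0 with a unit normal, so that |n·p + d| is the point-to-plane distance appearing in the formula above; the sketch simply adds the consistently oriented plane equations and renormalizes.

```python
import numpy as np

def bisecting_plane(n1, d1, n2, d2):
    """Plane equidistant from two near-parallel planes given as n·p + d = 0
    with unit normals; flips one normal if needed so both point the same way."""
    n1, n2 = np.asarray(n1, dtype=float), np.asarray(n2, dtype=float)
    if n1 @ n2 < 0:
        n2, d2 = -n2, -d2
    n, d = n1 + n2, d1 + d2
    norm = np.linalg.norm(n)
    return n / norm, d / norm

# the bisector of the planes x = 1 and x = 3 is x = 2:
n, d = bisecting_plane([1, 0, 0], -1.0, [1, 0, 0], -3.0)  # n ≈ (1, 0, 0), d ≈ -2
```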
In one embodiment, when there are three or more pairs of symmetric tissue structures, the bisecting plane of the median sagittal planes of two pairs may be calculated first, and then the bisecting plane of that result and the median sagittal plane of the next pair, and so on; once the bisecting plane with the median sagittal plane of the last pair has been calculated, the average plane of the three or more pairs of symmetric tissue structures is obtained.
Referring now to FIG. 26, a flowchart illustrating steps in one embodiment of step 604 of FIG. 17 is shown, including:
step 670, a center point of a connecting line between the first tissue structure and the second tissue structure in each pair of symmetric tissue structures is obtained.
Please refer to FIG. 27, which is a schematic diagram of the median sagittal plane of symmetric tissue structures in an embodiment of the present application. Fig. 27 includes: the center point C of the connecting line AB between the positions of the first and second tissue structures of the symmetric ears 224, and the center point F of the connecting line DE between the positions of the first and second tissue structures of the symmetric mouth corners 226.
Step 672, performing a fitting operation on the center points of each pair of symmetric tissue structures to obtain a fitted plane.
In this embodiment, the fitting operation may be performed on the center points according to the least-squares method to obtain a fitted plane S.
For example, for a spatial plane, its mathematical equation can be expressed as:
$AX + BY + CZ + 1 = 0$
When there are n points $(X_i, Y_i, Z_i)$ to be fitted to this plane S, the system can be written in matrix form:
$$\begin{pmatrix} X_1 & Y_1 & Z_1 \\ X_2 & Y_2 & Z_2 \\ \vdots & \vdots & \vdots \\ X_n & Y_n & Z_n \end{pmatrix}\begin{pmatrix} A \\ B \\ C \end{pmatrix} = \begin{pmatrix} -1 \\ -1 \\ \vdots \\ -1 \end{pmatrix}$$
Multiplying both sides of the above equation by the transpose of the coordinate matrix yields:
$$\begin{pmatrix} X_1 & \cdots & X_n \\ Y_1 & \cdots & Y_n \\ Z_1 & \cdots & Z_n \end{pmatrix}\begin{pmatrix} X_1 & Y_1 & Z_1 \\ \vdots & \vdots & \vdots \\ X_n & Y_n & Z_n \end{pmatrix}\begin{pmatrix} A \\ B \\ C \end{pmatrix} = -\begin{pmatrix} X_1 & \cdots & X_n \\ Y_1 & \cdots & Y_n \\ Z_1 & \cdots & Z_n \end{pmatrix}\begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix}$$
simplifying to obtain:
$$\begin{pmatrix} \sum X_i^2 & \sum X_iY_i & \sum X_iZ_i \\ \sum X_iY_i & \sum Y_i^2 & \sum Y_iZ_i \\ \sum X_iZ_i & \sum Y_iZ_i & \sum Z_i^2 \end{pmatrix}\begin{pmatrix} A \\ B \\ C \end{pmatrix} = \begin{pmatrix} -\sum X_i \\ -\sum Y_i \\ -\sum Z_i \end{pmatrix}$$
wherein i is an integer of 1 to n.
Thus, the coefficients A, B, C of the fitted plane S equation are:
$$\begin{pmatrix} A \\ B \\ C \end{pmatrix} = \begin{pmatrix} \sum X_i^2 & \sum X_iY_i & \sum X_iZ_i \\ \sum X_iY_i & \sum Y_i^2 & \sum Y_iZ_i \\ \sum X_iZ_i & \sum Y_iZ_i & \sum Z_i^2 \end{pmatrix}^{-1}\begin{pmatrix} -\sum X_i \\ -\sum Y_i \\ -\sum Z_i \end{pmatrix}$$
Step 674, determining the fitted plane as the median sagittal plane of the subject's head.
The fitted plane S obtained above is the median sagittal plane of the subject's head.
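Numerically, the normal equations above need not be formed explicitly: applying a least-squares solver to $M\,(A, B, C)^{\mathsf T} = -\mathbf{1}$ yields the same coefficients. A minimal sketch follows, with invented center-point coordinates.

```python
import numpy as np

def fit_plane_least_squares(points):
    """Least-squares fit of AX + BY + CZ + 1 = 0 to the pair center points,
    equivalent to solving the normal equations written out above."""
    m = np.asarray(points, dtype=float)               # n × 3 matrix of points
    coeffs, *_ = np.linalg.lstsq(m, -np.ones(len(m)), rcond=None)
    return coeffs                                     # (A, B, C)

# center points of three hypothetical symmetric pairs (coordinates invented)
abc = fit_plane_least_squares([[48.0, 40.0, 37.5],
                               [47.5, 70.0, 30.0],
                               [48.2, 55.0, 50.0]])
```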
Referring now to FIG. 28, a flowchart illustrating steps in one embodiment of step 604 of FIG. 17 is shown, including:
at step 680, a first average position of the first alignment positions of the first tissue structure on the first side in each pair of symmetric tissue structures is determined.
Step 682, determine a second average position of the second alignment positions of the second tissue structures located on the second side in each pair of symmetric tissue structures.
Step 684, determining the mean center point of the connecting line between the first average position and the second average position.
Step 686, determining a plane perpendicular to the position connection line and passing through the mean center point as a median sagittal plane of the head of the subject.
Please refer to FIG. 29, which is a schematic front-view diagram of the median sagittal plane of symmetric tissue structures in one embodiment of the present application. Fig. 29 includes: the center point C of the connecting line AB between the positions of the first and second tissue structures of the symmetric ears 224, and the center point F of the connecting line DE between the positions of the first and second tissue structures of the symmetric mouth corners 226. In other embodiments, the number of pairs of symmetric tissue structures is not limited to two.
In this embodiment, each pair of symmetric tissue structures includes a first alignment position on a first side (e.g., the left side); for example, the first alignment position of the first tissue structure of the symmetric ears 224 is A, and the first alignment position of the first tissue structure of the symmetric mouth corners 226 is D, so the first average position of A and D on the first side is determined to be W. If the coordinates of the first alignment position A are $(x_1, y_1, z_1)$ and the coordinates of the first alignment position D are $(x_2, y_2, z_2)$, then the coordinates of the first average position W may be expressed as:
$$W = \left( \frac{x_1 + x_2}{2},\ \frac{y_1 + y_2}{2},\ \frac{z_1 + z_2}{2} \right)$$
Similarly, the second alignment position of the second tissue structure of the symmetric ears 224 is B, and the second alignment position of the second tissue structure of the symmetric mouth corners 226 is E, so the second average position of B and E on the second side is determined to be U. If the coordinates of the second alignment position B are $(x_3, y_3, z_3)$ and the coordinates of the second alignment position E are $(x_4, y_4, z_4)$, then the coordinates of the second average position U may be expressed as:
$$U = \left( \frac{x_3 + x_4}{2},\ \frac{y_3 + y_4}{2},\ \frac{z_3 + z_4}{2} \right)$$
Thus, once the first average position W and the second average position U are determined, the mean center point V of the connecting line WU can be determined, and the plane perpendicular to the connecting line WU and passing through the mean center point V can be taken as the median sagittal plane of the subject's head 220.
In one embodiment, because the image features of different symmetric tissue structures differ, the reliability of their detection may also differ; for example, the image features of the orbits are more prominent. Accordingly, the average position on each side may be calculated using weights set according to the category of each symmetric tissue structure.
For example, the symmetric ears 224 may be given weight α and the symmetric mouth corners 226 weight β; following the equations above:
the coordinates of the first average position W may be expressed as:
$$W = \left( \frac{\alpha x_1 + \beta x_2}{\alpha + \beta},\ \frac{\alpha y_1 + \beta y_2}{\alpha + \beta},\ \frac{\alpha z_1 + \beta z_2}{\alpha + \beta} \right)$$
the coordinates of the second average position U may be expressed as:
$$U = \left( \frac{\alpha x_3 + \beta x_4}{\alpha + \beta},\ \frac{\alpha y_3 + \beta y_4}{\alpha + \beta},\ \frac{\alpha z_3 + \beta z_4}{\alpha + \beta} \right)$$
Thus, the median sagittal plane of the subject's head 220 may be determined from the category weight values and the corresponding average positions of the symmetric tissue structures.
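The weighted construction can be summarized in a short sketch: compute the weighted first and second average positions W and U, then take the plane through their midpoint V perpendicular to the line WU. The weights and coordinates in the usage example are invented for illustration.

```python
import numpy as np

def mid_sagittal_from_weighted_pairs(first_positions, second_positions, weights):
    """Weighted average positions W and U of the first-side and second-side
    alignment positions, then the plane through their midpoint V that is
    perpendicular to the line WU; returned as (normal, d) with n·p + d = 0."""
    f = np.asarray(first_positions, dtype=float)   # one row per structure pair
    s = np.asarray(second_positions, dtype=float)
    w = np.asarray(weights, dtype=float)[:, None]
    W = (w * f).sum(axis=0) / w.sum()              # first average position
    U = (w * s).sum(axis=0) / w.sum()              # second average position
    normal = (U - W) / np.linalg.norm(U - W)
    V = (W + U) / 2.0                              # mean center point
    return normal, -normal @ V

# ears weighted α = 0.6, mouth corners weighted β = 0.4 (all values invented)
n, d = mid_sagittal_from_weighted_pairs(
    [[10.0, 42.0, 37.0], [30.0, 20.0, 35.0]],      # A (ear) and D (mouth corner)
    [[86.0, 44.0, 39.0], [66.0, 20.0, 35.0]],      # B (ear) and E (mouth corner)
    [0.6, 0.4])
```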
The ultrasonic imaging method described above detects at least one pair of symmetric tissue structures of the subject's head from the three-dimensional volume data and detects the median sagittal plane from those structures, so that a doctor can conveniently diagnose the fetus from the median sagittal plane.
Referring to fig. 30, a flowchart illustrating steps of an ultrasound imaging method according to an embodiment of the present application is shown. The ultrasonic imaging method comprises the following steps:
step 700, three-dimensional volume data of a subject's head is acquired.
Step 700 in this embodiment is the same as step 600 in the above embodiment, and therefore is not described again.
At step 702, the position of at least one pair of symmetrical anatomical structures in the subject's head is determined from the three-dimensional volume data of the subject's head.
Step 702 in this embodiment is the same as step 602 in the above embodiments, and therefore, is not described again.
Step 704, determining a median sagittal plane of the subject's head based on the locations of the at least one pair of symmetric tissue structures.
Step 704 in this embodiment is the same as step 604 in the above embodiments, and therefore, the description thereof is omitted.
Step 706, performing a local correction operation on the median sagittal plane according to the characteristics of the median sagittal plane to obtain a locally corrected target median sagittal plane.
Step 706 in this embodiment is the same as step 506 in the above embodiments, and therefore, is not described again.
The ultrasonic imaging method described above obtains a more accurate target median sagittal plane by locally correcting the median sagittal plane obtained earlier; after the target median sagittal plane is determined, it can be displayed on the display 112 for the physician's reference in diagnosing the subject.
Referring to fig. 31, a block diagram of an ultrasound imaging system 80 in one embodiment of the present application is shown. As shown in fig. 31, the ultrasound imaging system 80 may implement the above embodiments. The ultrasound imaging system 80 may include a processor 800, a storage device 802, the probe 100, a control circuit 804, the display 112, and a computer program (instructions) stored in the storage device 802 and executable on the processor 800; the ultrasound imaging system 80 may further include other hardware components, such as a communication device, keys, a keyboard, and the like, which are not described here. The processor 800 may exchange data with the probe 100, the control circuit 804, the storage device 802, and the display 112 via signal lines 806.
The processor 800 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor or any conventional processor; it is the control center of the ultrasound imaging system 80 and connects the various components of the system using various interfaces and lines. In this embodiment, the processor 800 may be configured to implement all functions of the three-dimensional image processing module 110, and may also integrate the functions of the beam forming module 106 and the signal processing module 108, as described in the foregoing embodiments.
The control circuit 804 may include the functions of the transmitting circuit 102, the receiving circuit 104, the beam forming module 106 and/or the signal processing module 108 in the above embodiments, to which reference may be made for details.
The storage device 802 may be used to store the computer programs and/or modules, and the processor 800 implements the various functions of the ultrasonic imaging method by running or executing the computer programs and/or modules stored in the storage device 802 and invoking the data stored in the storage device 802. The storage device 802 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function, and the like. In addition, the storage device 802 may include a high-speed random access memory, and may also include a non-volatile storage device such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage device.
In this embodiment, the storage device 802 stores the three-dimensional volume data of the subject's head, and the processor 800 performs detection of the three-dimensional target contour or of at least one pair of symmetric tissue structures of the subject's head. The storage device 802 may further store a number of annotated images, so that the processor 800 constructs a learning model by performing feature learning on them, where the annotation of an image may be a region-of-interest frame containing a target, a mask (Mask) that accurately segments the target, or the category of the tissue structure corresponding to each region-of-interest frame or mask.
The display 112 may display a User Interface (UI) or Graphical User Interface (GUI), a picture corresponding to the three-dimensional volume data of the subject's head, or the median sagittal plane. The display 112 may also serve as both an input device and an output device, and may include at least one of a Liquid Crystal Display (LCD), a thin-film-transistor LCD (TFT-LCD), an Organic Light-Emitting Diode (OLED) touch display, a flexible touch display, a three-dimensional (3D) touch display, and the like.
The processor 800 reads the executable program code stored in the storage device 802 and runs the program corresponding to that code to perform the ultrasonic imaging method in any of the previous embodiments.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The foregoing detailed description of the embodiments of the present application has illustrated the principles and implementations of the present application; the description of the above embodiments is provided only to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (45)

1. An ultrasound imaging method, characterized in that it comprises:
acquiring three-dimensional volume data of a head of a tested object;
determining a three-dimensional target profile of the subject's head from the three-dimensional volume data of the subject's head;
determining a median sagittal plane of the subject's head from the three dimensional target profile.
2. A method of ultrasound imaging according to claim 1, wherein said determining a three-dimensional target profile of the head of the subject from three-dimensional volume data of the head of the subject comprises:
detecting the three-dimensional volume data of the subject's head according to an image segmentation algorithm, and determining the three-dimensional target contour of the subject's head.
3. A method of ultrasound imaging according to claim 1, wherein said determining a three-dimensional target profile of the head of the subject from three-dimensional volume data of the head of the subject comprises:
controlling the three-dimensional volume data of the head of the tested body to perform slicing operation to generate a first preset number of two-dimensional sections;
determining a two-dimensional target contour in each two-dimensional section;
and fitting the three-dimensional target contour according to the two-dimensional target contour of each two-dimensional section.
4. A method of ultrasound imaging according to claim 1, wherein said determining a three-dimensional target profile of the head of the subject from three-dimensional volume data of the head of the subject comprises:
and determining the three-dimensional target contour of the head of the tested body corresponding to the three-dimensional volume data of the head of the tested body according to the learning model.
5. The ultrasound imaging method according to claim 4, wherein said determining a three-dimensional target contour of the subject's head corresponding to the three-dimensional volume data of the subject's head from the learning model comprises:
determining a pixel point set consisting of each pixel point in the three-dimensional volume data and other pixel points located in the surrounding neighborhood of the pixel point according to a first learning model;
controlling each pixel point set to perform feature extraction, and obtaining feature points corresponding to each pixel point set;
controlling the feature points contained in the current pixel point set to be matched with the standard feature points corresponding to the head of the tested body;
and when the current pixel point set is matched with the standard characteristic point, determining the pixel points contained in the current pixel point set as the three-dimensional target contour.
6. The ultrasound imaging method according to claim 4, wherein said determining a three-dimensional target contour of the subject's head corresponding to the three-dimensional volume data of the subject's head from the learning model comprises:
performing detection processing on the three-dimensional volume data according to a second learning model to determine a region of interest in the three-dimensional volume data;
and determining pixel points in the interested region as the three-dimensional target contour.
7. An ultrasound imaging method according to any one of claims 1 to 6, wherein said determining a three-dimensional target profile of the subject's head from three-dimensional volume data of the subject's head comprises:
controlling the three-dimensional volume data of the head of the tested body to be displayed on a display;
responding to the input operation of a user to calibrate the region of interest;
determining a three-dimensional target contour of the subject's head from the region of interest.
8. The ultrasound imaging method of claim 7, wherein said operating to calibrate a region of interest in response to a user input comprises:
receiving the operation of a target frame drawn by a user, and determining the region of interest according to the drawn target frame; or
receiving the operation of points or lines drawn by a user, and determining the region of interest according to the drawn points or lines.
9. The method of ultrasound imaging according to claim 1, wherein said volume data comprises a plurality of pixel points, said determining a median sagittal plane of the subject's head from said three dimensional target contour comprising:
determining a transformation matrix between the three-dimensional target contour and the three-dimensional standard contour of the head of the tested body according to pixel point registration;
acquiring a standard median sagittal plane in the three-dimensional standard contour;
performing inverse transformation operation on the standard median sagittal plane according to the transformation matrix to obtain a target position of the median sagittal plane in the three-dimensional target contour;
and determining a pixel point positioned at the target position in the three-dimensional target contour as a median sagittal plane of the head of the measured body.
10. A method of ultrasound imaging according to claim 1, wherein said determining a median sagittal plane of the subject's head from said three dimensional target profile comprises:
acquiring a standard tangent plane contour corresponding to a median sagittal plane in a three-dimensional standard contour corresponding to the head of the tested body;
determining a target position corresponding to the standard tangent plane contour with the highest similarity value in the three-dimensional target contour;
and determining a pixel point positioned on the target position in the three-dimensional target contour as a median sagittal plane of the head of the measured body.
11. The method of claim 10, wherein said determining the target position corresponding to the standard tangent plane contour with the highest similarity value in the three-dimensional target contour comprises:
controlling the three-dimensional target contour to carry out slicing processing to generate a second preset number of candidate tangent plane contours;
controlling the candidate tangent plane contour to be matched with the standard tangent plane contour, and determining the similarity between each candidate tangent plane contour and the standard tangent plane contour to obtain the similarity value of each candidate tangent plane contour;
and determining the position of the candidate tangent plane contour with the highest similarity value in the second preset number of candidate tangent plane contours as the target position.
12. A method of ultrasound imaging according to claim 1, wherein said determining a median sagittal plane of the subject's head from said three dimensional target profile comprises:
determining the positions of a third preset number of sagittal planes in the three-dimensional target contour;
determining a first side contour and a second side contour in the three-dimensional target contour according to the position of the sagittal plane;
calculating a symmetry index of the first or second lateral contour relative to the sagittal plane;
determining one of the third preset number of sagittal planes as the mid-sagittal plane according to the symmetry index.
13. The method of ultrasound imaging according to claim 12, wherein said three-dimensional object contour comprises a number of pixel points, said calculating an index of symmetry of said first side contour or said second side contour with respect to said sagittal plane comprises:
determining a pixel point location of a first pixel point located on the first side contour that is symmetric to the sagittal plane;
judging whether a second pixel point located in the second side outline exists in the vicinity of the position of the pixel point;
and when a second pixel point located in the second side contour exists near the position of the pixel point, controlling the symmetry index of the sagittal plane to increase a preset value.
14. The method of ultrasound imaging according to claim 13, wherein said determining one of said third preset number of sagittal planes as said median sagittal plane from said symmetry index comprises:
determining the sagittal plane corresponding to the maximum symmetry index in the third preset number of sagittal planes as the median sagittal plane.
15. A method of ultrasound imaging according to claim 1, wherein the three-dimensional target profile comprises a region corresponding to an outer surface of a head of a fetus; or
the three-dimensional target profile comprises a region corresponding to the intracranial tissue structures of the head of the fetus.
16. The method of ultrasound imaging according to claim 1, wherein the midsagittal plane of the subject's head includes a number of pixel points, and wherein said determining the midsagittal plane of the subject's head from the three-dimensional object profile comprises:
performing an image interpolation operation on the pixel points located at the position of the median sagittal plane to generate the interpolated median sagittal plane.
17. The ultrasound imaging method of claim 1, further comprising:
controlling the midsagittal plane to be displayed within a display.
18. The ultrasound imaging method of claim 1, further comprising:
and performing local correction operation on the median sagittal plane according to the characteristics of the median sagittal plane to obtain a target median sagittal plane after local correction.
19. The method of ultrasound imaging according to claim 18, wherein said performing a local correction operation on said median sagittal plane according to characteristics of said median sagittal plane, resulting in a locally corrected target median sagittal plane, comprises:
performing high-echo plane detection on the median sagittal plane to obtain a locally corrected target median sagittal plane.
20. The method of ultrasound imaging according to claim 18, wherein said performing a local correction operation on said median sagittal plane according to characteristics of said median sagittal plane, resulting in a locally corrected target median sagittal plane, comprises:
determining a first sagittal plane located at a preset step length in a first direction of the median sagittal plane;
determining a second sagittal plane located in a second direction of the median sagittal plane for the preset step size, wherein the first direction is opposite the second direction and the first and second sagittal planes are both parallel to the median sagittal plane;
calculating a symmetry index of the first sagittal plane or the second sagittal plane with respect to the median sagittal plane;
determining the target median sagittal plane from the symmetry index.
21. The method of ultrasound imaging according to claim 20, wherein the first sagittal plane and the second sagittal plane include pixel points, said calculating a symmetry index of the first sagittal plane or the second sagittal plane with respect to the median sagittal plane comprising:
determining a pixel point location of a first pixel point located on the first sagittal plane that is symmetric to the median sagittal plane;
judging whether a second pixel point located in the second sagittal plane exists in the vicinity of the pixel point position;
and when a second pixel point located on the second sagittal plane exists near the pixel point position, controlling the symmetry index of the median sagittal plane to increase a preset value.
22. A method of ultrasound imaging according to claim 21, wherein the symmetry index of the target median sagittal plane with respect to the preset step length has a preset range; said determining the target median sagittal plane from the symmetry index comprises:
judging whether the symmetry index of the midsagittal plane is located in a preset range corresponding to the preset step length;
when the symmetry index of the median sagittal plane is not positioned in the preset range corresponding to the preset step length, correcting the position of the median sagittal plane;
and when the symmetry index of the median sagittal plane is positioned in a preset range corresponding to the preset step length, determining the median sagittal plane as the target median sagittal plane.
23. An ultrasound imaging method, characterized in that it comprises:
acquiring three-dimensional volume data of a head of a tested object;
determining the position of at least one pair of symmetrical anatomical structures in the head of the subject from the three-dimensional volume data of the head of the subject;
determining a median sagittal plane of the subject's head from the locations of the at least one pair of symmetric tissue structures.
24. The method of ultrasound imaging according to claim 23, wherein said three-dimensional volume data comprises a plurality of pixel points, and said determining the location of at least one pair of symmetric tissue structures in the head of said subject from said three-dimensional volume data of the head of said subject comprises:
acquiring pixel points of the three-dimensional volume data in the sliding window area;
controlling pixel points in the sliding window area to perform feature extraction;
controlling the feature points extracted from the sliding window area to be matched with the feature points of the reference tissue structure, and determining whether pixel points in the sliding window area contain an interested area, wherein the interested area comprises at least one pair of symmetrical structures;
and when the pixel points in the sliding window area contain the interested area, determining the position of the interested area.
25. A method of ultrasound imaging according to claim 23, wherein said determining the position of at least one pair of symmetric tissue structures in the head of the subject from three-dimensional volume data of the head of the subject comprises:
the location of at least one pair of symmetric tissue structures in the subject's head is determined according to a learning model.
26. The method of ultrasound imaging according to claim 25, wherein said determining the location of at least one pair of symmetric tissue structures in the subject's head according to a learning model comprises:
obtaining a region of interest in the three-dimensional volume data according to a first learning model;
determining a prediction region corresponding to the region of interest;
determining a location of the prediction region.
27. The method of ultrasound imaging according to claim 25, wherein said determining the location of at least one pair of symmetric tissue structures in the subject's head according to a learning model comprises:
performing detection processing on the three-dimensional volume data according to a second learning model to determine a region of interest in the three-dimensional volume data;
and determining the position corresponding to the region of interest.
28. The ultrasound imaging method of any of claims 24 to 27, further comprising:
determining the category of the tissue structure corresponding to the region of interest.
29. A method of ultrasound imaging according to claim 23, wherein said determining the median sagittal plane of the subject's head from the locations of said at least one pair of symmetric tissue structures comprises:
determining a first alignment position of a first tissue structure of the at least one pair of symmetric tissue structures;
determining a second alignment position of a second tissue structure of the at least one pair of symmetric tissue structures;
determining a central point of a position connecting line between the first comparison position and the second comparison position;
and determining the plane which is perpendicular to the position connecting line and passes through the central point as the median sagittal plane.
30. The method of ultrasound imaging according to claim 23, wherein said subject's head includes two or more pairs of symmetric anatomical structures, said determining a median sagittal plane of said subject's head from the locations of said at least one pair of symmetric anatomical structures comprising:
determining a median sagittal plane of the subject's head from the two or more pairs of symmetric tissue structures.
31. The method of ultrasonic imaging according to claim 30, wherein said determining a midsagittal plane of the subject's head from the two or more pairs of symmetric tissue structures comprises:
acquiring the median sagittal plane of each pair of symmetrical tissue structures;
and determining the average plane of the median sagittal planes corresponding to the two or more pairs of symmetrical tissue structures as the median sagittal plane of the head of the tested body.
32. The method of ultrasonic imaging according to claim 30, wherein said determining a midsagittal plane of the subject's head from the two or more pairs of symmetric tissue structures comprises:
acquiring a central point of a position connecting line between a first tissue structure and a second tissue structure in each pair of symmetrical tissue structures;
controlling the central point of each pair of symmetrical tissue structures to be subjected to fitting operation to obtain a fitting plane;
and determining the fitting plane as a median sagittal plane of the head of the measured body.
33. The method of ultrasonic imaging according to claim 30, wherein said determining a midsagittal plane of the subject's head from the two or more pairs of symmetric tissue structures comprises:
determining a first average position of the first alignment positions of the first tissue structures on the first side in each pair of symmetrical tissue structures;
determining a second average position of second alignment positions of second tissue structures located at a second side in each pair of symmetric tissue structures;
determining a mean center point of a position connection between the first mean position and the second mean position;
and determining a plane which is perpendicular to the position connecting line and passes through the mean center point as a median sagittal plane of the head of the measured body.
34. The ultrasound imaging method according to claim 30, wherein said determining the median sagittal plane of the subject's head from the two or more pairs of symmetric tissue structures comprises:
determining the category of each of the two or more pairs of symmetric tissue structures, and acquiring a weight value for each category;
determining a first average position from the first alignment positions and the corresponding weight values of the first tissue structures located on a first side in each pair of symmetric tissue structures;
determining a second average position from the second alignment positions and the corresponding weight values of the second tissue structures located on a second side in each pair of symmetric tissue structures;
determining the mean center point of the connecting line between the first average position and the second average position; and
determining the plane that is perpendicular to the connecting line and passes through the mean center point as the median sagittal plane of the subject's head.
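Claims 33 and 34 differ only in whether the per-pair positions are weighted, so one sketch covers both: uniform weights reduce claim 34 to claim 33. The category-to-weight mapping is assumed, since the claims leave it open:

```python
import numpy as np

def plane_from_weighted_pairs(first_positions, second_positions, weights=None):
    """Claims 33/34 sketch: average the first-side and second-side
    alignment positions (optionally weighted by category), then take the
    plane through the mean center point, perpendicular to the line
    joining the two average positions."""
    first = np.asarray(first_positions, dtype=float)
    second = np.asarray(second_positions, dtype=float)
    w = np.ones(len(first)) if weights is None else np.asarray(weights, dtype=float)
    first_avg = (w[:, None] * first).sum(axis=0) / w.sum()
    second_avg = (w[:, None] * second).sum(axis=0) / w.sum()
    center = (first_avg + second_avg) / 2.0     # mean center point
    normal = second_avg - first_avg             # connecting-line direction
    return center, normal / np.linalg.norm(normal)
```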
35. The ultrasound imaging method according to claim 23, wherein the at least one pair of symmetric tissue structures comprises one or more of the orbits, the ears, the nostrils, the corners of the mouth, and the intracranial lateral ventricles and cerebellar hemispheres.
36. The ultrasound imaging method according to claim 23, further comprising:
performing a local correction operation on the median sagittal plane according to characteristics of the median sagittal plane to obtain a locally corrected target median sagittal plane.
37. The ultrasound imaging method according to claim 36, wherein said performing a local correction operation on the median sagittal plane according to characteristics of the median sagittal plane to obtain a locally corrected target median sagittal plane comprises:
performing hyperechoic plane detection on the median sagittal plane to obtain the corrected target median sagittal plane.
38. The ultrasound imaging method according to claim 36, wherein said performing a local correction operation on the median sagittal plane according to characteristics of the median sagittal plane to obtain a locally corrected target median sagittal plane comprises:
determining a first sagittal plane located at a preset step from the median sagittal plane in a first direction;
determining a second sagittal plane located at the preset step from the median sagittal plane in a second direction, wherein the first direction is opposite to the second direction, and the first sagittal plane and the second sagittal plane are both parallel to the median sagittal plane;
calculating a symmetry index of the first sagittal plane and the second sagittal plane with respect to the median sagittal plane; and
determining the target median sagittal plane from the symmetry index.
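The two auxiliary planes of claim 38 are simple offsets of the median sagittal plane along its unit normal. A sketch, with the step expressed in the same units as the volume coordinates:

```python
import numpy as np

def offset_planes(center, normal, step):
    """Claim 38 sketch: first and second sagittal planes, parallel to the
    median sagittal plane and offset by the preset step in opposite
    directions along its unit normal."""
    center = np.asarray(center, dtype=float)
    normal = np.asarray(normal, dtype=float)
    first = (center + step * normal, normal)    # first direction
    second = (center - step * normal, normal)   # opposite direction
    return first, second
```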
39. The ultrasound imaging method according to claim 38, wherein the first sagittal plane and the second sagittal plane comprise pixel points, and said calculating a symmetry index of the first sagittal plane and the second sagittal plane with respect to the median sagittal plane comprises:
determining the position, symmetric about the median sagittal plane, of a first pixel point located on the first sagittal plane;
determining whether a second pixel point located on the second sagittal plane exists near that position; and
when such a second pixel point exists near that position, increasing the symmetry index of the median sagittal plane by a preset value.
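A hedged sketch of the per-pixel check in claim 39. It assumes the first and second sagittal planes have been resampled as 2-D slices on a common grid, so the mirror image of a pixel on the first plane falls at the same grid coordinates on the second plane; the brightness threshold, neighborhood radius, and increment are all assumed parameters, not values from the patent:

```python
import numpy as np

def symmetry_index(slice_a, slice_b, threshold=128, radius=1, increment=1):
    """Claim 39 sketch: for every bright pixel on the first slice, the
    index grows by a preset value if a bright pixel exists on the second
    slice near the mirrored position."""
    index = 0
    h, w = slice_a.shape
    for y, x in zip(*np.nonzero(slice_a >= threshold)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        if (slice_b[y0:y1, x0:x1] >= threshold).any():
            index += increment
    return index
```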
40. The ultrasound imaging method according to claim 39, wherein the symmetry index of the median sagittal plane has a preset range corresponding to the preset step, and said determining the target median sagittal plane from the symmetry index comprises:
determining whether the symmetry index of the median sagittal plane falls within the preset range corresponding to the preset step;
when the symmetry index of the median sagittal plane does not fall within the preset range corresponding to the preset step, correcting the position of the median sagittal plane; and
when the symmetry index of the median sagittal plane falls within the preset range corresponding to the preset step, determining the median sagittal plane as the target median sagittal plane.
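Claim 40 then reads as an accept-or-correct loop. A sketch with an assumed correction rule and iteration cap, since the claim specifies neither; `index_fn` is an assumed helper that extracts the two offset slices and returns their symmetry index:

```python
import numpy as np

def refine_plane(center, normal, step, accept_range, index_fn, max_iters=20):
    """Claim 40 sketch: keep correcting the plane position until its
    symmetry index falls inside the preset range for this step."""
    center = np.asarray(center, dtype=float)
    normal = np.asarray(normal, dtype=float)
    low, high = accept_range
    for _ in range(max_iters):
        if low <= index_fn(center, normal, step) <= high:
            return center, normal              # accepted as the target plane
        center = center + 0.25 * step * normal  # hypothetical correction rule
    return center, normal                      # best effort after the cap
```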
41. The ultrasound imaging method according to claim 23, further comprising:
displaying the median sagittal plane on a display.
42. An ultrasound imaging system, characterized in that the ultrasound imaging system comprises:
a probe configured to acquire three-dimensional volume data of a subject's head; and
a processor connected to the probe and configured to determine a three-dimensional target profile of the subject's head from the three-dimensional volume data of the subject's head, and to determine the median sagittal plane of the subject's head from the three-dimensional target profile.
43. The ultrasound imaging system according to claim 42, further comprising:
a display connected to the processor, wherein the processor is configured to control the display to display the median sagittal plane.
44. An ultrasound imaging system, characterized in that the ultrasound imaging system comprises:
a probe configured to acquire three-dimensional volume data of a subject's head; and
a processor connected to the probe and configured to determine the locations of at least one pair of symmetric tissue structures in the subject's head from the three-dimensional volume data of the subject's head, and to determine the median sagittal plane of the subject's head from the locations of the at least one pair of symmetric tissue structures.
45. The ultrasound imaging system according to claim 44, further comprising:
a display connected to the processor, wherein the processor is configured to control the display to display the median sagittal plane.
CN201811591966.1A 2018-12-25 2018-12-25 Ultrasonic imaging method and system Active CN111368586B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811591966.1A CN111368586B (en) 2018-12-25 2018-12-25 Ultrasonic imaging method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811591966.1A CN111368586B (en) 2018-12-25 2018-12-25 Ultrasonic imaging method and system

Publications (2)

Publication Number Publication Date
CN111368586A true CN111368586A (en) 2020-07-03
CN111368586B CN111368586B (en) 2021-04-20

Family

ID=71208090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811591966.1A Active CN111368586B (en) 2018-12-25 2018-12-25 Ultrasonic imaging method and system

Country Status (1)

Country Link
CN (1) CN111368586B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7986823B2 * 2007-05-14 2011-07-26 Siemens Aktiengesellschaft System and method for consistent detection of mid-sagittal planes for magnetic resonance brain scans
US20140371591A1 * 2011-12-22 2014-12-18 Samsung Electronics Co., Ltd. Method for automatically detecting mid-sagittal plane by using ultrasound image and apparatus thereof
CN102930602A * 2012-10-20 2013-02-13 Northwest University Method for reconstructing a three-dimensional facial skin surface model from tomographic images
CN104414680A * 2013-08-21 2015-03-18 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Three-dimensional ultrasonic imaging method and system
CN104757993A * 2014-01-07 2015-07-08 Samsung Medison Co., Ltd. Method and medical imaging apparatus for displaying medical images
EP2982306A1 * 2014-08-05 2016-02-10 Samsung Medison Co., Ltd. Ultrasound diagnosis apparatus
US20160045152A1 * 2014-08-12 2016-02-18 General Electric Company System and method for automated monitoring of fetal head descent during labor
CN106102585A * 2015-02-16 2016-11-09 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Display processing method for three-dimensional imaging data, and three-dimensional ultrasonic imaging method and system
CN107106143A * 2015-05-07 2017-08-29 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Three-dimensional ultrasonic imaging method and apparatus
CN107203997A * 2016-03-16 2017-09-26 Shanghai United Imaging Healthcare Co., Ltd. Method for segmenting the left and right brain hemispheres
CN105894508A * 2016-03-31 2016-08-24 Shanghai United Imaging Healthcare Co., Ltd. Method for evaluating the automatic positioning quality of medical images
CN109009226A * 2018-07-25 2018-12-18 Shenzhen Datu Kechuang Technology Development Co., Ltd. Three-dimensional ultrasonic imaging method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BABAK A. ARDEKANI et al.: "Automatic Detection of the Mid-Sagittal Plane in 3-D Brain Images", IEEE Transactions on Medical Imaging *
IHAR VOLKAU et al.: "Extraction of the midsagittal plane from morphological neuroimages using the Kullback–Leibler's measure", Medical Image Analysis *
YUE LI et al.: "Automated Corpus Callosum Segmentation in Midsagittal Brain MR Images", ICTACT Journal on Image and Video Processing *
YI YAN et al.: "Clinical application study of an intelligent three-dimensional ultrasound imaging system for acquiring the median sagittal plane of the fetal brain", Chinese Journal of Ultrasonography *
WU XINING et al.: "Prenatal ultrasound application study of fetal facial contour lines", Chinese Journal of Medical Ultrasound (Electronic Edition) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258533A * 2020-10-26 2021-01-22 Dalian University of Technology Method for segmenting the cerebellar vermis in ultrasound images
CN112258533B * 2020-10-26 2024-02-02 Dalian University of Technology Method for segmenting the cerebellar vermis in ultrasound images
WO2022099705A1 * 2020-11-16 2022-05-19 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Early-pregnancy fetus ultrasound imaging method and ultrasound imaging system
WO2022099704A1 * 2020-11-16 2022-05-19 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Ultrasonic imaging method and ultrasonic imaging system of fetus in middle and late pregnancy
WO2022141085A1 * 2020-12-29 2022-07-07 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Ultrasonic detection method and ultrasonic imaging system
CN113781547A * 2021-08-05 2021-12-10 Shenyang Advanced Medical Equipment Technology Incubation Center Co., Ltd. Head symmetry axis identification method and device, storage medium and computer equipment
WO2023133933A1 * 2022-01-14 2023-07-20 Shantou Institute of Ultrasonic Instruments Co., Ltd. Ultrasonic brain standard plane imaging and automatic abnormal-area detection and display method
WO2023133929A1 * 2022-01-14 2023-07-20 Shantou Institute of Ultrasonic Instruments Co., Ltd. Ultrasound-based human tissue symmetry detection and analysis method

Also Published As

Publication number Publication date
CN111368586B (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN111368586B (en) Ultrasonic imaging method and system
US11229419B2 (en) Method for processing 3D image data and 3D ultrasonic imaging method and system
US11534134B2 (en) Three-dimensional ultrasound imaging method and device
US7783095B2 (en) System and method for fetal biometric measurements from ultrasound data and fusion of same for estimation of fetal gestational age
CN106659473B (en) Ultrasonic imaging apparatus
US20120078102A1 (en) 3-dimensional (3d) ultrasound system using image filtering and method for operating 3d ultrasound system
Hu et al. Automated placenta segmentation with a convolutional neural network weighted by acoustic shadow detection
CN115004223A (en) Method and system for automatic detection of anatomical structures in medical images
CN115429325A (en) Ultrasonic imaging method and ultrasonic imaging equipment
CN112508902A White matter hyperintensity grading method, electronic device and storage medium
TWI697010B (en) Method of obtaining medical sagittal image, method of training neural network and computing device
US20220249060A1 (en) Method for processing 3d image data and 3d ultrasonic imaging method and system
KR102483122B1 (en) System and method for determining condition of fetal nervous system
WO2020133236A1 (en) Spinal imaging method and ultrasonic imaging system
CN113017695A (en) Ultrasound imaging method, system and computer readable storage medium
CN111862014A (en) ALVI automatic measurement method and device based on left and right ventricle segmentation
KR101024857B1 (en) Ultrasound system and method for performing color modeling processing on three-dimensional ultrasound image
Nirmala et al. Measurement of nuchal translucency thickness in first trimester ultrasound fetal images for detection of chromosomal abnormalities
CN112617899A (en) Ultrasound imaging method, system and computer readable storage medium
WO2022134049A1 (en) Ultrasonic imaging method and ultrasonic imaging system for fetal skull
CN114708973B (en) Device and storage medium for evaluating human health
Alzubaidi et al. Conversion of Pixel to Millimeter in Ultrasound Images: A Methodological Approach and Dataset
Chaudhari et al. The Automated Screening of Ultrasound Images for Nuchal Translucency using Auxiliary U-Net for Semantic Segmentation
CN116762093A (en) Ultrasonic detection method and ultrasonic imaging system
WO2020133124A1 (en) Medical sagittal plane image acquisition method, neural network training method, and computer apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200703

Assignee: Shenzhen Mindray Animal Medical Technology Co.,Ltd.

Assignor: SHENZHEN MINDRAY BIO-MEDICAL ELECTRONICS Co.,Ltd.

Contract record no.: X2022440020009

Denomination of invention: Ultrasound imaging method and system

Granted publication date: 20210420

License type: Common License

Record date: 20220804