WO2021120065A1 - Automatic measurement method and ultrasound imaging system for an anatomical structure - Google Patents

Automatic measurement method and ultrasound imaging system for an anatomical structure

Info

Publication number
WO2021120065A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
measurement
recognition
measurement target
ultrasound
Prior art date
Application number
PCT/CN2019/126388
Other languages
English (en)
Chinese (zh)
Inventor
邹耀贤
林穆清
王泽兵
Original Assignee
深圳迈瑞生物医疗电子股份有限公司 (Shenzhen Mindray Bio-Medical Electronics Co., Ltd.)
Application filed by 深圳迈瑞生物医疗电子股份有限公司 (Shenzhen Mindray Bio-Medical Electronics Co., Ltd.)
Priority to PCT/CN2019/126388
Priority to CN202011506495.7A
Publication of WO2021120065A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08: Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B8/0858: Detecting organic movements or changes involving measuring tissue layers, e.g. skin, interfaces
    • A61B8/0866: Detecting organic movements or changes involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • A61B8/44: Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
    • A61B8/4444: Constructional features related to the probe
    • A61B8/46: Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
    • A61B8/461: Displaying means of special interest
    • A61B8/52: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215: Devices using data or image processing involving processing of medical diagnostic data

Definitions

  • This application relates to the field of medical equipment, and more specifically to an automatic measurement method for an anatomical structure and an ultrasound imaging system.
  • Ultrasound measurement is a common method to obtain the size of tissues or lesions.
  • Many ultrasound manufacturers have integrated automatic measurement algorithms into their systems. For example, in obstetric measurement, many manufacturers support the automatic measurement of commonly used items such as head circumference, biparietal diameter, abdominal circumference, and femur length, which has contributed greatly to the efficiency of clinical examination.
  • In ultrasound examination, multi-window mode is a commonly used image display method, in which multiple ultrasound images (most commonly dual windows) are displayed at the same time.
  • A switch control allows the user to activate a particular window (hereinafter referred to as the active window); the image is scanned into this window in real time, while the remaining windows display previously scanned images.
  • For automatic measurement, however, the multi-window mode often causes great trouble, because the system does not know which window's image the user wants to measure.
  • A common approach is to automatically measure the image of whichever window is currently active. This requires the user to scan an image and immediately perform the measurement; otherwise the automatically measured image is not the image the doctor wants.
  • Moreover, some doctors are accustomed to laying out all the sections first and then performing the measurements together. This forces the user to switch windows in order to measure, which increases the number of operation steps.
  • In view of this, the present application provides an automatic measurement method and an ultrasound imaging system for anatomical structures, which can identify, according to the measurement item to be measured, an ultrasound image containing that measurement item, and automatically measure the measurement item in the identified ultrasound image, improving the efficiency of automatic measurement of anatomical structures.
  • In a first aspect, an embodiment of the present application provides an automatic measurement method for an anatomical structure, including:
  • an image acquisition step: acquiring an ultrasound image, the ultrasound image being related to at least one anatomical structure of the biological tissue, the at least one anatomical structure having at least one measurement item;
  • a measurement item acquisition step: obtaining the measurement item to be measured of the anatomical structure;
  • an image recognition step: recognizing, from the ultrasound image, a recognition image containing a measurement target corresponding to the measurement item to be measured;
  • a positioning step: positioning the measurement target in the recognition image; and
  • an automatic measurement step: measuring the measurement target.
  • In a second aspect, an embodiment of the present application also provides an automatic measurement method for an anatomical structure, including:
  • an image acquisition step: acquiring an ultrasound image, the ultrasound image being related to at least one anatomical structure of the biological tissue, the at least one anatomical structure having at least one measurement item;
  • a measurement item acquisition step: obtaining the measurement item to be measured of the anatomical structure;
  • an image recognition step: recognizing, from the ultrasound image, a recognition image containing a measurement target corresponding to the measurement item to be measured; and
  • an automatic measurement step: measuring the measurement target in the recognition image.
  • In a third aspect, an embodiment of the present application also provides an automatic measurement method for an anatomical structure, including:
  • an image recognition step: recognizing, from the ultrasound image, a recognition image containing a measurement target corresponding to a measurement item, wherein the measurement item is a measurement item to be measured, and the recognition image is at least one of the ultrasound images;
  • a positioning step: positioning the measurement target in the recognition image; and
  • an automatic measurement step: measuring the measurement target.
  • In a fourth aspect, an embodiment of the present application also provides an automatic measurement method for an anatomical structure, including:
  • an image recognition step: recognizing, from the ultrasound image, a recognition image containing a measurement target corresponding to a measurement item, wherein the measurement item is a measurement item to be measured, and the recognition image is at least one of the ultrasound images; and
  • an automatic measurement step: measuring the measurement target in the recognition image.
  • In a fifth aspect, an embodiment of the present application also provides an ultrasound imaging system, including:
  • an ultrasound probe, used to transmit ultrasonic waves to biological tissue and receive ultrasonic echoes to obtain ultrasonic echo signals;
  • a processor, configured to process the ultrasonic echo signals to obtain an ultrasound image of the biological tissue; and
  • a memory, used to store executable program instructions;
  • wherein the processor is configured to execute the executable program instructions, so that the processor performs the automatic measurement method described in any one of the first to fourth aspects.
  • The embodiments of the present application provide an automatic measurement method for anatomical structures and an ultrasound imaging system. According to the measurement item to be measured, the ultrasound image containing that measurement item is identified, and the measurement item in the ultrasound image is automatically measured, which improves the efficiency of measuring anatomical structures.
  • Fig. 1 shows a schematic block diagram of an ultrasound imaging system according to an embodiment of the present application;
  • Fig. 2 shows a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application;
  • Fig. 3 shows a schematic flowchart of the automatic measurement step in an automatic measurement method for an anatomical structure according to an embodiment of the present application;
  • Fig. 4 shows a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application;
  • Fig. 5 shows a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application;
  • Fig. 6 shows a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application.
  • Fig. 1 shows a schematic block diagram of an ultrasound imaging system according to an embodiment of the present application.
  • The ultrasound imaging system 100 provided in this embodiment includes an ultrasound probe 101, a processor 102, a memory 103, and a display 104.
  • The ultrasound probe 101 is used to transmit ultrasonic waves to biological tissue and receive ultrasonic echoes to obtain ultrasonic echo signals.
  • The processor 102 is configured to process the ultrasonic echo signals to obtain an ultrasound image of the anatomical structure of the biological tissue, and to automatically measure the anatomical structure based on the ultrasound image.
  • The memory 103 stores executable computer program instructions.
  • When the processor 102 executes the executable computer program instructions, the processor 102 performs the automatic measurement of the anatomical structure to obtain a measurement result, for example a measurement result of a measurement target.
  • The display 104 is used to display the ultrasound image, the measurement result measured by the processor, the measurement target, the measurement item to be measured, and the like.
  • This embodiment gives an exemplary introduction to the method, performed by the processor, for automatically measuring an anatomical structure based on a measurement item to be measured that is designated by the user.
  • From the acquired ultrasound images of the anatomical structure, the recognition image containing the measurement target corresponding to the measurement item to be measured is automatically recognized, and the measurement target in the recognition image is then measured directly.
  • The user only needs to specify the measurement item to be measured; the user neither needs to identify, according to that item, the ultrasound image containing the corresponding measurement target, nor needs to manually measure the measurement target in the ultrasound image.
  • This simplifies the operation process and improves the efficiency of measuring anatomical structures.
  • Referring to FIG. 2, a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application is shown.
  • The automatic measurement method is used to automatically measure the anatomical structure of biological tissue after the ultrasonic echo is processed, as shown in FIG. 2.
  • The method includes:
  • Step S11: an image acquisition step, acquiring an ultrasound image, the ultrasound image being related to at least one anatomical structure of the biological tissue, the at least one anatomical structure having at least one measurement item.
  • The anatomical structure has at least one measurement item.
  • For example, during obstetric ultrasound examination, the fetus has measurement items that need to be measured, such as biparietal diameter, head circumference, abdominal circumference, and femur length; during abdominal ultrasound examination, the liver and kidneys in the subject's abdomen are observed, corresponding respectively to the measurement items of liver size and kidney size.
  • Different anatomical structures of the biological tissue have different measurement items.
  • In one embodiment, the processor 102 processes the ultrasonic echo acquired by the ultrasonic probe 101 to generate an ultrasound image. In another embodiment, the processor 102 processes the ultrasonic echo acquired by the ultrasonic probe 101, generates an ultrasound image and stores it in the memory 103, and during the image acquisition step obtains the ultrasound image of the anatomical structure of the biological tissue from the memory 103; the ultrasound image and the processing result are displayed on the display 104.
  • In the image acquisition step, at least one ultrasound image of the anatomical structure is acquired.
  • The display 104 displays one or more ultrasound images of the anatomical structure of the biological tissue.
  • For example, several ultrasound images of the fetal head are displayed on the display 104 at the same time.
  • Alternatively, ultrasound images of different anatomical structures of the biological tissue are displayed on the display 104; for example, ultrasound images of the fetal head and the fetal abdomen are displayed simultaneously.
  • In some embodiments, the display 104 has multiple display windows, and each display window displays one or more ultrasound images of the anatomy. For example, there are two display windows on the display 104, one of which displays an ultrasound image of the fetal head, and the other displays an ultrasound image of the fetal abdomen.
  • Measurement items and measurement results related to the anatomical structure may also be displayed.
  • Step S12: a measurement item acquisition step, obtaining the measurement item to be measured.
  • In the measurement item acquisition step (S12), the measurement items that need to be measured for the anatomical structure of the biological tissue are acquired.
  • In some embodiments, the measurement items of each of the at least one anatomical structure of the biological tissue are displayed on the display 104.
  • The processor 102 performs the measurement item acquisition step by receiving the measurement item to be measured input by the user.
  • Specifically, the processor 102 is connected to an input device, and the user selects the measurement item to be measured from the display 104 through an instruction input by the input device.
  • For example, during the scanning of a pregnant woman's abdomen, the display 104 displays the measurement items of the fetal head, fetal abdomen, and placenta, including biparietal diameter, occipitofrontal diameter, head circumference, abdominal circumference, femur length, humerus length, placenta thickness, transverse abdominal diameter, anteroposterior abdominal diameter, nuchal fold, and other measurement items.
  • If the user needs to measure the fetal femur length, the user inputs the instruction to measure the femur length through the input device communicatively connected with the processor 102.
  • The processor 102 receives the instruction, thereby obtaining the measurement item to be measured, namely the femur length.
  • In some embodiments, the anatomical structure has one measurement item, which is acquired as the measurement item to be measured in the measurement item acquisition step (S12).
  • In other embodiments, the anatomical structure has two or more measurement items; at least one of them can be acquired separately as the measurement item to be measured, or two or more of them can be used as measurement items to be measured at the same time. For example, during an obstetric ultrasound examination, in the ultrasound images of the fetus acquired in the image acquisition step, the fetus has measurement items such as biparietal diameter, head circumference, abdominal circumference, and femur length.
  • The user can input an instruction to measure the biparietal diameter through the input device in communication with the processor 102, thereby acquiring the biparietal diameter as the measurement item to be measured; or the user can input a single instruction to measure the biparietal diameter, head circumference, and femur length simultaneously, thereby acquiring these three measurement items at the same time.
  • Step S13: an image recognition step, recognizing, from the ultrasound images, a recognition image containing the measurement target corresponding to the measurement item to be measured, the recognition image being at least one of the ultrasound images.
  • Each ultrasound image of the anatomical structure acquired in the image acquisition step is examined to determine whether it contains the measurement target corresponding to the measurement item specified by the user.
  • An ultrasound image that contains the measurement target corresponding to the user-specified measurement item is a recognition image; an ultrasound image that does not contain it is discarded and does not enter the following automatic measurement step.
  • The user-specified measurement item is used as the measurement item to be measured, and the ultrasound image may be a sectional image or a three-dimensional image.
  • In practice, the doctor may first lay out the sections of all tissues and then perform unified measurements, or may display the sections of the tissues in different windows during the examination and take automatic measurements.
  • In existing approaches, the user is required to manually identify the ultrasound image containing the measurement target corresponding to the measurement item to be measured and then perform the measurement, for example by manually switching the window or selecting the active window.
  • This process increases user operations.
  • In contrast, the image recognition step (S13) automatically recognizes the recognition image containing the measurement target corresponding to the user-specified measurement item, without requiring the user to identify it manually; the automatically recognized image is then measured automatically, which reduces user operations and improves measurement efficiency.
  • For example, in the image acquisition step, ultrasound images of the fetus are acquired.
  • The fetus has three measurement items: biparietal diameter, head circumference, and femur length.
  • The biparietal diameter corresponds to the measurement target of the parietal bones on both sides of the fetal head, the head circumference corresponds to the measurement target from the occipital bone of the fetal head to the root of the nose at the forehead, and the femur length corresponds to the measurement target of the fetal femur.
  • If the acquired measurement item to be measured is the fetal femur length, and the ultrasound images of the fetus include an image containing the head and an image containing the femur, the image recognition step needs to distinguish between the two and identify the image containing the femur; the subsequent automatic measurement step is then performed on the recognized image containing the femur.
  • In some embodiments, the recognition image is a part of an ultrasound image, for example a partial area of a certain ultrasound image.
  • For example, in the image acquisition step, an ultrasound image of the fetal head is acquired.
  • The fetal head has two measurement items: biparietal diameter and head circumference.
  • The biparietal diameter corresponds to the measurement target of the parietal bones on both sides of the fetal head, and the head circumference corresponds to the measurement target from the occipital bone of the fetal head to the root of the nose at the forehead.
  • If the measurement item acquired in the measurement item acquisition step is the biparietal diameter, the recognition image recognized in the image recognition step contains only the area of the parietal bones on both sides of the fetal head.
  • In some embodiments, one or more measurement items to be measured are acquired in the measurement item acquisition step (S12), and multiple recognition images are recognized in the image recognition step (S13), wherein at least two recognition images contain the measurement target corresponding to the same measurement item to be measured.
  • In the subsequent automatic measurement step, the measurement target corresponding to the measurement item to be measured is measured in each recognition image, the results measured in the individual recognition images are averaged, and the average is taken as the measurement result for the measurement item to be measured.
  • For example, in the image acquisition step, multiple ultrasound images of the fetus are acquired, and the user needs to measure the fetal femur length.
  • In the image recognition step, two ultrasound images containing the femur are identified from the multiple ultrasound images; both are recognition images.
  • In the automatic measurement step, the measurement target is measured in both recognition images, the two measurement results are averaged, and the average value is taken as the measurement result for the measurement item to be measured.
  • In other embodiments, the final measurement result may also be determined by weighting the individual results or by calculating their variance, which is not specifically limited here.
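  • As a minimal sketch of this fusion step (a Python illustration; the function name combine_measurements, the sample femur lengths, and the confidence weights are illustrative assumptions, not values from this application), the per-image results can be combined as a plain or weighted average:

      import numpy as np

      def combine_measurements(values, weights=None):
          """Fuse per-image measurements of the same item into one result.

          values  -- the measurement obtained from each recognition image
          weights -- optional per-image confidence; None gives the plain average
          """
          values = np.asarray(values, dtype=float)
          if weights is None:
              return float(values.mean())
          return float(np.average(values, weights=np.asarray(weights, dtype=float)))

      print(combine_measurements([32.1, 31.7]))              # plain average, e.g. femur length in mm
      print(combine_measurements([32.1, 31.7], [0.9, 0.6]))  # weighted variant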
  • In some embodiments, at least two ultrasound images related to the anatomical structure of the biological tissue are acquired in the image acquisition step (S11); one or more measurement items of the anatomical structure are acquired in the measurement item acquisition step (S12); and in the image recognition step (S13), the ultrasound images are classified according to the measurement items of each of the at least one anatomical structure of the biological tissue, so as to obtain the ultrasound images containing the measurement targets corresponding to the measurement items, and the ultrasound image containing the measurement target corresponding to the measurement item to be measured is then selected from the classified ultrasound images as the recognition image.
  • The foregoing step of classifying the ultrasound images includes: comparing the image features of each ultrasound image with the image features of the database images in a preset database, wherein a database image contains the measurement target corresponding to a measurement item of at least one of the anatomical structures of the biological tissue; when the image features of the ultrasound image match the image features of a database image, the ultrasound image contains the measurement target contained in that database image.
  • For example, in obstetric ultrasound examination, multiple ultrasound images are acquired in the image acquisition step (S11), including ultrasound images of the fetal head and ultrasound images of the fetal abdomen.
  • The measurement items of the fetal head include biparietal diameter, head circumference, and so on.
  • The measurement items of the fetal abdomen include abdominal circumference, transverse abdominal diameter, and anteroposterior abdominal diameter.
  • Suppose the measurement item to be measured acquired in the measurement item acquisition step (S12) is the head circumference.
  • According to the measurement items of the fetal head and the measurement items of the fetal abdomen, the multiple ultrasound images acquired in the image acquisition step (S11) are classified by comparing their image features with the image features of the database images in the preset database: images containing the measurement target of the parietal regions on both sides of the fetal head corresponding to the biparietal diameter, images containing the measurement target from the occipital bone of the fetal head to the root of the nose corresponding to the head circumference, and images containing the measurement target of the abdomen corresponding to the abdominal circumference, transverse abdominal diameter, and anteroposterior abdominal diameter.
  • The database images contain at least these measurement targets (such as the parietal bones on both sides of the head, the occipital bone to the root of the nose, and the abdomen), and each matched ultrasound image corresponds to the measurement target contained in the matching database image.
  • The measurement item to be measured acquired in the measurement item acquisition step (S12), namely the head circumference, is then used directly to select, from the classified ultrasound images, the ultrasound image containing the region from the occipital bone of the head to the root of the nose; this image is the recognition image.
  • In some embodiments, the multiple ultrasound images acquired in the image acquisition step (S11) are classified in the image recognition step (S13), and the ultrasound images containing the measurement targets corresponding to the different measurement items are displayed on the display 104.
  • The recognition image recognized in the image recognition step (S13) is displayed on the display 104.
  • In some embodiments, the processor 102 is connected to the input device, the user selects the measurement item to be measured from the display 104 through an instruction input by the input device, and the processor 102 recognizes, according to the measurement item selected by the user, the recognition image from the ultrasound images obtained in the image acquisition step (S11); the recognized recognition image is displayed on the display 104.
  • In some embodiments, multiple ultrasound images are displayed on the display 104, and the recognition images are displayed in a manner that distinguishes them from the other ultrasound images, for example in a highlighted manner.
  • For example, two ultrasound images of the fetus acquired in the image acquisition step (S11) are displayed on the display 104: one contains the region of the fetal head from the occipital bone to the root of the nose, and the other contains the fetal abdomen. According to the user's instruction to measure the head circumference of the fetus, the processor 102 performs the image recognition step (S13), and then displays the recognition image containing the measurement target of the region from the occipital bone of the head to the root of the nose on the display 104 in a way that distinguishes it from the ultrasound image containing the abdomen.
  • In some embodiments, the image recognition step (S13) includes: comparing the image features of an ultrasound image with the image features of the database images containing the measurement item to be measured in the preset database, and judging whether the image features of the ultrasound image match the image features of a database image; when they match, it is determined that this ultrasound image is a recognition image containing the measurement target corresponding to the measurement item to be measured.
  • The database images included in the preset database are images calibrated for the measurement items of the anatomical structure, and they contain the measurement targets corresponding to those measurement items.
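  • A minimal sketch of this matching idea follows (Python; the cosine-similarity score, the 0.8 threshold, and the pre-computed feature vectors are assumptions standing in for whatever feature comparison is actually used): each ultrasound image feature is scored against the calibrated database features, and the best match above the threshold assigns the measurement target.

      import numpy as np

      def classify_by_database(image_feature, database, threshold=0.8):
          """database: list of (feature_vector, measurement_item) pairs
          built from the calibrated database images."""
          best_item, best_score = None, -1.0
          for feat, item in database:
              # cosine similarity as a stand-in for "image features match"
              score = float(np.dot(image_feature, feat) /
                            (np.linalg.norm(image_feature) * np.linalg.norm(feat) + 1e-9))
              if score > best_score:
                  best_item, best_score = item, score
          # below the threshold, the image contains no known measurement target
          return (best_item if best_score >= threshold else None), best_score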
  • In some embodiments, at least two ultrasound images related to the anatomical structure of the biological tissue are acquired in the image acquisition step (S11), and one measurement item of the anatomical structure is acquired in the measurement item acquisition step (S12). The image recognition step (S13) compares each ultrasound image acquired in the image acquisition step with the database images, in the preset database, containing the measurement target corresponding to that measurement item, and judges whether the image features of the currently compared ultrasound image match the image features of the database image. If they match, the currently compared ultrasound image is determined to be a recognition image; if not, the ultrasound image is discarded. The recognition image containing the measurement target corresponding to the measurement item to be measured is thereby determined from the multiple ultrasound images.
  • For example, in the image acquisition step (S11), two ultrasound images of human organs are acquired.
  • Human organs include the liver, kidneys, and so on; the liver has a measurement item of liver size, and the kidney has a measurement item of kidney size.
  • Suppose the measurement item to be measured acquired in the measurement item acquisition step (S12) is the liver size.
  • Each of the two ultrasound images acquired in the image acquisition step (S11) is compared with the database images containing the liver in the preset database (themselves ultrasound images of the liver), and it is judged whether the image features of the ultrasound image match the image features of the database image; if they match, the image is determined to be the recognition image, and if not, it is discarded.
  • In some embodiments, two or more ultrasound images related to the anatomical structure of the biological tissue are acquired in the image acquisition step (S11), the biological tissue has one or more anatomical structures, and one of those anatomical structures has two or more measurement items; the measurement items to be measured acquired in the measurement item acquisition step (S12) are two or more of those measurement items. The image recognition step (S13) then also includes: classifying the recognition images to obtain the recognition image containing the measurement target corresponding to each measurement item to be measured.
  • That is, in the process of matching the image features of the ultrasound images with the image features of the database images, it is also necessary to classify the recognition images according to the measurement items to be measured, to obtain the recognition image of the measurement target corresponding to each measurement item to be measured.
  • For example, the fetus has measurement items such as biparietal diameter, head circumference, abdominal circumference, and femur length.
  • Suppose the measurement items to be measured acquired in the measurement item acquisition step (S12) are the head circumference and the abdominal circumference.
  • Each of the two or more ultrasound images of the fetus acquired in the image acquisition step (S11) is compared with the database images of the head and the abdomen (themselves ultrasound images of the fetus) in the preset database; during this comparison, based on the image features of the database images containing the head and of those containing the abdomen, the recognition images are also classified, to determine the head recognition image corresponding to the head circumference and the abdomen recognition image corresponding to the abdominal circumference.
  • In some embodiments, a machine learning algorithm is used to learn, from the database images in the preset database, image features that can distinguish different measurement items.
  • The same machine learning method is used to extract the image features of the ultrasound images acquired in the image acquisition step (S11), the learned image features of the database images are matched against the image features of the ultrasound images, and the ultrasound images matching the learned image features are taken as the recognition images.
  • The ultrasound images are classified according to the learned image features that distinguish different measurement items, so that the ultrasound images are classified according to the measurement items of the anatomical structure and the recognition image corresponding to each measurement item to be measured is recognized.
  • Methods for extracting features with machine learning algorithms include, but are not limited to, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Haar feature extraction, texture feature extraction, and so on. The image features of the ultrasound images extracted by the machine learning algorithm are matched with the image features in the preset database to classify the ultrasound images.
  • The classification discriminators used include, but are not limited to, K-nearest neighbor (KNN), support vector machine (SVM), random forest, neural network, and other discriminators.
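  • The following sketch illustrates one such pipeline with scikit-learn (PCA feature extraction followed by an SVM discriminator, two of the techniques named above); the random training arrays and the three class labels are placeholders for flattened, calibrated database images.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X_db = rng.normal(size=(200, 64 * 64))   # flattened database images (placeholder)
      y_db = rng.integers(0, 3, size=200)      # assumed labels: 0=head, 1=abdomen, 2=femur

      # PCA compresses each image to a feature vector; the SVM classifies it
      clf = make_pipeline(PCA(n_components=32), SVC(kernel="rbf", probability=True))
      clf.fit(X_db, y_db)

      X_new = rng.normal(size=(1, 64 * 64))    # a newly acquired ultrasound image
      print(clf.predict(X_new), clf.predict_proba(X_new))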
  • In other embodiments, a deep learning method is used: stacked convolutional layers and fully connected layers are constructed to learn the image features of the database images in the preset database, learning image features that can distinguish different measurement items.
  • The ultrasound images are classified according to the learned image features, and the images recognized as containing the aforementioned measurement items are the recognition images.
  • Deep learning methods include, but are not limited to, the VGG network, the ResNet residual network, the Inception module, the AlexNet deep network, and so on.
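  • As a sketch of the stacked-convolution-plus-fully-connected idea (PyTorch; the layer sizes, 64x64 input, and three classes are arbitrary assumptions, and in practice a VGG- or ResNet-style backbone from the list above would replace the small feature stack):

      import torch
      import torch.nn as nn

      class SliceClassifier(nn.Module):
          def __init__(self, n_items=3):           # e.g. head / abdomen / femur
              super().__init__()
              self.features = nn.Sequential(       # stacked convolutional layers
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Sequential(     # fully connected layers
                  nn.Flatten(), nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
                  nn.Linear(128, n_items),
              )

          def forward(self, x):                    # x: (B, 1, 64, 64) grayscale
              return self.classifier(self.features(x))

      logits = SliceClassifier()(torch.randn(2, 1, 64, 64))
      print(logits.shape)                          # torch.Size([2, 3])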
  • Step S14: an automatic measurement step, measuring the measurement target in the recognition image.
  • In the automatic measurement step, the measurement target in the recognition image is measured to obtain the measurement result for the anatomical structure.
  • Different measurement items are measured differently. For example, in obstetric ultrasound examination, head circumference is usually measured by fitting an ellipse around the fetal skull halo, abdominal circumference by fitting an ellipse around the fetal abdomen, and femur length by measuring the distance between the two ends of the femur with a line segment.
  • In some embodiments, a target fitting method is used for automatic measurement.
  • In some embodiments, the automatic measurement step (S14) includes:
  • Step S141: extracting the contour of the measurement target corresponding to the measurement item to be measured by using an edge detection algorithm.
  • Edge detection algorithms include, but are not limited to, the Sobel operator, the Canny operator, and the like, which detect the contour of the measurement target based on the pixels and gray-scale weighted values of the ultrasound image.
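  • A minimal sketch of this contour-extraction step with OpenCV's Canny detector (the file name, blur kernel, and hysteresis thresholds are placeholder assumptions):

      import cv2

      img = cv2.imread("recognition_image.png", cv2.IMREAD_GRAYSCALE)  # assumed input file
      img = cv2.GaussianBlur(img, (5, 5), 0)        # suppress speckle noise before edge detection
      edges = cv2.Canny(img, 50, 150)               # hysteresis thresholds (tuning required)
      contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      target_contour = max(contours, key=cv2.contourArea)  # assume the target dominates the image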
  • Step S142: fitting the contour of the measurement target corresponding to the measurement item to be measured to obtain a fitting equation corresponding to the measurement item to be measured.
  • Detection algorithms for straight lines, circles, ellipses, and the like are used to fit the contour of the measurement target to obtain the fitting equation.
  • Fitting algorithms include, but are not limited to, least squares estimation, the Hough transform, the Radon transform, RANSAC, and other algorithms.
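  • Continuing the sketch, a least-squares ellipse fit (cv2.fitEllipse, standing in for the Hough/RANSAC variants also listed above) turns the contour from the previous snippet into a fitting equation; target_contour is assumed to contain at least five points:

      import cv2

      # returns the rotated-rectangle form of the fitted ellipse:
      # center (cx, cy), full axis lengths (w, h), rotation angle in degrees
      (cx, cy), (w, h), angle = cv2.fitEllipse(target_contour)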
  • Step S143: determining the measurement result through the fitting equation.
  • The measurement result is determined from the fitting equation obtained by the fitting algorithm in the preceding step. If the fitting equation obtained is a circle or an ellipse equation, it directly yields the result of the automatic measurement. If the fitting equation obtained is a straight line, the end points can be further located by combining the gray-level changes at the ends of the measurement target. Taking the measurement of femur length in obstetric ultrasound as an example, the femur appears as a bright linear structure; after detecting the straight line on which the femur lies, the two points with the largest gray-level gradient along that line can be detected as the two end points of the femur.
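  • A sketch of turning the fitting equation into a measurement result (Python; Ramanujan's approximation for the ellipse perimeter is a standard formula, while the pixel spacing and the gray-level profile are assumed inputs):

      import numpy as np

      def ellipse_circumference(w, h):
          """Perimeter of the fitted ellipse via Ramanujan's approximation."""
          a, b = w / 2.0, h / 2.0                   # semi-axes in pixels
          t = ((a - b) / (a + b)) ** 2
          return np.pi * (a + b) * (1 + 3 * t / (10 + np.sqrt(4 - 3 * t)))

      def femur_endpoints(profile):
          """Endpoints as the largest gray-level gradients along the fitted line."""
          grad = np.abs(np.gradient(profile.astype(float)))
          mid = len(grad) // 2
          return int(np.argmax(grad[:mid])), mid + int(np.argmax(grad[mid:]))

      px_mm = 0.2                                   # assumed pixel spacing, mm/pixel
      hc_mm = ellipse_circumference(240.0, 200.0) * px_mm   # head circumference in mm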
  • In some embodiments, the measurement result obtained in the automatic measurement step (S14) is displayed on the display 104.
  • The processor 102 performs the automatic measurement step (S14) on the recognition image recognized in the image recognition step (S13), and displays the measurement result on the recognition image shown on the display 104. For example, when performing an obstetric ultrasound examination, the processor 102 performs the image recognition step (S13) according to the head circumference selected by the user, and recognizes the image containing the measurement target of the region from the occipital bone of the head to the root of the nose corresponding to the head circumference.
  • This recognition image is displayed on the display 104 in a highlighted way that distinguishes it from the ultrasound image of the parietal regions on both sides of the fetal head corresponding to the biparietal diameter, and the specific head-circumference value obtained in the subsequent automatic measurement step (S14) is displayed in the upper right corner of the recognition image recognized in the image recognition step (S13).
  • The foregoing provides an exemplary introduction to a method, performed by the processor, for automatically measuring an anatomical structure based on a measurement item to be measured that is designated by the user.
  • From the acquired ultrasound images of the anatomical structure, the recognition image containing the measurement target corresponding to the measurement item to be measured is automatically recognized, and the measurement target in the recognition image is then measured directly.
  • The user only needs to specify the measurement item to be measured; the user neither needs to identify the ultrasound image containing the corresponding measurement target, nor needs to manually measure the measurement target in the ultrasound image.
  • This simplifies the operation process and improves the efficiency of measuring anatomical structures.
  • In some embodiments, a positioning step is added after the image recognition step to eliminate the influence of the structures surrounding the measurement target on the measurement result in the measurement step.
  • Referring to FIG. 4, a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application is shown, in which the image acquisition step (S21), the measurement item acquisition step (S22), and the image recognition step (S23) are consistent with the image acquisition step (S11), the measurement item acquisition step (S12), and the image recognition step (S13) shown in FIG. 2, except that a positioning step (S24) is added after the image recognition step (S23), and the measurement target located in the positioning step is measured in the automatic measurement step (S25). The positioning step (S24) and the automatic measurement step (S25) shown in FIG. 4 are described in detail below.
  • Step S24: a positioning step, positioning the measurement target in the recognition image.
  • The above image recognition step (S13) only obtains the recognition image containing the measurement target corresponding to the measurement item to be measured; the position of the measurement target within the image is not yet known.
  • Measuring the measurement target directly in the recognition image requires detecting the entire image, and the resulting edge detection is easily affected by the structures around the measurement target. For this reason, the positioning step (S24) locates the measurement target, and the automatic measurement step then fits only the located measurement target, which reduces the influence of the surrounding structures and makes the measurement result more accurate.
  • In some embodiments, the image features of the recognition image are compared and analyzed against the image features of the database images, in the preset database, containing the measurement target corresponding to the measurement item to be measured, so as to locate the measurement target in the recognition image; each database image contains a calibration result corresponding to the measurement target, and the measurement target is the area consistent with the calibration result.
  • In some embodiments, the calibration result includes the ROI box of the measurement target corresponding to the measurement item to be measured.
  • The positioning step then includes: extracting the image features inside a sliding window using a sliding-window-based method, comparing the image features in the sliding window with the image features of the calibration result, and judging whether they match; when the image features in the sliding window match the image features of the calibration result, the current sliding window is determined to be the measurement target.
  • Specifically, a machine learning algorithm is used to learn the image features inside the ROI boxes of the calibration results of the database images in the preset database, where the learned in-ROI image features are calibration-result features that distinguish the ROI area of the measurement target from non-ROI areas.
  • The same machine learning algorithm is used to extract the image features of each sliding window obtained while traversing the recognition image recognized in the image recognition step (S13) with the sliding window.
  • Methods for extracting features with machine learning algorithms include, but are not limited to, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Haar feature extraction, texture feature extraction, and so on.
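  • A minimal sketch of the sliding-window localization (Python; score_fn is an assumed stand-in for the trained ROI/non-ROI discriminator, and the window size and stride are placeholders):

      import numpy as np

      def locate_by_sliding_window(image, win, stride, score_fn):
          """Return the ROI box (x, y, w, h) whose window best matches the ROI features."""
          h, w = image.shape
          best_box, best_score = None, -np.inf
          for y in range(0, h - win + 1, stride):
              for x in range(0, w - win + 1, stride):
                  s = score_fn(image[y:y + win, x:x + win])  # ROI/non-ROI score
                  if s > best_score:
                      best_box, best_score = (x, y, win, win), s
          return best_box, best_score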
  • In other embodiments, the calibration result includes the ROI box of the measurement target corresponding to the measurement item to be measured, and the positioning step includes: performing bounding-box regression on the recognition image according to the calibration results in the database images containing the measurement target, to obtain a box area; the box area is the measurement target.
  • Specifically, a deep learning method is used to construct stacked convolutional layers and fully connected layers, performing image feature learning and parameter regression on the ROI boxes of the calibration results of the database images, in the preset database, containing the measurement target corresponding to the measurement item; the learned in-ROI image features are calibration-result features that distinguish the ROI area of the measurement target from non-ROI areas.
  • The neural network then directly regresses the box area of interest in the recognition image, and this box area is the measurement target to be measured.
  • Such neural network algorithms include, but are not limited to, R-CNN, Fast R-CNN, Faster R-CNN, SSD, YOLO, and other detection algorithms.
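  • As an illustration of this bounding-box-regression route, the torchvision Faster R-CNN detector can be run on a recognition image (the COCO-pretrained weights and the random input are placeholders; a real system would be trained on the calibrated ultrasound ROI boxes):

      import torch
      import torchvision

      model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
      model.eval()
      image = torch.rand(3, 600, 800)               # stand-in for a recognition image
      with torch.no_grad():
          out = model([image])[0]                   # boxes, labels, scores per detection
      if len(out["scores"]):
          roi = out["boxes"][out["scores"].argmax()]  # highest-scoring regressed box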
  • In still other embodiments, the calibration result includes a mask that accurately segments the measurement target, and the positioning step includes: according to the image features of the calibration results in the database images containing the measurement target, using a semantic segmentation algorithm to identify, in the recognition image, the segmentation mask of the measurement target that is consistent with the calibration result.
  • Specifically, a deep learning method is used to perform end-to-end semantic segmentation of the recognition image: stacked convolutional layers are constructed, and deconvolution (transposed convolution) layers are used for upsampling, with the masks that accurately segment the measurement targets in the preset database serving as supervision, so that the segmentation mask of the measurement target corresponding to the measurement item to be measured is obtained directly from the recognition image.
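  • A minimal end-to-end segmentation sketch of this kind (PyTorch; the layer sizes and 128x128 input are assumptions): stacked convolutions downsample, and a transposed-convolution (deconvolution) layer upsamples back to a per-pixel mask of the measurement target.

      import torch
      import torch.nn as nn

      seg = nn.Sequential(
          nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # downsample
          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
          nn.ConvTranspose2d(32, 1, kernel_size=2, stride=2),           # upsample to input size
      )
      mask_logits = seg(torch.randn(1, 1, 128, 128))
      mask = torch.sigmoid(mask_logits) > 0.5       # binary segmentation mask
      print(mask.shape)                             # torch.Size([1, 1, 128, 128])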
  • In some embodiments, measuring the measurement target (S25) includes: performing target fitting on the measurement target to obtain a fitting equation of the measurement target, and determining the measurement result of the measurement target through the fitting equation.
  • In some embodiments, the calibration result in the positioning step (S24) includes the ROI box of the measurement target corresponding to the measurement item to be measured, and the box area of the measurement target is located in the recognition image.
  • Target fitting is performed on the measurement target inside the box area, fitted line, circle, or ellipse equations are obtained, and the measurement result is computed from these equations. Performing target fitting inside the box area of the measurement target reduces the interference of structures outside the box area with the fitting, and improves the accuracy of the measurement result.
  • In other embodiments, the calibration result in the positioning step (S24) includes a mask that accurately segments the measurement target corresponding to the measurement item, and in measuring the measurement target (S25), the edge of the segmentation mask of the measurement target consistent with the calibration result in the recognition image is fitted; equations such as a straight line, a circle, or an ellipse are obtained, and the measurement result is computed from these equations.
  • Performing target fitting on the edge of the segmentation mask consistent with the calibration result reduces the fitting error and improves the accuracy of the measurement result.
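  • A sketch of fitting on the mask edge with OpenCV (the toy elliptical mask stands in for a real segmentation result): only the mask contour enters the fit, so pixels outside the mask cannot perturb it.

      import cv2
      import numpy as np

      mask = np.zeros((256, 256), np.uint8)
      cv2.ellipse(mask, (128, 128), (80, 60), 30, 0, 360, 255, -1)  # toy filled mask

      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
      (cx, cy), (w, h), angle = cv2.fitEllipse(max(contours, key=cv2.contourArea))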
  • This embodiment provides a method for automatically measuring an anatomical structure based on a measurement item to be measured that is designated by the user.
  • From the acquired ultrasound images of the anatomical structure, the recognition image containing the measurement target corresponding to the measurement item to be measured is automatically recognized, and the measurement target in the recognition image is then measured directly.
  • The user only needs to specify the measurement item to be measured; the user neither needs to identify the ultrasound image containing the corresponding measurement target, nor needs to manually measure the measurement target in the ultrasound image.
  • This simplifies the operation process and improves the efficiency of measuring anatomical structures.
  • The foregoing embodiments give an exemplary introduction to methods for automatically measuring an anatomical structure based on a measurement item to be measured that is designated by the user.
  • The following embodiment provides a method for measuring all measurable items of the anatomical structure without the user specifying a measurement item to be measured; the user does not need to specify anything.
  • After the ultrasound images of the anatomical structure are obtained, the ultrasound images are directly and automatically recognized to identify the recognition image containing the measurement target, and the measurement target in the recognition image is automatically measured, where the measurement target corresponds to the measurement items of the anatomical structure.
  • The user does not need to perform any operation on the measurement items, which further simplifies the operation process and improves the efficiency of measuring the anatomical structure.
  • Referring to FIG. 5, a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application is shown.
  • The automatic measurement method is used to automatically measure the anatomical structure of the biological tissue under examination after the ultrasonic echo is processed, as shown in FIG. 5.
  • The method includes:
  • Step S31: an image acquisition step, acquiring at least two ultrasound images, at least one of the ultrasound images being related to at least one anatomical structure of the biological tissue.
  • The automatic measurement method provided in this embodiment is used to identify, from at least two ultrasound images related to the anatomical structure of a biological tissue, the ultrasound image containing a measurement target, and to measure the measurement target in that ultrasound image, where the measurement target corresponds to the measurement items possessed by the anatomical structure.
  • All measurement items of the anatomical structure are measured as measurement items to be measured.
  • In existing approaches, the measurement items to be measured are specified by the user, and the user manually identifies the ultrasound image containing the corresponding measurement target before performing the measurement; this adds the operation of specifying the measurement item. Since the measurement items a user wants to measure are often fixed, it is feasible to simply measure all the measurement items of the anatomical structure.
  • In this embodiment, all measurement items of the anatomical structure are measured without the user specifying them, which reduces the user's operation steps and simplifies the measurement of the anatomical structure.
  • Step S32: an image recognition step, recognizing, from the ultrasound images, a recognition image containing a measurement target corresponding to a measurement item, wherein the recognition image is at least one of the ultrasound images. Since this embodiment measures all the measurement items of the anatomical structure, every measurement target in each recognition image identified from the ultrasound images needs to be measured; that is, all measurement items of the anatomical structure are measurement items to be measured.
  • In some embodiments, the at least one anatomical structure of the biological tissue has at least one characteristic measurement item, and the image recognition step (S32) includes: comparing the image features of each ultrasound image with the image features of the database images, in the preset database, containing the measurement target corresponding to any one of the characteristic measurement items, and judging whether the image features of the ultrasound image match the image features of a database image. When they match, the ultrasound image is determined to be a recognition image, the measurement item corresponding to the measurement target contained in the recognition image is a measurement item to be measured, and the measurement target is the one contained in the database image whose image features the recognition image matches.
  • For example, each of the two ultrasound images acquired in the image acquisition step (S31) is compared with the database images, in the preset database, containing the measurement target corresponding to liver size (themselves ultrasound images of the liver), and it is judged whether the image features of the ultrasound image match the image features of the database image. If they match, the image is determined to be a recognition image; if not, it is discarded. The liver contained in the matching recognition image is the measurement target, and the characteristic measurement item of liver size is taken as the measurement item to be measured.
  • In some embodiments, one or more of the at least one anatomical structure of the biological tissue has at least two characteristic measurement items, and the recognition image contains at least two measurement targets corresponding respectively to at least two measurement items to be measured.
  • The image recognition step (S32) then includes: classifying the at least two measurement targets in the recognition image, so that each of the at least two measurement targets corresponds to one of the at least two characteristic measurement items.
  • That is, when the image recognition step (S32) recognizes a recognition image containing multiple measurement targets, the measurement targets contained in the recognition image need to be classified according to the at least two measurement items of the anatomical structure, obtaining measurement targets in one-to-one correspondence with those measurement items.
  • For example, two or more ultrasound images of the fetus are acquired in the image acquisition step (S31).
  • The fetus has measurement items such as biparietal diameter, head circumference, abdominal circumference, and femur length.
  • Each of the acquired ultrasound images of the fetus is compared with the database images, in the preset database, containing the measurement target corresponding to at least one of biparietal diameter, head circumference, abdominal circumference, and femur length (themselves ultrasound images of the fetus), to determine the recognition images.
  • The measurement targets corresponding to the biparietal diameter and the head circumference (the parietal regions on both sides of the fetal head for the biparietal diameter, and the region from the occipital bone to the root of the nose for the head circumference) often appear in the same ultrasound image. After the recognition image containing the measurement targets corresponding to these two measurement items is determined, the two measurement targets in the recognition image must be further distinguished, to determine which corresponds to the head circumference and which to the biparietal diameter.
  • In one embodiment, a machine learning algorithm is used to learn, from the database images in the preset database, the image features that can distinguish different measurement items.
  • The same machine learning method is used to extract the image features of the ultrasound images acquired in the image acquisition step (S31); the learned image features of the database images are matched against the image features of the ultrasound images, and the ultrasound images that match the learned image features are taken as recognition images (a minimal code sketch follows the lists of feature extraction methods and discriminators below).
  • The measurement targets are then classified according to the learned image features that distinguish different measurement items, so that the measurement targets in the recognition image are recognized according to the measurement items of the anatomical structure.
  • Methods of extracting features by machine learning algorithms include, but are not limited to, the Principal Component Analysis (PCA) method, the Linear Discriminant Analysis (LDA) method, the Haar feature extraction method, the texture feature extraction method, and so on.
  • The classification discriminators used include, but are not limited to, K-nearest neighbor (KNN), support vector machine (SVM), random forest, neural network, and other discriminators.
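  • As an illustration of the feature learning and matching described above, the following is a minimal sketch combining one feature extractor (PCA) and one discriminator (SVM) from the lists above, in Python with scikit-learn. The 50-component feature size, the match threshold, and the label encoding are assumptions for illustration, not values from the original.

```python
# Minimal sketch: learn discriminative features from database images with
# PCA, train an SVM discriminator, then accept/discard ultrasound images
# as recognition images by match score. Shapes and threshold are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def train_recognizer(db_images: np.ndarray, db_labels: np.ndarray):
    """db_images: (n_samples, h*w) flattened database images;
    db_labels: measurement-item label per image (e.g. 0=HC, 1=BPD)."""
    pca = PCA(n_components=50)              # learn image features
    feats = pca.fit_transform(db_images)
    clf = SVC(probability=True).fit(feats, db_labels)
    return pca, clf

def recognize(pca, clf, ultrasound_image: np.ndarray, threshold=0.8):
    """Return the matched measurement item, or None (image discarded)."""
    feat = pca.transform(ultrasound_image.reshape(1, -1))
    probs = clf.predict_proba(feat)[0]
    best = int(np.argmax(probs))
    return best if probs[best] >= threshold else None
```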
  • In another embodiment, a deep learning method is used: stacked convolutional layers and fully connected layers are constructed to learn the image features of the database images in the preset database, that is, image features that can distinguish different measurement targets.
  • The learned features are used to recognize, among the ultrasound images, the recognition image.
  • The measurement targets in the recognition image are then classified according to the learned image features.
  • Applicable deep learning networks include, but are not limited to, the VGG network, the ResNet residual network, the Inception module, the AlexNet deep network, and so on.
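  • A minimal sketch of such a stacked convolutional + fully connected classifier, written in PyTorch, is shown below; the layer sizes, the 256×256 input, and the two-class setup are assumptions for illustration rather than the named networks (VGG, ResNet, etc.) above.

```python
# Minimal sketch: stacked convolutional layers followed by fully
# connected layers, classifying an ultrasound image by measurement target.
import torch
import torch.nn as nn

class PlaneClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(       # stacked convolutional layers
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(     # fully connected layers
            nn.Flatten(),
            nn.Linear(32 * 64 * 64, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 256, 256) grayscale ultrasound images
        return self.classifier(self.features(x))

logits = PlaneClassifier()(torch.randn(1, 1, 256, 256))  # (1, n_classes)
```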
  • Step S25, an automatic measurement step: measuring the measurement target in the recognition image.
  • The measurement target in the recognition image is measured to obtain the measurement result of the anatomical structure.
  • In one embodiment, edge detection is performed directly on the measurement target in the recognition image, and automatic measurement is carried out by fitting the target to obtain a target fitting equation.
  • the automatic measurement step (S25) includes:
  • the contour corresponding to the measurement target is extracted by an edge detection algorithm.
  • Edge detection algorithms include, but are not limited to, using the Sobel operator, the Canny operator, and the like to detect the contour of the measurement target based on the pixels and gray-scale weighted values of the ultrasound image.
  • The contour corresponding to the measurement target is fitted to obtain a fitting equation corresponding to the measurement target; fitting algorithms for lines, circles, ellipses, and the like are used to fit the contour of the measurement item to be measured to obtain the fitting equation.
  • The fitting algorithms include, but are not limited to, least squares estimation, the Hough transform, the Radon transform, RANSAC, and other algorithms.
  • the measurement result of the measurement target is determined by the fitting equation.
  • The measurement result is determined according to the fitting equation obtained by the fitting algorithm in the above steps. If the fitting equation obtained is a circle or ellipse equation, it directly constitutes the result of the automatic measurement. If the fitting equation obtained is a straight line, the end points can be further located in combination with the gray-level change at the ends of the measurement target to complete the automatic measurement. Taking the measurement of femur length in obstetric ultrasound as an example, the femur appears as a bright linear structure; after the straight line on which the femur lies is detected, the two points with the largest gray-level gradient along that line can be taken as the two end points of the femur. A minimal sketch of this line detection and length measurement follows.
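  • The sketch below, using OpenCV, detects the bright linear structure with Canny edge detection and a probabilistic Hough transform and measures its length; taking the extremes of the detected segment is a simplification of the gray-gradient end-point search described above, and all thresholds are assumptions.

```python
# Minimal sketch: edge detection + line fitting + end-point length for a
# femur-like bright linear structure. Thresholds are illustrative only.
import cv2
import numpy as np

def measure_femur_px(img: np.ndarray) -> float:
    """img: 8-bit grayscale ultrasound image; returns length in pixels."""
    edges = cv2.Canny(img, 50, 150)                     # contour extraction
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=10)
    if lines is None:
        raise ValueError("no linear structure detected")
    # keep the longest detected segment as the femur candidate
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return float(np.hypot(x2 - x1, y2 - y1))            # end-to-end length
```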
  • The method for automatically measuring anatomical structures performed by the processor in this embodiment exemplifies automatic measurement based on the user specifying the measurement item to be measured.
  • This embodiment also provides a method for measuring all measurable items of the anatomical structure when the user does not specify a measurement item to be measured; no user designation is required.
  • After the ultrasound image of the anatomical structure is obtained, the ultrasound image is automatically recognized to identify the recognition image containing the measurement target, and the measurement target in the recognition image is automatically measured, where the measurement target corresponds to a measurement item of the anatomical structure.
  • The user does not need to perform any operation on the measurement items, which further simplifies the operation flow and improves the efficiency of measuring the anatomical structure.
  • In one embodiment, a positioning step is added after the image recognition step to eliminate the influence of the structures surrounding the measurement target on the result of the measurement step.
  • Referring to FIG. 6, a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application is shown.
  • The image acquisition step (S41) and the image recognition step (S42) are consistent with the image acquisition step (S31) and the image recognition step (S32) shown in FIG. 5.
  • The difference is that a positioning step (S43) is added after the image recognition step (S42), after which an automatic measurement step (S44) is performed.
  • Step S43, a positioning step: positioning the measurement target in the recognition image.
  • In the image recognition step (S42), only the recognition image containing the measurement target is obtained; the position of the measurement target to be measured is not yet known. Measuring the measurement items directly in the recognition image would require detecting over the entire image, and the resulting edge detection is easily affected by the structures surrounding the measurement items. For this reason, the positioning step (S43) locates the measurement target in the recognition image, and the automatic measurement step then fits the located target, which reduces the influence of surrounding structures and makes the measurement result more accurate.
  • In one embodiment, the image features of the recognition image are compared and analyzed against the image features of the database images corresponding to the measurement item to be measured in the preset database, so as to position the measurement target in the recognition image, where the database image contains a calibration result corresponding to the target to be measured, and the measurement target is the region consistent with the calibration result.
  • In one embodiment, the calibration result includes the ROI box of the measurement target corresponding to the measurement item to be measured.
  • The positioning step includes: extracting the image features within a sliding window using a sliding-window-based method, comparing the image features in the sliding window with the image features of the calibration result to judge whether they match, and, when the image features in the sliding window match the image features of the calibration result, determining that the current sliding window is the measurement target.
  • Specifically, a machine learning algorithm is used to learn the image features within the ROI boxes of the calibration results of the database images in the preset database, where the learned in-ROI image features are the calibration-result image features that distinguish the ROI region of the measurement target from non-ROI regions.
  • The same machine learning algorithm is used to extract the image features within each sliding window obtained while traversing the recognition image recognized in the image recognition step (S42) with the sliding window; a minimal sliding-window sketch follows the list of feature extraction methods below.
  • Methods of extracting features by machine learning algorithms include, but are not limited to, the Principal Component Analysis (PCA) method, the Linear Discriminant Analysis (LDA) method, the Haar feature extraction method, the texture feature extraction method, and so on.
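  • A minimal sketch of the sliding-window traversal is given below; the window size, stride, and the externally supplied match-scoring function (standing in for the learned ROI features above) are assumptions.

```python
# Minimal sketch: traverse the recognition image with a sliding window and
# return the window whose features best match the calibration result.
import numpy as np

def locate_target(image: np.ndarray, score_fn, win=(64, 64), stride=16):
    """score_fn(patch) -> match score against the learned ROI features.
    Returns (row, col, score) of the best-matching window."""
    best = (0, 0, -np.inf)
    h, w = image.shape
    for r in range(0, h - win[0] + 1, stride):
        for c in range(0, w - win[1] + 1, stride):
            patch = image[r:r + win[0], c:c + win[1]]
            s = score_fn(patch)          # compare with calibration features
            if s > best[2]:
                best = (r, c, s)
    return best
```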
  • In another embodiment, the calibration result includes the ROI box of the measurement target corresponding to the measurement item to be measured, and the positioning step includes: performing bounding-box regression on the recognition image according to the database images containing the measurement target corresponding to the measurement item to be measured, so as to obtain a box region, the box region being the measurement target.
  • Specifically, a deep learning method is used to construct stacked convolutional layers and fully connected layers to learn and parameterize the image features within the ROI boxes of the calibration results of the database images corresponding to the measurement item to be measured in the preset database.
  • The learned in-ROI image features are the calibration-result image features that distinguish the ROI region of the measurement target from non-ROI regions.
  • The neural network then directly regresses the bounding-box region of interest in the recognition image, and this box region is the measurement target to be measured.
  • Such neural network algorithms include, but are not limited to, R-CNN, Fast R-CNN, Faster R-CNN, SSD, YOLO, and other object detection algorithms; a minimal sketch using one of them follows.
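  • Below is a minimal inference sketch using torchvision's Faster R-CNN (one of the detectors listed above); it assumes a recent torchvision and that the detector has already been fine-tuned on the ROI calibrations of the preset database, which is not shown.

```python
# Minimal sketch: bounding-box regression with an off-the-shelf detector;
# the highest-scoring box is taken as the measurement target's ROI.
import torch
import torchvision

# 2 classes: measurement target / background (training not shown)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=2)
model.eval()

with torch.no_grad():
    frame = torch.rand(3, 512, 512)        # stand-in ultrasound frame
    pred = model([frame])[0]               # dict: boxes, labels, scores
    if len(pred["boxes"]) > 0:
        x1, y1, x2, y2 = pred["boxes"][0].tolist()  # top-scoring ROI box
```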
  • In yet another embodiment, the calibration result includes a mask that accurately segments the measurement target corresponding to the measurement item, and the positioning step includes: according to the mask of the measurement target corresponding to the measurement item to be measured, using a semantic segmentation algorithm to identify, in the recognition image, the segmentation mask of the measurement target consistent with the calibration result.
  • Specifically, an end-to-end semantic segmentation network based on a deep learning method is used to segment the recognition image: stacked convolutional layers are constructed, with deconvolution (transposed convolution) layers used for upsampling, so that the network learns the accurate segmentation masks of the calibration regions corresponding to the measurement target in the preset database, and the segmentation mask of the measurement target is then obtained directly from the recognition image according to the network output.
  • Applicable semantic segmentation networks include, but are not limited to, Fully Convolutional Networks (FCN), U-Net convolutional networks, and the like; a minimal sketch follows.
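  • The sketch below shows the shape of such an end-to-end network: convolutional downsampling followed by a transposed-convolution (deconvolution) upsampling back to a per-pixel mask. The depth and channel counts are assumptions; real FCN/U-Net models are deeper, and U-Net adds skip connections.

```python
# Minimal sketch: end-to-end segmentation producing a mask logit map the
# same size as the input recognition image.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(           # convolutional downsampling
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.up = nn.ConvTranspose2d(32, 1, kernel_size=2, stride=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # returns (batch, 1, H, W) logits; sigmoid > 0.5 gives the mask
        return self.up(self.down(x))

mask_logits = TinySegNet()(torch.randn(1, 1, 256, 256))  # (1, 1, 256, 256)
```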
  • In one embodiment, the recognition image obtained in the image recognition step (S42) includes at least two measurement targets corresponding to the measurement items to be measured.
  • The positioning step (S43) then further includes: classifying the measurement targets so that they correspond one-to-one with the at least two measurement items to be measured. Since the recognition image includes at least two measurement targets, while positioning the measurement items it is necessary not only to locate the position of each measurement target in the recognition image, but also to distinguish the measurement-target category to which each location belongs, so that after the subsequent automatic measurement step each measurement result is associated with its corresponding measurement item.
  • For example, the recognition image of the fetus recognized in the image recognition step (S42) contains both the measurement target corresponding to the head circumference (the region from the occipital bone of the fetal head to the nasal root of the forehead) and the measurement target corresponding to the biparietal diameter (the parietal region on both sides of the fetal head).
  • During measurement after positioning, it would not otherwise be known whether head circumference or biparietal diameter is being measured; the occipital-to-nasal-root region must be distinguished from the parietal region on both sides of the fetal head, yielding measurement targets corresponding to the head circumference and the biparietal diameter respectively.
  • In one embodiment, the image features of the measurement target are compared with the image features of the database images calibrated for the measurement target corresponding to the measurement item to be measured in the preset database, to determine whether the image features of the measurement target match those of the calibration result; when they match, the measurement target is determined to correspond to that measurement item to be measured.
  • For example, the image features of each measurement target determined in the recognition image of the fetus positioned in the positioning step (S43) are compared with the image features of the head-circumference calibration results of the database images containing the head circumference in the preset database; when the image features of a measurement target match the image features of the head-circumference calibration result, the measurement target currently being compared is determined to be the head circumference.
  • Specifically, a machine learning algorithm is used to learn the image features of the database images in the preset database that distinguish different calibration areas; the same machine learning method is used to extract the image features of the measurement targets positioned in the recognition image in the positioning step (S43). The learned image features of the database images are matched against the image features of the measurement targets, and a measurement target matching the learned image features is taken to correspond to the measurement target of the current calibration area.
  • The measurement targets are thus classified according to the learned image features that distinguish different calibration areas, so that the measurement targets in the recognition image are recognized according to the measurement items of the anatomical structure.
  • Methods of extracting features by machine learning algorithms include, but are not limited to, the Principal Component Analysis (PCA) method, the Linear Discriminant Analysis (LDA) method, the Haar feature extraction method, the texture feature extraction method, and so on.
  • The classification discriminators used include, but are not limited to, K-nearest neighbor (KNN), support vector machine (SVM), random forest, neural network, and other discriminators.
  • the step of measuring a measurement target includes performing target fitting on the measurement target to obtain a fitting equation of the measurement target; and determining the measurement result of the measurement target through the fitting equation.
  • In one embodiment, the calibration result in the positioning step (S43) includes the ROI box of the measurement target corresponding to the measurement item to be measured.
  • When measuring the measurement target, the automatic measurement step (S44) performs target fitting on the measurement target within the box region of the measurement target to obtain a fitted line, circle, or ellipse equation, and the measurement result is obtained by evaluating that equation; a minimal sketch is given below.
  • Performing target fitting within the box region of the measurement target reduces the interference of other structures outside the box region on the target fitting and improves the accuracy of the measurement result.
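  • The sketch below fits an ellipse inside a located ROI box with OpenCV and converts the fitted equation into a circumference-type result (as for head circumference) via Ramanujan's approximation; the Canny thresholds are assumptions.

```python
# Minimal sketch: edge detection restricted to the ROI box, ellipse
# fitting, then circumference from the fitted semi-axes.
import cv2
import numpy as np

def measure_in_roi(img: np.ndarray, roi: tuple) -> float:
    """img: 8-bit grayscale image; roi: (x, y, w, h) from positioning."""
    x, y, w, h = roi
    patch = img[y:y + h, x:x + w]          # fit only inside the ROI box
    edges = cv2.Canny(patch, 50, 150)
    pts = cv2.findNonZero(edges)
    if pts is None or len(pts) < 5:        # fitEllipse needs >= 5 points
        raise ValueError("not enough edge points in ROI")
    (_, _), (d1, d2), _ = cv2.fitEllipse(pts)   # axes are full diameters
    a, b = d1 / 2.0, d2 / 2.0                   # semi-axes in pixels
    hh = ((a - b) ** 2) / ((a + b) ** 2)        # Ramanujan's h parameter
    return float(np.pi * (a + b) * (1 + 3 * hh / (10 + np.sqrt(4 - 3 * hh))))
```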
  • In another embodiment, the calibration result in the positioning step (S43) includes a mask that accurately segments the measurement target corresponding to the measurement item.
  • When the measurement target is measured, target fitting is performed on the edge of the segmentation mask of the measurement target consistent with the calibration result in the recognition image, fitting a line, circle, or ellipse equation, and the measurement result is obtained by evaluating that equation.
  • Performing target fitting on the edge of the segmentation mask of the measurement target reduces the fitting error of the target fitting and improves the accuracy of the measurement result; a minimal sketch follows.
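  • The sketch below illustrates fitting on the mask edge: the outer contour of a binary segmentation mask is extracted and fitted with an ellipse. OpenCV 4's findContours return signature is assumed.

```python
# Minimal sketch: reduce a binary segmentation mask to its outer contour
# and fit the contour with an ellipse equation.
import cv2
import numpy as np

def fit_mask_edge(mask: np.ndarray):
    """mask: binary (0/255) uint8 segmentation of the measurement target."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        raise ValueError("empty mask")
    contour = max(contours, key=cv2.contourArea)   # outer edge of target
    if len(contour) < 5:                           # fitEllipse minimum
        raise ValueError("contour too small to fit")
    return cv2.fitEllipse(contour)    # ((cx, cy), (d1, d2), angle)
```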
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division into units is only a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the various component embodiments of the present application may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • a microprocessor or a digital signal processor may be used in practice to implement some or all of the functions of some modules according to the embodiments of the present application.
  • This application can also be implemented as a device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • a program for implementing the present application may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Gynecology & Obstetrics (AREA)
  • Pregnancy & Childbirth (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

Disclosed are an automatic measurement method and an ultrasound imaging system for an anatomical structure. The automatic measurement method comprises: an image acquisition step (11): acquiring an ultrasound image, the ultrasound image relating to at least one anatomical structure of living body tissue, the anatomical structure(s) having at least one measurement item; a measurement item acquisition step (12): acquiring a measurement item to be measured; an image recognition step (13): recognizing, from the ultrasound image, a recognition image containing a measurement target corresponding to the measurement item to be measured; and an automatic measurement step (14): measuring the measurement target in the recognition image. An ultrasound image containing a measurement target corresponding to a measurement item to be measured is recognized according to that measurement item, and the measurement target in the ultrasound image is measured automatically, so that the efficiency of measuring an anatomical structure is improved.
PCT/CN2019/126388 2019-12-18 2019-12-18 Procédé de mesure automatique et système d'imagerie ultrasonore pour structure anatomique WO2021120065A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/126388 WO2021120065A1 (fr) 2019-12-18 2019-12-18 Procédé de mesure automatique et système d'imagerie ultrasonore pour structure anatomique
CN202011506495.7A CN112998755A (zh) 2019-12-18 2020-12-18 解剖结构的自动测量方法和超声成像系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/126388 WO2021120065A1 (fr) 2019-12-18 2019-12-18 Procédé de mesure automatique et système d'imagerie ultrasonore pour structure anatomique

Publications (1)

Publication Number Publication Date
WO2021120065A1 true WO2021120065A1 (fr) 2021-06-24

Family

ID=76383499

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126388 WO2021120065A1 (fr) 2019-12-18 2019-12-18 Procédé de mesure automatique et système d'imagerie ultrasonore pour structure anatomique

Country Status (2)

Country Link
CN (1) CN112998755A (fr)
WO (1) WO2021120065A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113974688B (zh) * 2021-09-18 2024-04-16 深圳迈瑞生物医疗电子股份有限公司 超声成像方法和超声成像系统
CN114376614B (zh) * 2021-11-08 2024-03-12 中国医科大学附属第一医院 颈动脉超声测量的辅助方法及超声设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150173650A1 (en) * 2013-12-20 2015-06-25 Samsung Medison Co., Ltd. Method and apparatus for indicating point whose location has been adjusted based on type of caliper in medical image
CN105555198A (zh) * 2014-03-20 2016-05-04 深圳迈瑞生物医疗电子股份有限公司 自动识别测量项的方法、装置及一种超声成像设备
WO2018129737A1 (fr) * 2017-01-16 2018-07-19 深圳迈瑞生物医疗电子股份有限公司 Procédé de mesure de paramètres dans une image ultrasonore et système d'imagerie ultrasonore
CN109044398A (zh) * 2018-06-07 2018-12-21 深圳华声医疗技术股份有限公司 超声系统成像方法、装置及计算机可读存储介质
CN109276275A (zh) * 2018-10-26 2019-01-29 深圳开立生物医疗科技股份有限公司 一种超声图像标准切面提取及测量方法和超声诊断设备
CN109589140A (zh) * 2018-12-26 2019-04-09 深圳开立生物医疗科技股份有限公司 一种超声测量多项目处理方法和超声诊断系统

Also Published As

Publication number Publication date
CN112998755A (zh) 2021-06-22

Similar Documents

Publication Publication Date Title
JP6467041B2 (ja) 超音波診断装置、及び画像処理方法
CN110811691B (zh) 自动识别测量项的方法、装置及一种超声成像设备
US11229419B2 (en) Method for processing 3D image data and 3D ultrasonic imaging method and system
CN111629670B (zh) 用于超声系统的回波窗口伪影分类和视觉指示符
EP2298176A1 (fr) Dispositif de traitement d'image médicale et procédé de traitement d'image médicale
JP2017525445A (ja) 超音波撮像装置
WO2020133510A1 (fr) Procédé et dispositif d'imagerie ultrasonore
CN111374712B (zh) 一种超声成像方法及超声成像设备
US20160000401A1 (en) Method and systems for adjusting an imaging protocol
JP2010512218A (ja) 医用イメージングシステム
JP2016195764A (ja) 医用画像処理装置およびプログラム
WO2021120065A1 (fr) Procédé de mesure automatique et système d'imagerie ultrasonore pour structure anatomique
CN110604592A (zh) 一种髋关节的成像方法以及髋关节成像系统
CN110613482A (zh) 一种髋关节的成像方法以及髋关节成像系统
CN107767386B (zh) 超声图像处理方法及装置
US20220249060A1 (en) Method for processing 3d image data and 3d ultrasonic imaging method and system
CN110604594A (zh) 一种髋关节的成像方法以及髋关节成像系统
KR101144867B1 (ko) 인체 내 오브젝트를 스캔하는 3차원 초음파 검사기 및 3차원 초음파 검사기 동작 방법
CN113017695A (zh) 超声成像方法、系统及计算机可读存储介质
US20230326017A1 (en) System and method for automatically measuring spinal parameters
EP4062838A1 (fr) Procédé pour une utilisation dans l'imagerie à ultrasons
Khazendar Computer-aided diagnosis of gynaecological abnormality using B-mode ultrasound images
CN116762093A (zh) 超声检测方法和超声成像系统
CN115644921A (zh) 一种自动弹性测量方法
CN116138807A (zh) 一种超声成像设备及腹主动脉的超声检测方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19956360

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19956360

Country of ref document: EP

Kind code of ref document: A1