WO2021120065A1 - Automatic measurement method and ultrasonic imaging system for anatomical structure - Google Patents


Info

Publication number
WO2021120065A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
measurement
recognition
measurement target
ultrasound
Prior art date
Application number
PCT/CN2019/126388
Other languages
French (fr)
Chinese (zh)
Inventor
Zou Yaoxian (邹耀贤)
Lin Muqing (林穆清)
Wang Zebing (王泽兵)
Original Assignee
Shenzhen Mindray Bio-Medical Electronics Co., Ltd. (深圳迈瑞生物医疗电子股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mindray Bio-Medical Electronics Co., Ltd. (深圳迈瑞生物医疗电子股份有限公司)
Priority to PCT/CN2019/126388 (WO2021120065A1)
Priority to CN202011506495.7A (CN112998755A)
Publication of WO2021120065A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0858 Detecting organic movements or changes involving measuring tissue layers, e.g. skin, interfaces
    • A61B 8/0866 Detecting organic movements or changes involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • A61B 8/44 Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
    • A61B 8/4444 Constructional features related to the probe
    • A61B 8/46 Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
    • A61B 8/461 Displaying means of special interest
    • A61B 8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 Devices using data or image processing involving processing of medical diagnostic data

Definitions

  • This application relates to the field of medical equipment, and more specifically to an automatic measurement method of anatomical structure and an ultrasound imaging system.
  • Ultrasound measurement is a common method to obtain the size of tissues or lesions.
  • Many ultrasound manufacturers have integrated automatic measurement algorithms. For obstetric measurement, for example, many manufacturers support automatic measurement of commonly used items such as head circumference, biparietal diameter, abdominal circumference, and femur length, which has greatly improved clinical examination efficiency.
  • Multi-window mode is a commonly used image display method in which multiple ultrasound images (mainly dual windows) are displayed at once.
  • A switch button allows the user to activate a certain window (hereinafter referred to as the active window); the image is scanned into this window in real time, while the remaining windows display previously scanned images.
  • For automatic measurement, however, the multi-window mode often causes great trouble: the system does not know which window's image the user wants to measure.
  • The usual method is to automatically measure the image of whichever window is currently active. This requires the user to perform the measurement immediately after scanning an image; otherwise, the automatically measured image is not the image the doctor wants.
  • Moreover, some doctors are accustomed to laying out all the slices first and then performing measurements in one pass. This forces the user to switch windows to measure, which increases the number of operation steps.
  • In view of this, the present application provides an automatic measurement method and ultrasound imaging system for anatomical structures, which can obtain an ultrasound image containing the measurement item to be measured and automatically measure that measurement item in the ultrasound image, improving the efficiency of automatic measurement of anatomical structures.
  • In a first aspect, an embodiment of the present application provides an automatic measurement method of an anatomical structure, including: an image acquisition step, in which an ultrasound image is acquired, the ultrasound image being related to at least one anatomical structure of the biological tissue, the at least one anatomical structure having at least one measurement item; a measurement item obtaining step, in which the measurement item to be measured of the anatomical structure is obtained; an image recognition step, in which a recognition image containing the measurement target corresponding to the measurement item is recognized from the ultrasound image; a positioning step, in which the measurement target is positioned in the recognition image; and a measurement step, in which the measurement target is measured.
  • In a second aspect, an embodiment of the present application also provides an automatic measurement method of an anatomical structure, including: an image acquisition step, in which an ultrasound image is acquired, the ultrasound image being related to at least one anatomical structure of the biological tissue, the at least one anatomical structure having at least one measurement item; a measurement item obtaining step, in which the measurement item to be measured of the anatomical structure is obtained; an image recognition step, in which a recognition image containing the measurement target corresponding to the measurement item is recognized from the ultrasound image; and a measurement step, in which the measurement target in the recognition image is measured.
  • In a third aspect, an embodiment of the present application also provides an automatic measurement method of an anatomical structure, including: an image recognition step, in which a recognition image containing a measurement target corresponding to a measurement item is recognized from the ultrasound image, wherein the measurement item is the measurement item to be measured and the recognition image is at least one image among the ultrasound images; a positioning step, in which the measurement target is located in the recognition image; and a measurement step, in which the measurement target is measured.
  • In a fourth aspect, an embodiment of the present application also provides an automatic measurement method of an anatomical structure, including: an image recognition step, in which a recognition image containing a measurement target corresponding to a measurement item is recognized from the ultrasound image, wherein the measurement item is the measurement item to be measured and the recognition image is at least one image among the ultrasound images; and a measurement step, in which the measurement target in the recognition image is measured.
  • An embodiment of the present application also provides an ultrasound imaging system, including: an ultrasound probe, used to transmit ultrasonic waves to biological tissue and receive ultrasonic echoes to obtain ultrasonic echo signals; a processor, configured to process the ultrasonic echo signals to obtain an ultrasound image of the biological tissue; and a memory, used to store executable program instructions. The processor is configured to execute the executable program instructions, so that the processor performs the automatic measurement method described in any one of the first aspect to the fourth aspect.
  • The embodiments of the present application thus provide an automatic measurement method of an anatomical structure and an ultrasound imaging system. According to the measurement item to be measured, the ultrasound image containing that measurement item is identified, and the measurement item to be measured in the ultrasound image is automatically measured, improving the efficiency of measuring anatomical structures.
  • Fig. 1 shows a schematic block diagram of an ultrasound imaging system according to an embodiment of the present application;
  • Fig. 2 shows a schematic flowchart of an automatic measurement method of an anatomical structure according to an embodiment of the present application;
  • Fig. 3 shows a schematic flowchart of an automatic measurement step in an automatic measurement method of an anatomical structure according to an embodiment of the present application;
  • Fig. 4 shows a schematic flowchart of an automatic measurement method of an anatomical structure according to an embodiment of the present application;
  • Fig. 5 shows a schematic flowchart of an automatic measurement method of an anatomical structure according to an embodiment of the present application;
  • Fig. 6 shows a schematic flowchart of an automatic measurement method of an anatomical structure according to an embodiment of the present application.
  • Fig. 1 shows a schematic block diagram of an ultrasound imaging system according to an embodiment of the present application.
  • the ultrasound imaging system 100 provided in this embodiment includes an ultrasound probe 101, a processor 102, a memory 103 and a display 104.
  • the ultrasonic probe 101 is used to transmit ultrasonic waves to biological tissues and receive ultrasonic echoes to obtain ultrasonic echo signals.
  • the processor 102 is configured to process the ultrasound echo signals to obtain an ultrasound image of the anatomical structure of the biological tissue, and automatically measure the anatomical structure of the biological tissue based on the ultrasound image.
  • the memory 103 stores executable computer program instructions.
  • the processor 102 executes the executable computer program instructions, the processor 102 performs automatic measurement of the anatomical structure to obtain a measurement result, for example, a measurement result of a measurement target.
  • the display 104 is used to display the ultrasound image, the measurement result measured by the processor, the measurement target, the measurement item to be measured, and the like.
  • The method for automatically measuring anatomical structures performed by the processor of this embodiment is introduced by way of example, based on the user designating the measurement item to be measured.
  • From the acquired ultrasound images of the anatomical structure, the recognition image containing the measurement target corresponding to the measurement item to be measured is automatically recognized, and the measurement target in the recognition image is then directly measured.
  • The user only needs to specify the measurement item to be measured; the user does not need to identify which ultrasound image contains the measurement target corresponding to that measurement item, nor manually measure the measurement target in the ultrasound image.
  • This simplifies the operation process and improves the efficiency of measuring anatomical structures.
  • Referring to Fig. 2, a schematic flowchart of an automatic measurement method of an anatomical structure according to an embodiment of the present application is shown.
  • The automatic measurement method of an anatomical structure is used to automatically measure the anatomical structure of biological tissue after processing the ultrasonic echoes, as shown in Fig. 2.
  • the method includes:
  • Step S11, an image acquisition step: acquiring an ultrasound image, the ultrasound image being related to at least one anatomical structure of the biological tissue, the at least one anatomical structure having at least one measurement item.
  • As described above, the anatomical structure has at least one measurement item.
  • For example, during obstetric ultrasound examination of a fetus, the measurement items that need to be measured include biparietal diameter, head circumference, abdominal circumference, and femur length; during abdominal ultrasound examination, the liver and kidney in the subject's abdomen are observed, corresponding to the measurement items of liver size and kidney size respectively.
  • That is, different anatomical structures of the biological tissue have different measurement items.
  • The processor 102 processes the ultrasonic echoes acquired by the ultrasonic probe 101 to generate an ultrasound image. In one embodiment, the processor 102 processes the ultrasonic echoes acquired by the ultrasonic probe 101 and stores the generated ultrasound image in the memory 103; in the image acquisition step, the ultrasound image of the anatomical structure of the biological tissue is obtained from the ultrasound images already stored in the memory 103, and the ultrasound image and the processing result are displayed on the display 104.
  • In the image acquisition step, at least one ultrasound image of the anatomical structure is acquired.
  • The display 104 displays one or more ultrasound images of the anatomical structure of the biological tissue.
  • For example, multiple ultrasound images of the head of the fetus are displayed on the display 104 at the same time.
  • Alternatively, ultrasound images of different anatomical structures of the biological tissue are displayed on the display 104; for example, ultrasound images of the head of the fetus and of the abdomen of the fetus are displayed simultaneously.
  • In one embodiment, the display 104 has multiple display windows, each of which displays one or more ultrasound images of the anatomy. For example, the display 104 has two display windows, one showing an ultrasound image of the fetal head and the other showing an ultrasound image of the fetal abdomen.
  • The display 104 may also display measurement items and measurement results related to the anatomical structure.
  • Step S12, a measurement item obtaining step: obtaining the measurement item to be measured.
  • In the measurement item acquisition step (S12), the measurement items that need to be measured for the anatomical structure of the biological tissue are acquired.
  • the measurement items of each of at least one anatomical structure of the biological tissue are displayed on the display 104.
  • the processor 102 performs the measurement item acquisition step by receiving the measurement item to be measured input by the user.
  • the processor 102 is connected to an input device, and the user selects the measurement item to be measured from the display 104 through an instruction input by the input device.
  • For example, during scanning of a pregnant woman's abdomen, the display 104 displays the measurement items of the fetal head, fetal abdomen, and placenta, including: biparietal diameter, occipitofrontal diameter, head circumference, abdominal circumference, femur length, humerus length, placenta thickness, abdominal transverse diameter, abdominal thickness diameter, neck fold, and other measurement items.
  • Suppose the user needs to measure the length of the femur of the fetus and inputs the instruction to measure the femur length through the input device communicatively connected with the processor 102.
  • The processor 102 receives the instruction that the user needs to measure the femur length, thereby obtaining the measurement item to be measured, namely the femur length.
  • In one embodiment, the anatomical structure has one measurement item, which is acquired as the measurement item to be measured in the measurement item acquisition step (S12).
  • In another embodiment, the anatomical structure has two or more measurement items. In the measurement item acquisition step (S12), at least one of the two or more measurement items can be obtained separately as the measurement item to be measured, or two or more of them may be used as the measurement items to be measured at the same time. For example, during obstetric ultrasound examination, in the ultrasound image of the fetus acquired in the image acquisition step, the fetus has measurement items such as biparietal diameter, head circumference, abdominal circumference, and femur length.
  • The user can input an instruction to measure the biparietal diameter through the input device in communication with the processor 102, so that the biparietal diameter is obtained as the measurement item to be measured; or the user can input a single instruction to measure the biparietal diameter, head circumference, and femur length simultaneously, so that these three measurement items are obtained at the same time.
  • Step S13, an image recognition step: recognizing, from the ultrasound images, a recognition image containing the measurement target corresponding to the measurement item to be measured, the recognition image being at least one of the ultrasound images.
  • In the image recognition step, the ultrasound images of the anatomical structure acquired in the image acquisition step are examined to determine whether each contains the measurement target corresponding to the measurement item specified by the user. An ultrasound image that contains the measurement target corresponding to the measurement item specified by the user is a recognition image; an ultrasound image that does not contain that measurement target is discarded and does not enter the following automatic measurement step.
  • The measurement item specified by the user is used as the measurement item to be measured, wherein the ultrasound image may be a cross-sectional image or a three-dimensional image.
  • In practice, the doctor may first lay out the slices of all tissues and then perform unified measurement, or may display the slices of the tissues in different windows during the examination and take automatic measurements.
  • Conventionally, the user is required to manually identify the ultrasound image containing the measurement target corresponding to the measurement item to be measured and then perform the measurement, for example by manually switching the window or selecting the active window.
  • This process increases user operations.
  • By contrast, the image recognition step (S13) automatically recognizes the recognition image containing the measurement target corresponding to the measurement item to be measured specified by the user, without requiring the user to manually identify that ultrasound image. The automatically recognized image is then automatically measured, reducing user operations and improving measurement efficiency.
  • For example, in the image acquisition step, ultrasound images of the fetus are acquired.
  • The fetus has three measurement items: biparietal diameter, head circumference, and femur length.
  • The biparietal diameter corresponds to the measurement target of the parietal bones on both sides of the fetal head; the head circumference corresponds to the measurement target from the occipital bone of the fetal head to the root of the forehead and nose; and the femur length corresponds to the measurement target of the fetal femur.
  • Suppose it is acquired that the user needs to measure the fetal femur length, and the ultrasound images of the fetus include an ultrasound image containing the head and an ultrasound image containing the femur. In the image recognition step, the ultrasound image containing the head and the ultrasound image containing the femur are examined so as to identify the ultrasound image containing the femur, and the subsequent automatic measurement step is performed on the recognized ultrasound image containing the femur.
  • In one embodiment, the recognition image is a part of an ultrasound image; for example, it may be a partial area of a certain ultrasound image.
  • For example, in the image acquisition step, an ultrasound image of the fetal head is acquired.
  • The fetal head has two measurement items: biparietal diameter and head circumference.
  • The biparietal diameter corresponds to the measurement target of the parietal bones on both sides of the fetal head, and the head circumference corresponds to the measurement target from the occipital bone of the fetal head to the root of the forehead and nose.
  • If the measurement item of biparietal diameter is obtained in the measurement item acquisition step, the recognition image recognized in the image recognition step includes only the area of the parietal bones on both sides of the fetal head.
  • In one embodiment, one or more measurement items to be measured are obtained in the measurement item acquisition step (S12), and multiple recognition images are recognized in the image recognition step (S13), at least two of which contain the measurement target corresponding to the same measurement item to be measured.
  • In the subsequent automatic measurement step, the measurement target corresponding to the measurement item to be measured in each recognition image is measured, the measurement results from the recognition images are averaged, and the average is taken as the measurement result for the measurement item to be measured.
  • For example, in the image acquisition step, multiple ultrasound images of the fetus are acquired, and the user needs to measure the fetal femur length.
  • In the image recognition step, two ultrasound images containing the femur are identified from the multiple ultrasound images; both are recognition images.
  • In the automatic measurement step, the femur is measured in both recognition images, the two measurement results are averaged, and the average value is the measurement result for the measurement item to be measured.
  • The final measurement result may also be determined by weighting the individual results or by calculating their variance, which is not specifically limited here.
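  • The averaging just described can be sketched as follows. This is a minimal illustration rather than the disclosed implementation; `combine_measurements` and the optional confidence weights are assumed names introduced for exposition:

```python
import numpy as np

def combine_measurements(values, weights=None):
    """Combine the results measured in several recognition images.

    values  : per-image measurement results (e.g. femur length in mm)
    weights : optional per-image weights; None gives the plain average
    Returns the combined estimate and the variance across images.
    """
    values = np.asarray(values, dtype=float)
    if weights is None:
        estimate = float(values.mean())
    else:
        estimate = float(np.average(values, weights=np.asarray(weights, dtype=float)))
    return estimate, float(values.var())

# Two recognition images of the femur give slightly different lengths (mm);
# the plain average is reported as the final measurement result.
estimate, variance = combine_measurements([32.1, 32.5])
```

  The variance gives one possible way to flag inconsistent recognition images, in line with the note that weighting or variance may also be used.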
  • In one embodiment, at least two ultrasound images related to the anatomical structure of the biological tissue are acquired in the image acquisition step (S11); one or more measurement items of the anatomical structure are acquired in the measurement item acquisition step (S12); and in the image recognition step (S13), the ultrasound images are classified according to the measurement items of each of the at least one anatomical structure of the biological tissue, so as to obtain the ultrasound images containing the measurement targets corresponding to the measurement items, and the ultrasound image containing the measurement target corresponding to the measurement item to be measured is then selected from the classified ultrasound images.
  • The foregoing step of classifying the ultrasound images includes: comparing the image features of each ultrasound image with the image features of the database images in a preset database, wherein each database image contains the measurement target corresponding to a measurement item of at least one anatomical structure of the biological tissue; when the image features of an ultrasound image match the image features of a database image, that ultrasound image contains the measurement target contained in the database image.
  • For example, in obstetric ultrasound examination, multiple ultrasound images are acquired in the image acquisition step (S11), including ultrasound images of the fetal head and ultrasound images of the fetal abdomen.
  • The measurement items of the fetal head include: biparietal diameter, head circumference, etc.
  • The measurement items of the fetal abdomen include: abdominal circumference, abdominal transverse diameter, and abdominal thickness diameter.
  • Suppose the measurement item to be measured obtained in the measurement item acquisition step (S12) is the head circumference.
  • In the image recognition step (S13), the multiple ultrasound images acquired in the image acquisition step (S11) are classified according to the measurement items of the fetal head and the measurement items of the fetal abdomen: the ultrasound image containing the measurement target of the parietal regions on both sides of the fetal head corresponding to the biparietal diameter, the ultrasound image containing the measurement target from the occipital bone of the fetal head to the root of the forehead and nose corresponding to the head circumference, and the ultrasound image containing the measurement target of the abdomen corresponding to the abdominal circumference, abdominal transverse diameter, and abdominal thickness diameter are obtained by comparing the image features of the ultrasound images with the image features of the database images in the preset database.
  • Each database image contains at least one of the measurement targets (such as the parietal bones on both sides of the head, the occipital bone to the root of the forehead and nose, or the abdomen), and a matching ultrasound image corresponds to the measurement target contained in the database image.
  • Since the measurement item to be measured acquired in the measurement item acquisition step (S12) is the head circumference, the ultrasound image containing the region from the occipital bone of the head to the root of the forehead and nose is obtained from the classified ultrasound images as the recognition image.
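  • The classify-then-select flow can be sketched as follows; `classify` here stands in for the feature-comparison classifier described in this application and is a hypothetical callable, not part of the disclosure:

```python
def classify_then_select(ultrasound_images, classify, item_to_measure):
    """Group ultrasound images by the measurement items whose targets they
    contain, then pick the recognition image(s) for the item the user chose.

    classify(image) returns the set of measurement items whose measurement
    targets the image contains, e.g. {"head circumference"}.
    """
    images_by_item = {}
    for image in ultrasound_images:
        for item in classify(image):
            images_by_item.setdefault(item, []).append(image)
    # The recognition images for the measurement item to be measured;
    # an empty list means no acquired image contains its target.
    return images_by_item.get(item_to_measure, [])
```

  One image can land in several groups (e.g. a head section serves both biparietal diameter and head circumference), which matches the classification described above.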
  • In one embodiment, the multiple ultrasound images acquired in the image acquisition step (S11) are classified in the image recognition step (S13), and the ultrasound images containing the measurement targets corresponding to the different measurement items are displayed on the display 104.
  • The recognition image recognized in the image recognition step (S13) is displayed on the display 104.
  • For example, the processor 102 is connected to the input device, the user selects the measurement item to be measured on the display 104 through an instruction input by the input device, and the processor 102 recognizes the recognition images from the ultrasound images obtained in the image acquisition step (S11) according to the measurement item to be measured selected by the user; the recognized recognition images are displayed on the display 104.
  • In one embodiment, multiple ultrasound images are displayed on the display 104, and the recognition images are displayed in a manner that distinguishes them from the other ultrasound images, for example in a highlighted manner.
  • For example, two ultrasound images of the fetus acquired in the image acquisition step (S11) are displayed on the display 104, one containing the region from the occipital bone of the fetal head to the root of the forehead and nose, and the other containing the fetal abdomen.
  • According to the user's instruction to measure the head circumference of the fetus, the processor 102 performs the image recognition step (S13), and the recognition image containing the measurement target from the occipital bone of the head to the root of the forehead and nose is displayed on the display 104 in a way that distinguishes it from the ultrasound image containing the abdomen.
  • In one embodiment, the image recognition step (S13) includes: comparing the image features of an ultrasound image with the image features of the database images containing the measurement item to be measured in the preset database, and judging whether the image features of the ultrasound image match the image features of a database image; when they match, it is determined that this ultrasound image is a recognition image containing the measurement target corresponding to the measurement item to be measured.
  • Each database image included in the preset database is an image calibrated for a measurement item of the anatomical structure, and it contains the measurement target corresponding to that measurement item.
  • In one embodiment, at least two ultrasound images related to the anatomical structure of the biological tissue are acquired in the image acquisition step (S11), and one measurement item of the anatomical structure is acquired in the measurement item acquisition step (S12). In the image recognition step (S13), each ultrasound image acquired in the image acquisition step is compared with the database image containing the measurement target corresponding to the measurement item in the preset database, and it is determined whether the image features of the currently compared ultrasound image match the image features of the database image. If they match, the currently compared ultrasound image is determined to be a recognition image; if they do not match, the ultrasound image is discarded. The recognition image containing the measurement target corresponding to the measurement item to be measured is thereby determined from the multiple ultrasound images.
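  • The match-or-discard loop above can be sketched as follows, assuming each image has already been reduced to a fixed-length feature vector; cosine similarity and the 0.9 threshold are illustrative choices, since the application does not fix a particular feature type or matching metric:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_recognition_images(image_features, database_feature, threshold=0.9):
    """Keep images whose features match the database image's features.

    image_features   : list of (image_id, feature_vector) pairs
    database_feature : feature vector of the calibrated database image
                       for the measurement item to be measured
    """
    matched = []
    for image_id, features in image_features:
        if cosine_similarity(features, database_feature) >= threshold:
            matched.append(image_id)  # determined to be a recognition image
        # otherwise the ultrasound image is discarded
    return matched
```

  With a database feature for, say, the femur section, images whose similarity falls below the threshold are discarded, mirroring the match/discard decision described above.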
  • For example, in the image acquisition step (S11), two ultrasound images of human organs are acquired.
  • Human organs include the liver, kidney, etc., where the liver has a measurement item of liver size and the kidney has a measurement item of kidney size.
  • Suppose the measurement item to be measured acquired in the measurement item acquisition step (S12) is the liver size. Each of the two ultrasound images of human organs acquired in the image acquisition step (S11) is compared with the database image containing the liver (itself an ultrasound image of the liver) in the preset database, and it is determined whether the image features of the ultrasound image match those of the database image: if they match, the ultrasound image is determined to be the recognition image; if they do not match, it is discarded.
  • In one embodiment, two or more ultrasound images related to the anatomical structure of the biological tissue are acquired in the image acquisition step (S11); the biological tissue has one or more anatomical structures, and one of these anatomical structures has two or more measurement items. The measurement items to be measured obtained in the measurement item acquisition step (S12) are two or more of the measurement items of that anatomical structure. The image recognition step (S13) then also includes: classifying the recognition images to obtain the recognition image containing the measurement target corresponding to each measurement item to be measured.
  • That is, in the process of matching the image features of the ultrasound images with the image features of the database images, it is also necessary to classify the recognition images according to the measurement items to be measured, so as to obtain the recognition image containing the measurement target corresponding to each measurement item to be measured.
  • For example, the fetus has measurement items such as biparietal diameter, head circumference, abdominal circumference, and femur length, and the measurement items to be measured acquired in the measurement item acquisition step (S12) are the head circumference and the abdominal circumference.
  • Each of the two or more ultrasound images of the fetus acquired in the image acquisition step (S11) is separately compared with the database images of the head and of the abdomen (themselves ultrasound images of the fetus) in the preset database. During this comparison, based on the image features of the database image containing the head and the image features of the database image containing the abdomen, the recognition images are also classified, so as to determine the head recognition image corresponding to the measurement item of head circumference and the abdomen recognition image corresponding to the measurement item of abdominal circumference.
  • a machine learning algorithm is used to learn the image features of the database images in the preset database that can distinguish different measurement items.
  • the machine learning method is used to extract the image features of the ultrasound images acquired in the image acquisition step (S11), the learned image features of the database images are matched against the image features of the ultrasound images, and the ultrasound images that match the learned image features are taken as the recognition images.
  • the ultrasound images are classified according to the learned image features that can distinguish different measurement items, so that the ultrasound images are classified according to the measurement items of the anatomical structure and the recognition image corresponding to each measurement item to be measured is recognized.
  • the methods for extracting features by machine learning algorithms include, but are not limited to, the principal component analysis (PCA) method, the linear discriminant analysis (LDA) method, the Haar feature extraction method, the texture feature extraction method, etc. The image features of the ultrasound images extracted by the machine learning algorithm are matched against the image features in the preset database to classify the ultrasound images.
  • the classification discriminators used include, but are not limited to, K-nearest neighbor (KNN), support vector machine (SVM), random forest, neural network, and other discriminators.
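The feature-extraction-plus-discriminator pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: the "database" is synthetic random data standing in for flattened ultrasound image features of two measurement items, PCA is implemented directly with an SVD, and a 1-nearest-neighbor rule plays the role of the classification discriminator.

```python
import numpy as np

def pca_fit(X, n_components):
    """Learn a PCA projection from training images (rows = flattened images)."""
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal directions in vt.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def pca_transform(X, mean, basis):
    return (X - mean) @ basis.T

def knn_classify(query, features, labels, k=1):
    """Match a query feature vector against database features (KNN rule)."""
    d = np.linalg.norm(features - query, axis=1)
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique(labels[nearest], return_counts=True)
    return vals[np.argmax(counts)]

# Toy "database": two clusters standing in for head / abdomen image features.
rng = np.random.default_rng(0)
head = rng.normal(0.0, 0.1, size=(20, 64))
abdomen = rng.normal(1.0, 0.1, size=(20, 64))
X = np.vstack([head, abdomen])
y = np.array([0] * 20 + [1] * 20)

mean, basis = pca_fit(X, n_components=5)
feats = pca_transform(X, mean, basis)
query = pca_transform(rng.normal(1.0, 0.1, size=(1, 64)), mean, basis)
print(knn_classify(query[0], feats, y))   # expected: 1 (the abdomen-like class)
```

In practice the labels would be the measurement items (e.g. head circumference vs. abdominal circumference) and the feature vectors would come from the real preset database rather than random draws.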
  • a deep learning method is used to construct stacked convolutional layers and fully connected layers to learn the image features of the database images in the preset database, that is, to learn image features that can distinguish different measurement items; the ultrasound images whose features match the learned features are recognized as the recognition images.
  • the ultrasound images are classified according to the learned image features, the images in which the aforementioned measurement items can be distinguished are recognized, and the recognized images are the recognition images.
  • Deep learning methods include, but are not limited to, the VGG network, the ResNet residual network, the Inception module, the AlexNet deep network, etc.
  • Step S14 an automatic measurement step, measuring the measurement target in the recognition image.
  • the measurement target in the recognition image is measured to obtain the measurement result of the anatomical structure.
  • the measurement methods differ between items. For example, in obstetric ultrasound examination, head circumference is usually measured by fitting an ellipse around the fetal skull halo, abdominal circumference by fitting an ellipse around the fetal abdomen, and femur length by measuring the distance between the two ends of the femur with a line segment.
  • the target fitting method is used for automatic measurement.
  • the automatic measurement step (S14) includes:
  • Step S141 Extract the contour of the measurement target corresponding to the measurement item to be measured by using an edge detection algorithm.
  • Edge detection algorithms include, but are not limited to, the Sobel operator, the Canny operator, etc., which detect the contour of the measurement target corresponding to the measurement item to be measured based on the pixel values and gray-scale weighted values of the ultrasound image.
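As an illustrative sketch of the Sobel-operator edge detection named above, the following computes a gradient-magnitude map on a synthetic image (a bright square standing in for an ultrasound structure) and thresholds it into a binary contour. The synthetic image and the 0.5 threshold are assumptions for illustration only.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel operator (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# Synthetic "ultrasound" image: dark background with a bright square region.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = sobel_magnitude(img)
contour = edges > edges.max() * 0.5   # threshold into a binary contour map
```

The response is zero inside the homogeneous region and peaks along its boundary, which is the contour that the subsequent fitting step consumes.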
  • Step S142 Fit the contour of the measurement target corresponding to the measurement item to be measured to obtain a fitting equation corresponding to the measurement item to be measured.
  • Fitting algorithms for shapes such as straight lines, circles, and ellipses are used to fit the contour of the measurement target corresponding to the measurement item to be measured to obtain the fitting equation.
  • Fitting algorithms include, but are not limited to, least squares estimation, the Hough transform, the Radon transform, RANSAC, and other algorithms.
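The least-squares-estimation route can be sketched for the circle case: a circle (x-cx)² + (y-cy)² = r² can be rewritten as the linear system a·x + b·y + c = x² + y², solved in one least-squares step. The noisy contour points below are synthetic stand-ins for detected edge pixels; an ellipse fit follows the same linearization idea with more parameters.

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares circle fit: solve a*x + b*y + c = x^2 + y^2,
    giving centre (a/2, b/2) and radius sqrt(c + cx^2 + cy^2)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = a / 2, b / 2
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Noisy contour points on a circle of radius 5 centred at (10, 20).
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x = 10 + 5 * np.cos(t) + rng.normal(0, 0.02, t.size)
y = 20 + 5 * np.sin(t) + rng.normal(0, 0.02, t.size)
cx, cy, r = fit_circle(x, y)
circumference = 2 * np.pi * r   # the "measurement result" for a circular item
```

Once the fitted equation is available, the measurement result (e.g. a circumference) is computed from its parameters, as the following step describes.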
  • Step S143 Determine the measurement result through the fitting equation.
  • the measurement result is determined according to the fitting equation obtained by the fitting algorithm in the above steps. If the fitting equation obtained is a circle or ellipse equation, the measurement result is computed directly from it. If the fitting equation obtained is a straight line, the end points can be further located by combining the gray-scale changes at the ends of the measurement target to realize automatic measurement. Taking the measurement of femur length in obstetric ultrasound examination as an example, the femur appears as a bright linear structure; after detecting the straight line on which the femur lies, the two points with the largest gray-scale gradient along that line can be taken as the two end points of the femur.
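The end-point localization described for the femur example can be sketched on a 1-D gray-value profile sampled along the fitted line. The profile below is a synthetic stand-in (a bright run on a dark background); the peak-suppression window of ±2 samples is an illustrative assumption.

```python
import numpy as np

def endpoints_along_line(profile):
    """Given gray values sampled along the fitted line, take the two positions
    with the largest absolute gray-scale gradient as the bone end points."""
    grad = np.abs(np.diff(profile.astype(float)))
    i1 = int(np.argmax(grad))
    grad2 = grad.copy()
    grad2[max(0, i1 - 2):i1 + 3] = 0   # suppress the first peak's neighbourhood
    i2 = int(np.argmax(grad2))
    return tuple(sorted((i1, i2)))

# Bright linear structure between positions 30 and 70 on a dark background.
profile = np.zeros(100)
profile[30:70] = 200.0
lo, hi = endpoints_along_line(profile)
femur_length_px = hi - lo   # length in pixels; scale by pixel spacing for mm
```

The two gradient peaks bracket the bright run, and their separation, converted by the pixel spacing, gives the femur-length measurement.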
  • the measurement result measured in the automatic measurement step (S14) is displayed on the display 104.
  • the processor 102 performs the automatic measurement step (S14) on the recognition image recognized in the image recognition step (S13), and displays the measurement result on the recognition image displayed on the display 104. For example, when performing an obstetric ultrasound examination, the processor 102 performs the image recognition step (S13) according to the head circumference measurement item selected by the user, and recognizes the recognition image containing the measurement target corresponding to head circumference, i.e., the region from the occipital bone to the nasal root of the forehead. This recognition image is displayed on the display 104 in a way that highlights the measurement target and distinguishes it from the parietal regions on both sides of the fetal head that correspond to the biparietal diameter measurement item, and the specific numerical value of the head circumference obtained in the subsequent automatic measurement step (S14) is displayed in the upper right corner of the recognition image recognized in the image recognition step (S13).
  • The above provides an exemplary introduction to the method, performed by the processor, for automatically measuring anatomical structures based on the user designating a measurement item to be measured.
  • a method for automatically measuring an anatomical structure based on the user designating a measurement item to be measured is provided.
  • the recognition image containing the measurement target corresponding to the measurement item to be measured is automatically recognized from the acquired ultrasound images of the anatomical structure, and the measurement target in the recognition image is then measured directly.
  • the user only needs to specify the measurement item to be measured; the user neither needs to identify the ultrasound image containing the corresponding measurement target nor needs to manually measure the measurement target in the ultrasound image, which simplifies the operation process and improves the efficiency of measuring anatomical structures.
  • a positioning step is added after the image recognition step to eliminate the influence of the surrounding structure of the measurement target on the measurement result in the measurement step.
  • FIG. 4 shows a schematic flowchart of an automatic measurement method of an anatomical structure according to an embodiment of the present application, in which the image acquisition step (S21), the measurement item acquisition step (S22), and the image recognition step (S23) are consistent with the image acquisition step (S11), the measurement item acquisition step (S12), and the image recognition step (S13) shown in FIG. 2, except that a positioning step (S24) is added after the image recognition step (S23), before the measurement target is measured in the automatic measurement step (S25). The positioning step (S24) and the automatic measurement step shown in FIG. 4 are described in detail below.
  • Step S24 a positioning step, positioning the measurement target in the recognition image.
  • the image recognition step (S13) above only obtains the recognition image containing the measurement target corresponding to the measurement item to be measured; the actual position of the measurement target is not yet known. Directly measuring the measurement target in the recognition image requires detecting the entire image, and the resulting edge detection is easily affected by the structures surrounding the measurement target. For this reason, in the positioning step (S24), the measurement target is first located, and target fitting is then performed on it in the automatic measurement step, which reduces the influence of the surrounding structures and makes the measurement result more accurate.
  • the image features of the recognition image are compared and analyzed against the image features of the database images in the preset database that contain the measurement target corresponding to the measurement item to be measured, so as to locate the measurement target in the recognition image, wherein the database image contains a calibration result corresponding to the measurement target, and the measurement target is the area consistent with the calibration result.
  • the calibration result includes the ROI box of the measurement target corresponding to the measurement item to be measured.
  • the positioning step includes: extracting the image features within a sliding window using a sliding-window-based method, comparing the image features in the sliding window with the image features of the calibration result, and judging whether they match; when the image features in the sliding window match the image features of the calibration result, the current sliding window is determined to be the measurement target.
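The sliding-window matching just described can be sketched as an exhaustive scan that scores each window against a calibrated ROI template. The mean-squared pixel difference used as the "feature distance" here, and the synthetic image and template, are illustrative assumptions; the patent's actual features come from the learned calibration results.

```python
import numpy as np

def locate_roi(image, template, step=2):
    """Slide a window over the recognition image and return the top-left
    position of the window whose features best match the ROI template."""
    th, tw = template.shape
    best, best_pos = np.inf, None
    for i in range(0, image.shape[0] - th + 1, step):
        for j in range(0, image.shape[1] - tw + 1, step):
            window = image[i:i + th, j:j + tw]
            score = np.mean((window - template) ** 2)   # feature distance
            if score < best:
                best, best_pos = score, (i, j)
    return best_pos

# Recognition image with a bright 8x8 "measurement target" at (12, 20).
img = np.zeros((40, 40))
img[12:20, 20:28] = 1.0
template = np.ones((8, 8))        # calibrated ROI appearance (assumed)
print(locate_roi(img, template))  # → (12, 20)
```

The returned window position plays the role of the located measurement target that the automatic measurement step then fits.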
  • a machine learning algorithm is used to learn the image features within the ROI box of the calibration result of the database images in the preset database, where the learned image features within the ROI box are the image features of the calibration result that distinguish the ROI area of the measurement target from the non-ROI area.
  • a machine learning algorithm is used to extract the image features in the sliding windows obtained when a sliding-window traversal is performed on the recognition image recognized in the image recognition step (S23).
  • the methods for extracting features by machine learning algorithms include, but are not limited to, the principal component analysis (PCA) method, the linear discriminant analysis (LDA) method, the Haar feature extraction method, the texture feature extraction method, etc.
  • the calibration result includes the ROI box of the measurement target corresponding to the measurement item to be measured, and the positioning step includes: according to the calibration result in the database images containing the measurement target, performing bounding-box regression on the recognition image to obtain a box area, the box area being the measurement target.
  • a deep learning method is used to construct stacked convolutional layers and fully connected layers to perform image feature learning and parameter regression on the ROI boxes of the calibration results of the database images in the preset database that contain the measurement target corresponding to the measurement item; the learned image features within the ROI box are the image features of the calibration result that distinguish the ROI area of the measurement target from the non-ROI area.
  • the neural network algorithm directly regresses the bounding-box area of interest in the recognition image, and this box area is the measurement target to be measured.
  • neural network algorithms include, but are not limited to, R-CNN, Fast R-CNN, Faster R-CNN, SSD, YOLO, and other detection algorithms.
  • the calibration result includes a mask that accurately segments the measurement target, and the positioning step includes: according to the image features of the calibration result in the database images containing the measurement target, using a semantic segmentation algorithm to identify, in the recognition image, the segmentation mask of the measurement target that is consistent with the calibration result.
  • a deep learning method is used to perform end-to-end semantic segmentation on the recognition image. Specifically, stacked convolutional layers are constructed, or deconvolutional layers are used for upsampling, to learn the masks that accurately segment the measurement targets corresponding to the measurement items in the preset database, and the segmentation mask of the measurement target corresponding to the measurement item to be measured contained in the recognition image is obtained directly from the result.
  • measuring a measurement target (S25) includes performing target fitting on the measurement target to obtain a fitting equation of the measurement target; and determining the measurement result of the measurement target through the fitting equation.
  • the calibration result in the positioning step (S24) includes the ROI frame of the measurement target corresponding to the measurement item to be measured.
  • the box area of the measurement target is located in the recognition image, target fitting is performed on the measurement target within it to obtain the fitted line, circle, or ellipse equation, and the measurement result is obtained by computation on that equation. Performing target fitting on the measurement target within its box area reduces the interference of other structures outside the box area with the target fitting and improves the accuracy of the measurement result.
  • the calibration result in the positioning step (S24) includes a mask that accurately segments the measurement target corresponding to the measurement item, and in the process of measuring the measurement target (S25), target fitting is performed on the edge of the segmentation mask of the measurement target in the recognition image that is consistent with the calibration result, an equation such as a straight line, circle, or ellipse is fitted, and the measurement result is obtained by computation on that equation.
  • Target fitting performed on the edge of the segmentation mask of the measurement target that is consistent with the calibration result in the recognition image reduces the fitting error of the target fitting and improves the accuracy of the measurement result.
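Fitting the edge of a segmentation mask can be sketched in two steps: extract the mask's boundary pixels (foreground pixels with a background 4-neighbour), then fit a shape to them. The rasterized disc below is a synthetic stand-in for a segmented region, and the circle model is an illustrative simplification of the line/circle/ellipse family named above.

```python
import numpy as np

def mask_boundary(mask):
    """Boundary pixels of a binary mask: foreground pixels that have at
    least one background 4-neighbour."""
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(mask & ~interior.astype(bool))   # (row, col) pairs

def fit_circle(pts):
    """Least-squares circle fit to (row, col) boundary points."""
    y, x = pts[:, 0].astype(float), pts[:, 1].astype(float)
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    cx, cy = a / 2, b / 2
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# Binary mask of a disc (stand-in for a segmented head/abdomen region).
yy, xx = np.mgrid[0:64, 0:64]
mask = ((xx - 32)**2 + (yy - 32)**2 <= 15**2).astype(np.uint8)
cx, cy, r = fit_circle(mask_boundary(mask))
```

Because only the mask's own boundary points enter the fit, pixels outside the segmented region cannot perturb the fitted equation, which is the error-reduction effect described above.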
  • This embodiment provides a method for automatically measuring an anatomical structure based on the user designating a measurement item to be measured.
  • the recognition image containing the measurement target corresponding to the measurement item to be measured is automatically recognized from the acquired ultrasound images of the anatomical structure, and the measurement target in the recognition image is then measured directly.
  • the user only needs to specify the measurement item to be measured; the user neither needs to identify the ultrasound image containing the corresponding measurement target nor needs to manually measure the measurement target in the ultrasound image, which simplifies the operation process and improves the efficiency of measuring anatomical structures.
  • The foregoing provides an exemplary introduction to methods for automatic measurement of an anatomical structure based on the user designating a measurement item to be measured.
  • This embodiment provides a method for measuring all the measurable items of an anatomical structure without the user specifying a measurement item to be measured.
  • after the ultrasound images of the anatomical structure are obtained, the ultrasound images are directly and automatically recognized to identify the recognition image containing the measurement target, and the measurement target in the recognition image is automatically measured, where the measurement target corresponds to a measurement item of the anatomical structure.
  • the user does not need to perform any operation on the measurement items, which further simplifies the operation process and improves the efficiency of measuring the anatomical structure.
  • FIG. 5 there is shown a schematic flowchart of an automatic measurement method of an anatomical structure according to an embodiment of the present application.
  • the automatic measurement method of anatomical structure is used to automatically measure the anatomical structure of the biological tissue to be tested after processing the ultrasonic echo, as shown in FIG. 5.
  • the method includes:
  • Step S31 an image acquisition step, acquiring at least two ultrasound images, at least one of the ultrasound images being related to at least one anatomical structure of the biological tissue.
  • the automatic measurement method of an anatomical structure provided in this embodiment is used to identify, from at least two ultrasound images related to the anatomical structure of a biological tissue, an ultrasound image containing a measurement target, and to measure the measurement target in that ultrasound image, wherein the measurement target corresponds to a measurement item possessed by the anatomical structure. All measurement items of the anatomical structure are measured as the measurement items to be measured.
  • the measurement items to be measured are often specified by the user, and the user manually identifies the ultrasound image containing the measurement target corresponding to the measurement item to be measured before performing the measurement; this process requires the additional operation of the user specifying the measurement items. Because the measurement items that the user wants to measure are often fixed, all measurement items of the anatomical structure can instead be measured. In this embodiment, all measurement items of the anatomical structure are measured without the user specifying them, which reduces the user's operation steps and simplifies the measurement operation of the anatomical structure.
  • Step S32, an image recognition step: recognizing, from the ultrasound images, a recognition image containing a measurement target corresponding to a measurement item, wherein the recognition image is at least one of the ultrasound images. Since this embodiment measures all the measurement items of the anatomical structure, the measurement targets in the recognition images identified from the ultrasound images as containing the measurement targets corresponding to the measurement items of the anatomical structure all need to be measured; that is, all measurement items of the anatomical structure are measurement items to be measured.
  • the at least one anatomical structure of the biological tissue has at least one characteristic measurement item, and the image recognition step (S32) includes: comparing the image features of the ultrasound image with the image features of the database images in the preset database that contain the measurement target corresponding to any one of the at least one characteristic measurement item, to determine whether the image features of the ultrasound image match the image features of the database image; when they match, the ultrasound image is determined to be the recognition image, the measurement item corresponding to the measurement target contained in the recognition image is the measurement item to be measured, and the measurement target in the recognition image corresponds to the measurement target contained in the matched database image.
  • each of the two ultrasound images acquired in the image acquisition step (S31) is compared with the database images in the preset database for liver size (which are also ultrasound images of the liver) to determine whether the image features of the ultrasound image match the image features of the database image. If they match, the ultrasound image is determined to be a recognition image; if not, it is discarded. The liver contained in the matching recognition image is the measurement target, and the characteristic measurement item corresponding to liver size is taken as the measurement item to be measured.
  • any one or two of the at least one anatomical structure of the biological tissue has at least two characteristic measurement items; the recognition image contains at least two measurement targets respectively corresponding to the at least two measurement items to be measured; and the image recognition step (S32) includes: classifying the at least two measurement targets in the recognition image, so that each of the at least two measurement targets corresponds to one of the at least two characteristic measurement items.
  • when the image recognition step (S32) recognizes a recognition image containing multiple measurement targets, the measurement targets contained in the recognition image need to be classified; that is, the measurement targets contained in the recognition image are classified according to the at least two measurement items of the anatomical structure, and measurement targets in one-to-one correspondence with the at least two measurement items of the anatomical structure are obtained.
  • two or more ultrasound images about the fetus are acquired in the image acquisition step (S31).
  • the fetus has measurement items such as biparietal diameter, head circumference, abdominal circumference, and femur length.
  • each of the acquired two or more ultrasound images of the fetus is compared with the database images in the preset database (which are also ultrasound images of the fetus) containing the measurement target corresponding to at least one of biparietal diameter, head circumference, abdominal circumference, and femur length, to determine the recognition image.
  • the measurement targets corresponding to biparietal diameter and head circumference (the parietal regions on both sides of the fetal head for biparietal diameter, and the region from the occipital bone to the nasal root of the forehead for head circumference) often appear in the same ultrasound image. After the recognition image containing the measurement targets corresponding to the two measurement items of biparietal diameter and head circumference is determined, the measurement targets corresponding to the two measurement items to be measured must be further distinguished in the recognition image, to determine which measurement target corresponds to head circumference and which corresponds to biparietal diameter.
  • a machine learning algorithm is used to learn the image features of the database images in the preset database that can distinguish different measurement items.
  • the machine learning method is used to extract the image features of the ultrasound images acquired in the image acquisition step (S31), the learned image features of the database images are matched against the image features of the ultrasound images, and the ultrasound images that match the learned image features are taken as the recognition images.
  • the measurement targets are classified according to the learned image features that can distinguish different measurement items, so as to realize the recognition of the measurement targets in the recognition image according to the measurement items possessed by the anatomical structure.
  • the methods for extracting features by machine learning algorithms include, but are not limited to, the principal component analysis (PCA) method, the linear discriminant analysis (LDA) method, the Haar feature extraction method, the texture feature extraction method, etc.
  • the classification discriminators used include, but are not limited to, K-nearest neighbor (KNN), support vector machine (SVM), random forest, neural network, and other discriminators.
  • a deep learning method is used to construct stacked convolutional layers and fully connected layers to learn the image features of the database images in the preset database, that is, to learn image features that can distinguish different measurement targets; the ultrasound images whose features match the learned features are recognized as the recognition images.
  • the measurement targets in the recognition image are classified according to the learned image features.
  • deep learning methods include, but are not limited to, the VGG network, the ResNet residual network, the Inception module, the AlexNet deep network, etc.
  • Step S33, an automatic measurement step: measuring the measurement target in the recognition image.
  • the measurement target in the recognition image is measured to obtain the measurement result of the anatomical structure.
  • edge detection is performed directly on the measurement target in the recognition image, and target fitting is performed to obtain the target fitting equation for automatic measurement.
  • the automatic measurement step (S33) includes:
  • the contour corresponding to the measurement target is extracted by an edge detection algorithm.
  • Edge detection algorithms include, but are not limited to, the Sobel operator, the Canny operator, etc., which detect the contour of the measurement target based on the pixel values and gray-scale weighted values of the ultrasound image.
  • the contour corresponding to the measurement target is fitted to obtain the fitting equation corresponding to the measurement target. Fitting algorithms for shapes such as straight lines, circles, and ellipses are used to fit the contour of the measurement target to obtain the fitting equation.
  • the fitting algorithms include, but are not limited to, least squares estimation, the Hough transform, the Radon transform, RANSAC, and other algorithms.
  • the measurement result of the measurement target is determined by the fitting equation.
  • the measurement result is determined according to the fitting equation obtained by the fitting algorithm in the above steps. If the fitting equation obtained is a circle or ellipse equation, the measurement result is computed directly from it. If the fitting equation obtained is a straight line, the end points can be further located by combining the gray-scale changes at the ends of the measurement target to realize automatic measurement. Taking the measurement of femur length in obstetric ultrasound examination as an example, the femur appears as a bright linear structure; after detecting the straight line on which the femur lies, the two points with the largest gray-scale gradient along that line can be taken as the two end points of the femur.
  • The above provides an exemplary introduction to the method, performed by the processor of this embodiment, for automatically measuring anatomical structures without the user designating the measurement item to be measured.
  • This embodiment provides a method for measuring all the measurable items of an anatomical structure without the user specifying a measurement item to be measured.
  • after the ultrasound images of the anatomical structure are obtained, the ultrasound images are automatically recognized to identify the recognition image containing the measurement target, and the measurement target in the recognition image is automatically measured, where the measurement target corresponds to a measurement item of the anatomical structure.
  • the user does not need to perform any operation on the measurement items, which further simplifies the operation process and improves the efficiency of measuring the anatomical structure.
  • a positioning step is added after the image recognition step to eliminate the influence of the surrounding structure of the measurement target on the measurement result in the measurement step.
  • FIG. 6 there is shown a schematic flowchart of an automatic measurement method of an anatomical structure according to an embodiment of the present application.
  • the image acquisition step (S41) and the image recognition step (S42) are consistent with the image acquisition step (S31) and the image recognition step (S32) shown in FIG. 5.
  • the difference is that a positioning step (S43) is added after the image recognition step (S42), followed by the automatic measurement step (S44).
  • Step S43 a positioning step, positioning the measurement target in the recognition image.
  • in the image recognition step (S42), only the recognition image containing the measurement target is obtained, and the actual position of the measurement target is not yet known. Directly measuring the measurement target in the recognition image requires detecting the entire image, and the resulting edge detection is easily affected by the structures surrounding the measurement target. For this reason, in the positioning step (S43), the measurement target in the recognition image is located, and target fitting is then performed on it in the automatic measurement step, which reduces the influence of the surrounding structures and makes the measurement result more accurate.
  • the image features of the recognition image are compared and analyzed against the image features of the database images in the preset database corresponding to the measurement item to be measured, so as to locate the measurement target in the recognition image, wherein the database image contains a calibration result corresponding to the target to be measured, and the measurement target is the area consistent with the calibration result.
  • the calibration result includes the ROI box of the measurement target corresponding to the measurement item to be measured.
  • the positioning step includes: extracting the image features within a sliding window using a sliding-window-based method, comparing the image features in the sliding window with the image features of the calibration result, and judging whether they match; when the image features in the sliding window match the image features of the calibration result, the current sliding window is determined to be the measurement target.
  • A machine learning algorithm is used to learn the image features within the ROI box of the calibration result of the database images in the preset database, where the learned features are those that distinguish the ROI region of the measurement target from non-ROI regions.
  • A machine learning algorithm is likewise used to extract the image features within each sliding window obtained as the window traverses the recognition image recognized in the image recognition step (S42).
  • The methods for extracting features with machine learning algorithms include, but are not limited to, principal component analysis (Principal Components Analysis, PCA), linear discriminant analysis (Linear Discriminant Analysis, LDA), Haar feature extraction, texture feature extraction, etc.
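As an illustration of the PCA-style feature extraction named above, the leading principal direction of a set of feature samples can be found by power iteration on the sample covariance. This is a minimal sketch, not the application's implementation; the sample data and iteration count are arbitrary.

```python
def first_principal_component(samples, iters=200):
    """Power iteration for the leading PCA direction of `samples`
    (a list of equal-length numeric tuples). Assumes the covariance
    has a dominant eigenvalue; returns a unit vector."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    centered = [[s[j] - mean[j] for j in range(d)] for s in samples]
    v = [1.0] * d
    for _ in range(iters):
        # Apply the covariance matrix implicitly: w = (X^T X / n) v
        proj = [sum(x[j] * v[j] for j in range(d)) for x in centered]
        w = [sum(proj[i] * centered[i][j] for i in range(n)) / n for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v
```

Projecting window features onto a few such directions yields the compact descriptors that are then compared against the calibration result.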
  • The calibration result includes the ROI box of the measurement target corresponding to the measurement item to be measured, and the positioning step includes: according to the database images containing the measurement target corresponding to the measurement item to be measured, performing bounding-box regression on the recognition image to obtain a box region; this box region is the measurement target.
  • A deep learning method is used to construct stacked convolutional layers and fully connected layers to learn and parameterize the image features within the ROI box of the calibration result of the database images corresponding to the measurement item to be measured in the preset database, where the learned features are those that distinguish the ROI region of the measurement target from non-ROI regions.
  • The neural network then directly regresses the box region of interest in the recognition image; this box region is the measurement target to be measured.
  • The neural network algorithms include, but are not limited to, R-CNN, Fast R-CNN, Faster R-CNN, SSD, YOLO, and other target detection algorithms.
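Detectors of the families listed above typically produce many overlapping candidate boxes and keep only the best via greedy non-maximum suppression before a single box region is reported. The following is a minimal sketch of that standard post-processing step (not a specific implementation from this application); boxes are `(x1, y1, x2, y2)` tuples.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring boxes and
    drop candidates overlapping an already-kept box by more than iou_thresh.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep
```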
  • The calibration result may instead include a mask that accurately segments the measurement target corresponding to the measurement item, and the positioning step includes: for the measurement target corresponding to the measurement item to be measured, using a semantic segmentation algorithm to identify, in the recognition image, the segmentation mask of the measurement target consistent with the calibration result.
  • An end-to-end semantic segmentation network based on deep learning is used to segment the recognition image. Specifically, stacked convolutional layers are constructed, with deconvolution (upsampling) layers used to recover resolution; the network learns from the precisely segmented masks of the calibration regions corresponding to the measurement target in the preset database, and the segmentation mask of the measurement target is then obtained directly from the recognition image.
  • The semantic segmentation networks used include, but are not limited to, Fully Convolutional Networks (FCN), U-Net convolutional networks, etc.
  • When the recognition image obtained in the image recognition step (S42) includes at least two measurement targets corresponding to the measurement items to be measured, the positioning step (S43) further includes classifying the located targets so that each measurement target corresponds one-to-one to the at least two measurement items to be measured. Because the recognition image includes at least two such measurement targets, positioning must not only locate each target in the recognition image but also distinguish the measurement-target category to which each location belongs, so that after the subsequent automatic measurement step each measurement result can be attributed to its measurement item.
  • For example, the recognition image of the fetus recognized in the image recognition step (S42) may contain both the measurement target running from the occipital bone of the fetal head to the frontal nasal-root region, corresponding to the head circumference, and the measurement target spanning the parietal regions on both sides of the fetal head, corresponding to the biparietal diameter. During measurement after positioning, it is not known whether a located target corresponds to the head circumference or the biparietal diameter; the occiput-to-nasal-root target must therefore be distinguished from the bilateral parietal target, yielding measurement targets corresponding to the head circumference and the biparietal diameter respectively.
  • The image features of the measurement target are compared with the image features of the database image in which the measurement target corresponding to the measurement item to be measured has been calibrated in a preset database, to determine whether they match; when the image features of the measurement target match those of the calibration result, the measurement target is determined to correspond to that measurement item.
  • For example, the image features of each measurement target located in the recognition image of the fetus in the positioning step (S43) are compared with the image features of the head-circumference calibration result in the database images containing the head circumference in the preset database; when the image features of a measurement target match those of the head-circumference calibration result, the currently compared measurement target is judged to be the head circumference.
  • A machine learning algorithm is used to learn, from the database images in the preset database, image features that distinguish the different calibration regions; at the same time, the machine learning method extracts the image features of each measurement target located in the recognition image in the positioning step (S43). The learned database features are matched against the features of each measurement target, and a match identifies the measurement target corresponding to the current calibration region.
  • The measurement targets are classified according to the learned image features that distinguish the different calibration regions, so that the measurement targets in the recognition image are recognized according to the measurement items of the anatomical structure.
  • The methods for extracting features with machine learning algorithms include, but are not limited to, principal component analysis (Principal Components Analysis, PCA), linear discriminant analysis (Linear Discriminant Analysis, LDA), Haar feature extraction, texture feature extraction, etc.
  • The classifiers used include, but are not limited to, k-nearest neighbors (KNN), support vector machines (SVM), random forests, neural networks, and other discriminators.
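As an illustration of classifying located targets with such a discriminator, here is a minimal 1-nearest-neighbor sketch. The 2-D feature vectors and labels below are hypothetical placeholders for the learned features of the calibration regions.

```python
def nearest_neighbor_label(feature, database):
    """1-NN classification: return the label of the database entry whose
    feature vector is closest (squared Euclidean distance) to `feature`.
    `database` is a list of (feature_vector, label) pairs."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(database, key=lambda entry: sqdist(entry[0], feature))
    return best[1]

# Hypothetical calibrated features for two fetal-head measurement items.
calibration_db = [
    ((0.9, 0.2), "head circumference"),
    ((0.1, 0.8), "biparietal diameter"),
]
```

In this way each located target is assigned to a measurement item, so that the result of the subsequent automatic measurement step can be attributed correctly.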
  • The step of measuring a measurement target includes: performing target fitting on the measurement target to obtain a fitting equation of the measurement target, and determining the measurement result of the measurement target from the fitting equation.
  • When the calibration result in the positioning step (S43) includes the ROI box of the measurement target corresponding to the measurement item to be measured, the automatic measurement step (S44) proceeds as follows: target fitting is performed on the measurement target within its box region to obtain a fitted line, circle, or ellipse equation, and the measurement result is obtained by evaluating that equation.
  • Performing target fitting on the measurement target within its box region reduces interference from structures outside the box and improves the accuracy of the measurement result.
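The fit-then-evaluate pattern can be sketched as follows. For brevity this uses a simplified centroid/mean-radius circle fit rather than the full least-squares line/circle/ellipse fitting described in the text; it is an assumption-laden stand-in, not the application's method.

```python
import math

def fit_circle(points):
    """Simplified circle fit: centroid as center, mean distance to the
    centroid as radius (a stand-in for least-squares fitting).
    Returns (cx, cy, r)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    r = sum(math.hypot(p[0] - cx, p[1] - cy) for p in points) / n
    return cx, cy, r

def circumference(points):
    """Measurement result derived from the fitted equation: for a circle of
    radius r, the circumference is 2*pi*r."""
    _, _, r = fit_circle(points)
    return 2 * math.pi * r
```

For an item such as head circumference, the points would be the target pixels inside the box region (or the mask edge, as in the next embodiment), and the fitted perimeter would be reported as the measurement.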
  • When the calibration result in the positioning step (S43) includes a mask that accurately segments the measurement target corresponding to the measurement item, the measurement target is measured as follows: target fitting is performed on the edge of the segmentation mask of the measurement target consistent with the calibration result in the recognition image, fitting a line, circle, or ellipse equation, and the measurement result is obtained by evaluating that equation.
  • Fitting on the edge of the segmentation mask consistent with the calibration result reduces the fitting error of the target fitting and improves the accuracy of the measurement result.
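Before fitting, the edge of the segmentation mask must be extracted. A minimal sketch, assuming a binary mask (nested lists of 0/1) and 4-connectivity; the real system may use any boundary-tracing method.

```python
def mask_edge(mask):
    """Return coordinates of mask pixels that touch a background pixel (or
    the image border) under 4-connectivity -- the boundary on which the
    line/circle/ellipse fitting is then performed."""
    rows, cols = len(mask), len(mask[0])
    edge = []
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                    edge.append((r, c))
                    break
    return edge
```

The returned edge points are then passed to the target-fitting routine to obtain the fitting equation and, from it, the measurement result.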
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division of the units is only a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the various component embodiments of the present application may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • a microprocessor or a digital signal processor may be used in practice to implement some or all of the functions of some modules according to the embodiments of the present application.
  • This application can also be implemented as a program product (for example, a computer program or a computer program product) for executing part or all of the methods described herein.
  • a program for implementing the present application may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.

Abstract

An automatic measurement method and an ultrasound imaging system for an anatomical structure. The automatic measurement method comprises: an image acquisition step (11), involving: acquiring an ultrasound image, wherein the ultrasound image is associated with at least one anatomical structure of a living body tissue, and the at least one anatomical structure has at least one measurement item; a measurement item acquisition step (12), involving: acquiring a measurement item to be measured; an image recognition step (13), involving: recognizing, from the ultrasound image, a recognition image containing a measurement target corresponding to said measurement item; and an automatic measurement step (14), involving: measuring the measurement target in the recognition image. An ultrasound image containing the measurement target corresponding to the measurement item to be measured is recognized according to said measurement item, and the measurement target in the ultrasound image is measured automatically, such that the efficiency of measurement of the anatomical structure is improved.

Description

Automatic Measurement Method for an Anatomical Structure and Ultrasound Imaging System
Description
Technical Field
This application relates to the field of medical devices, and more particularly to an automatic measurement method for an anatomical structure and an ultrasound imaging system.
Background
Ultrasound measurement is a common means of obtaining the size of tissues or lesions. To improve the measurement workflow, many ultrasound manufacturers have integrated automatic measurement algorithms. For example, for obstetric measurement, many manufacturers support automatic measurement of commonly used items such as head circumference, biparietal diameter, abdominal circumference, and femur length, which has contributed greatly to the efficiency of clinical examination.
However, ultrasound examination often involves comparing images, and to facilitate such comparison, the multi-window mode is a commonly used display method. In multi-window mode, multiple ultrasound images (most commonly two) are displayed on one screen at the same time, allowing the doctor to compare different anatomical structures. Generally, multi-window mode provides a switch button that lets the user activate a particular window (hereinafter the active window); images are scanned in real time in that window, while the remaining windows display previously scanned images.
However, multi-window mode often causes difficulty for automatic measurement. When the user starts the automatic measurement of a measurement item, the system does not know which window contains the image the user wants to measure. A common approach is to automatically measure the image in the currently active window, which requires the user to perform the measurement immediately after scanning an image; otherwise the automatically measured image is not the one the doctor intended. However, some doctors are accustomed to acquiring all the views first and then performing the measurements together, which forces the user to switch windows in order to measure and actually adds operation steps.
Summary
This application is made to solve at least one of the above problems. It provides an automatic measurement method for an anatomical structure and an ultrasound imaging system, which can, according to the measurement item to be measured, obtain an ultrasound image containing that measurement item and automatically measure the measurement item in the ultrasound image, improving the efficiency of automatic measurement of anatomical structures.
In a first aspect, an embodiment of the present application provides an automatic measurement method for an anatomical structure, including:

an image acquisition step: acquiring an ultrasound image, where the ultrasound image is related to at least one anatomical structure of a biological tissue, and the at least one anatomical structure has at least one measurement item;

a measurement item acquisition step: acquiring a measurement item to be measured of the anatomical structure;

an image recognition step: recognizing, from the ultrasound image, a recognition image containing a measurement target corresponding to the measurement item to be measured, where the recognition image is at least one of the ultrasound images;

a positioning step: locating the measurement target in the recognition image;

an automatic measurement step: measuring the measurement target.

In a second aspect, an embodiment of the present application further provides an automatic measurement method for an anatomical structure, including:

an image acquisition step: acquiring an ultrasound image, where the ultrasound image is related to at least one anatomical structure of a biological tissue, and the at least one anatomical structure has at least one measurement item;

a measurement item acquisition step: acquiring a measurement item to be measured of the anatomical structure;

an image recognition step: recognizing, from the ultrasound image, a recognition image containing a measurement target corresponding to the measurement item to be measured, where the recognition image is at least one of the ultrasound images;

an automatic measurement step: measuring the measurement target in the recognition image.

In a third aspect, an embodiment of the present application further provides an automatic measurement method for an anatomical structure, including:

an image acquisition step: acquiring at least two ultrasound images, where at least one of the at least two ultrasound images is related to at least one anatomical structure of a biological tissue;

an image recognition step: recognizing, from the ultrasound images, a recognition image containing a measurement target corresponding to a measurement item, where the measurement item is the measurement item to be measured, and the recognition image is at least one of the ultrasound images;

a positioning step: locating the measurement target in the recognition image;

an automatic measurement step: measuring the measurement target.

In a fourth aspect, an embodiment of the present application further provides an automatic measurement method for an anatomical structure, including:

an image acquisition step: acquiring at least two ultrasound images, where at least one of the at least two ultrasound images is related to at least one anatomical structure of a biological tissue;

an image recognition step: recognizing, from the ultrasound images, a recognition image containing a measurement target corresponding to a measurement item, where the measurement item is the measurement item to be measured, and the recognition image is at least one of the ultrasound images;

an automatic measurement step: measuring the measurement target in the recognition image.

An embodiment of the present application further provides an ultrasound imaging system, including:

an ultrasound probe, configured to transmit ultrasonic waves into biological tissue and receive ultrasonic echoes to obtain ultrasonic echo signals;

a processor, configured to process the ultrasonic echo signals to obtain an ultrasound image of the biological tissue;

a display, configured to display the ultrasound image;

a memory, configured to store executable program instructions;

and a processor, configured to execute the executable program instructions so that the processor performs the automatic measurement method described in any one of the first to fourth aspects.

The embodiments of the present application provide an automatic measurement method for an anatomical structure and an ultrasound imaging system: according to the measurement item to be measured, the ultrasound image containing the corresponding measurement target is recognized, and the measurement target in the ultrasound image is measured automatically, which improves the efficiency of measuring anatomical structures.
Brief Description of the Drawings
Fig. 1 shows a schematic block diagram of an ultrasound imaging system according to an embodiment of the present application;
Fig. 2 shows a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application;
Fig. 3 shows a schematic flowchart of the automatic measurement step in an automatic measurement method for an anatomical structure according to an embodiment of the present application;
Fig. 4 shows a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application;
Fig. 5 shows a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application;
Fig. 6 shows a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, exemplary embodiments of the present application will be described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application rather than all of them, and it should be understood that the present application is not limited by the exemplary embodiments described herein. Based on the embodiments described in this application, all other embodiments obtained by those skilled in the art without creative work shall fall within the protection scope of this application.

In the following description, numerous specific details are given in order to provide a more thorough understanding of this application. However, it will be obvious to those skilled in the art that this application can be implemented without one or more of these details. In other instances, some technical features well known in the art are not described in order to avoid obscuring this application.

It should be understood that this application can be implemented in different forms and should not be construed as limited to the embodiments presented here. On the contrary, these embodiments are provided so that the disclosure will be thorough and complete, and will fully convey the scope of the present application to those skilled in the art.

The terms used here are only for describing specific embodiments and are not a limitation of the present application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.

For a thorough understanding of this application, detailed steps and structures will be presented in the following description to explain the proposed technical solutions; however, beyond these detailed descriptions, this application may also have other implementations.
Fig. 1 shows a schematic block diagram of an ultrasound imaging system according to an embodiment of the present application. As shown in Fig. 1, the ultrasound imaging system 100 provided in this embodiment includes an ultrasound probe 101, a processor 102, a memory 103, and a display 104. The ultrasound probe 101 is used to transmit ultrasonic waves into biological tissue and receive ultrasonic echoes to obtain ultrasonic echo signals. The processor 102 is configured to process the ultrasonic echo signals to obtain an ultrasound image of the anatomical structure of the biological tissue, and to automatically measure the anatomical structure based on that image.

The memory 103 stores executable computer program instructions. When the processor 102 executes these instructions, it performs the automatic measurement of the anatomical structure and obtains a measurement result, for example, the measurement result of a measurement target. The display 104 is used to display the ultrasound image, the measurement results obtained by the processor, the measurement target, the measurement item to be measured, and the like.
Referring to Fig. 2, an exemplary description is given of how the processor of this embodiment automatically measures an anatomical structure when the user has specified the measurement item to be measured. In this embodiment, according to the measurement item specified by the user, a recognition image containing the measurement target corresponding to that item is automatically recognized from the acquired ultrasound images of the anatomical structure, and the measurement target in the recognition image is then measured directly. Throughout the process, the user only needs to specify the measurement item to be measured; the user neither needs to identify, based on that item, the ultrasound image containing the corresponding measurement target, nor needs to measure the target in the ultrasound image manually. This simplifies the operation and improves the efficiency of measuring anatomical structures.
Referring to Fig. 2, a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application is shown.

The automatic measurement method provided in this embodiment is used to automatically measure the anatomical structure of biological tissue after the ultrasonic echoes have been processed, as shown in Fig. 2. The method includes:

Step S11, an image acquisition step: acquiring an ultrasound image, where the ultrasound image is related to at least one anatomical structure of a biological tissue, and the at least one anatomical structure has at least one measurement item. Ultrasound examination of biological tissue often requires observing its anatomical structures, and when an anatomical structure is to be measured, it has at least one measurement item. For example, in obstetric ultrasound, the observed anatomy of the fetus typically corresponds to measurement items such as biparietal diameter, head circumference, abdominal circumference, and femur length; in abdominal ultrasound, the observed anatomy of the liver and kidneys corresponds to measurement items such as liver size and kidney size. In other embodiments, the anatomical structures of the corresponding biological tissue have different measurement items.
In one embodiment, in the image acquisition step, the processor 102 processes the ultrasonic echoes acquired by the ultrasound probe 101 to generate an ultrasound image. In one embodiment, the processor 102 processes the echoes to generate ultrasound images that are stored in the memory 103; during the image acquisition step, ultrasound images of the anatomical structure of the biological body are retrieved from the memory 103, and the ultrasound images and processing results are displayed on the display 104. In one embodiment, at least one ultrasound image of the anatomical structure is acquired in the image acquisition step. In one embodiment, the display 104 shows one or more ultrasound images of an anatomical structure of the biological tissue, for example, two ultrasound images of the fetal head at the same time. In one embodiment, the display 104 shows ultrasound images of different anatomical structures, for example, ultrasound images of the fetal head and the fetal abdomen at the same time. In one embodiment, the display 104 has multiple display windows, each showing one or more ultrasound images of an anatomical structure; for example, the display 104 has two display windows, one showing an ultrasound image of the fetal head and the other an ultrasound image of the fetal abdomen. In one embodiment, while the ultrasound image is displayed on the display 104, the measurement items and measurement results of the anatomical structure may also be displayed.
Step S12: a measurement item acquisition step of acquiring the measurement item to be measured. In the measurement item acquisition step (S12), the measurement items that need to be measured for the anatomical structure of the biological tissue are acquired.
In one embodiment, the measurement items of each of the at least one anatomical structure of the biological tissue are displayed on the display 104. In one embodiment, the processor 102 performs the measurement item acquisition step by receiving the measurement item to be measured input by the user. In one embodiment, the processor 102 is connected to an input device, and the user selects the measurement item to be measured from the display 104 through an instruction entered via the input device. For example, during an obstetric ultrasound examination, the display 104 shows the measurement items of the fetal head, fetal abdomen, placenta, and other structures scanned during the examination of the pregnant woman's abdomen, including biparietal diameter, occipitofrontal diameter, head circumference, abdominal circumference, femur length, humerus length, placental thickness, transverse abdominal diameter, anteroposterior abdominal diameter, and nuchal fold. When the user needs to measure the femur length of the fetus, the user enters a femur-length measurement instruction through the input device communicatively connected to the processor 102; the processor 102 receives the instruction and thereby acquires femur length as the measurement item to be measured.
In one embodiment, the anatomical structure has one measurement item, which is acquired as the measurement item to be measured in the measurement item acquisition step (S12). In one embodiment, the anatomical structure has two or more measurement items; in the measurement item acquisition step (S12), at least one of the two or more measurement items may be acquired individually as the measurement item to be measured, or two or more of them may be acquired together as the measurement items to be measured. For example, during an obstetric ultrasound examination, the ultrasound image of the fetus acquired in the image acquisition step relates to measurement items such as biparietal diameter, head circumference, abdominal circumference, and femur length. In the measurement item acquisition step (S12), the user enters an instruction to measure the biparietal diameter through the input device communicatively connected to the processor 102, thereby acquiring biparietal diameter as the measurement item to be measured; alternatively, the user enters a single instruction to measure the biparietal diameter, head circumference, and femur length simultaneously, thereby acquiring all three measurement items at once.
Step S13: an image recognition step of recognizing, from the ultrasound images, a recognition image that contains the measurement target corresponding to the measurement item to be measured, the recognition image being at least one of the ultrasound images. In the image recognition step (S13), the ultrasound images of the anatomical structure acquired in the image acquisition step are examined to determine whether they contain the measurement target corresponding to the user-specified measurement item. An ultrasound image that contains the measurement target is a recognition image; an ultrasound image that does not contain the measurement target is discarded and does not enter the automatic measurement step described below.
In ultrasound examination, the user typically first collects ultrasound echoes from the biological tissue with the probe; after the ultrasound system processes the echoes to obtain ultrasound images of multiple anatomical structures, the measurement items of the anatomical structures are selected as the measurement items to be measured. The ultrasound images may be sectional images or three-dimensional images. For example, during two-dimensional ultrasound imaging and measurement of biological tissue, the doctor first obtains the section planes of all the tissues and then measures them in a single pass, or displays the acquired section planes in different windows for automatic measurement during the examination. In traditional measurement methods, the user must manually identify the ultrasound image containing the measurement target corresponding to the measurement item to be measured before performing the measurement, for example by manually switching windows or selecting the active window, which adds user operations. According to this embodiment, the image recognition step (S13) automatically recognizes the recognition image containing the measurement target corresponding to the user-specified measurement item, so the user does not need to manually identify that ultrasound image; automatic measurement is then performed on the automatically recognized image, reducing user operations and improving measurement efficiency.
For example, during an obstetric ultrasound examination, the ultrasound images acquired in the image acquisition step relate to a fetus with three measurement items to be measured: biparietal diameter, head circumference, and femur length. The biparietal diameter corresponds to the measurement target of the parietal bones on both sides of the fetal head, the head circumference corresponds to the measurement target extending from the occipital bone of the fetal head to the nasion, and the femur length corresponds to the measurement target of the fetal femur. In the measurement item acquisition step, it is acquired that the user needs to measure the femur length, while the ultrasound images of the fetus include both an image containing the head and an image containing the femur. In the image recognition step, the two images are examined so that the image containing the femur is recognized, and the subsequent automatic measurement step is performed on the recognized image containing the femur.
In one embodiment, the recognition image is a part of an ultrasound image, for example, a partial region of a particular ultrasound image. During an obstetric ultrasound examination, the image acquisition step acquires an ultrasound image of the fetal head, which has two measurement items to be measured: biparietal diameter and head circumference. The biparietal diameter corresponds to the measurement target of the parietal bones on both sides of the fetal head, and the head circumference corresponds to the measurement target extending from the occipital bone to the nasion. When the biparietal diameter is acquired in the measurement item acquisition step, the recognition image recognized in the image recognition step includes only the region of the parietal bones on both sides of the fetal head.
In one embodiment, one or more measurement items to be measured are acquired in the measurement item acquisition step (S12), and multiple recognition images are recognized in the image recognition step (S13), at least two of which contain the measurement target corresponding to the same measurement item to be measured. In the subsequent automatic measurement step, the measurement target corresponding to the measurement item is measured in each recognition image, and the results are averaged; the average is the measurement result for that measurement item. For example, during an obstetric ultrasound examination, multiple ultrasound images of the fetus are acquired in the image acquisition step, and in the measurement item acquisition step it is acquired that the user needs to measure the femur length. In the image recognition step, two ultrasound images that both include the femur are recognized from the multiple ultrasound images; both are recognition images. In the subsequent automatic measurement step, both recognition images are measured and the two results are averaged, the average being the measurement result for the measurement item. Of course, in other implementations, the final measurement result may also be determined by weighting, computing the variance, or other means besides averaging, which is not specifically limited here.
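As a minimal sketch of combining per-image results for one measurement item, plain averaging and the weighting variant mentioned above could look as follows (the function name and the sample femur-length values are hypothetical, not from the application):

```python
def combine_measurements(values, weights=None):
    """Combine per-image measurement results for one measurement item.

    values  -- measurement results from each recognition image
               (e.g. femur lengths in millimetres)
    weights -- optional per-image weights (e.g. recognition confidence);
               when omitted, the plain average is returned.
    """
    if weights is None:
        return sum(values) / len(values)
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)


# Two hypothetical femur-length results from two recognition images
plain = combine_measurements([32.1, 32.5])
weighted = combine_measurements([32.1, 32.5], weights=[0.9, 0.6])
print(plain, weighted)
```

The weighted form reduces the influence of a recognition image whose match against the database was less confident.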
In one embodiment, at least two ultrasound images of the anatomical structure of the biological tissue are acquired in the image acquisition step (S11); the measurement item acquisition step (S12) acquires one or more measurement items of the anatomical structure; in the image recognition step (S13), the ultrasound images are classified according to the measurement items of each of the at least one anatomical structure of the biological tissue, so as to obtain ultrasound images containing the measurement targets corresponding to the measurement items, and the ultrasound image having the measurement target corresponding to the measurement item to be measured is then selected from the classified ultrasound images.
In one embodiment, the step of classifying the ultrasound images includes: comparing the image features of the ultrasound image with the image features of database images in a preset database, where each database image contains a measurement target corresponding to a measurement item of at least one anatomical structure of at least one biological tissue; when the image features of the ultrasound image match the image features of a database image, the ultrasound image contains the measurement target contained in that database image.
For example, during an obstetric ultrasound examination, multiple ultrasound images are acquired in the image acquisition step (S11), including ultrasound images of the fetal head and ultrasound images of the fetal abdomen. The measurement items of the fetal head include the biparietal diameter and head circumference, among others, and the measurement items of the fetal abdomen include the abdominal circumference, transverse abdominal diameter, and anteroposterior abdominal diameter, among others. The measurement item to be measured acquired in the measurement item acquisition step (S12) is the head circumference. In the image recognition step (S13), the multiple ultrasound images acquired in the image acquisition step (S11) are classified according to the measurement items of the fetal head and of the fetal abdomen, obtaining ultrasound images containing the measurement target of the parietal regions on both sides of the fetal head (corresponding to the biparietal diameter), ultrasound images containing the measurement target extending from the occipital bone to the nasion (corresponding to the head circumference), and ultrasound images containing the abdomen (corresponding to the abdominal circumference, transverse abdominal diameter, and anteroposterior abdominal diameter). Specifically, each of the multiple ultrasound images is compared with the image features of the database images in the preset database, where each database image contains at least one of the measurement targets (such as the parietal bones on both sides of the head, the occipital-bone-to-nasion region, or the abdomen); when the image features of an ultrasound image match the image features of a database image, the ultrasound image corresponds to the measurement target contained in that database image. After the multiple ultrasound images acquired in the image acquisition step (S11) have been classified in this way, the ultrasound image containing the occipital-bone-to-nasion region is obtained directly from the classified ultrasound images as the recognition image, based on the head circumference acquired in the measurement item acquisition step (S12). In one embodiment, the display 104 displays the ultrasound images, classified in the image recognition step (S13), that contain measurement targets corresponding to the different measurement items.
In one embodiment, the recognition image recognized in the image recognition step (S13) is displayed on the display 104. In one embodiment, the processor 102 is connected to an input device, the user selects the measurement item to be measured from the display 104 through an instruction entered via the input device, and after performing the image recognition step (S13) on the multiple ultrasound images acquired in the image acquisition step (S11) according to the measurement item selected by the user, the processor 102 displays the recognized recognition image on the display 104. In one embodiment, multiple ultrasound images are displayed on the display 104, and the recognition image is displayed in a manner that distinguishes it from the other ultrasound images, for example, highlighted. For example, during an obstetric ultrasound examination, two ultrasound images of the fetus from the image acquisition step (S11) are displayed on the display 104, one containing the region from the occipital bone of the fetal head to the nasion and the other containing the fetal abdomen. In accordance with the user's instruction to measure the head circumference of the fetus, the processor 102 performs the image recognition step (S13) and then displays the recognized recognition image containing the occipital-bone-to-nasion measurement target on the display 104 in a manner that prominently distinguishes it from the ultrasound image containing the abdomen.
In one embodiment, the image recognition step (S13) includes: comparing the image features of the ultrasound image with the image features of the database images in the preset database that contain the measurement item to be measured, and judging whether they match; when the image features of the ultrasound image match the image features of a database image, the ultrasound image is determined to be a recognition image containing the measurement target corresponding to the measurement item to be measured. The database images included in the preset database are images that have been annotated with respect to the measurement items of the anatomical structures, and they contain the measurement targets corresponding to those measurement items.
In one embodiment, at least two ultrasound images of the anatomical structure of the biological tissue are acquired in the image acquisition step (S11); the measurement item acquisition step (S12) acquires one measurement item of the anatomical structure; in the image recognition step (S13), each ultrasound image acquired in the image acquisition step is compared with the database images in the preset database that contain the measurement target corresponding to that measurement item, and it is judged whether the image features of the ultrasound image being compared match those of the database image. If they match, the ultrasound image being compared is determined to be a recognition image; if not, the ultrasound image is discarded. The recognition image containing the measurement target corresponding to the measurement item to be measured is thereby determined from the multiple ultrasound images.
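The match-or-discard decision described above can be sketched as a nearest-feature comparison against the preset database. This is a simplified, hypothetical illustration: real image features would be high-dimensional descriptors rather than toy 2-D vectors, and `threshold` is an assumed tuning parameter, not a value from the application.

```python
import math

def match_against_database(image_feature, database, threshold):
    """Return the measurement target of the closest database image,
    or None when no database image is close enough (image discarded)."""
    best_label, best_dist = None, float("inf")
    for feature, label in database:
        d = math.dist(image_feature, feature)  # Euclidean feature distance
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None


# Toy 2-D "features": each database entry pairs a feature vector with
# the measurement target its image contains.
db = [([1.0, 0.0], "femur"), ([0.0, 1.0], "head")]
print(match_against_database([0.9, 0.1], db, threshold=0.5))  # close to the femur entry
print(match_against_database([0.5, 0.5], db, threshold=0.5))  # too far from both: discarded
```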
For example, during an abdominal ultrasound examination, two ultrasound images of human organs are acquired in the image acquisition step (S11). Human organs include the liver, the kidneys, and so on; the liver has a liver-size measurement item and the kidney has a kidney-size measurement item. The measurement item to be measured acquired in the measurement item acquisition step (S12) is the liver size. In the image recognition step (S13), each of the two ultrasound images of the human organs acquired in the image acquisition step (S11) is compared with the database images in the preset database that contain the liver (which are also ultrasound images of the liver) to judge whether the image features match; an image that matches is determined to be a recognition image, and an image that does not match is discarded. In one embodiment, two or more ultrasound images of the anatomical structures of the biological tissue are acquired in the image acquisition step (S11), the biological tissue has one or more anatomical structures, and one of the one or more anatomical structures has two or more measurement items; the measurement items to be measured acquired in the measurement item acquisition step (S12) are two or more of the two or more measurement items of that anatomical structure; and the image recognition step (S13) further includes classifying the recognition images to obtain a recognition image containing the measurement target corresponding to each measurement item to be measured. Since the anatomical structure of biological tissue often has multiple measurement items that need to be measured at the same time, the image recognition step (S13), while matching the image features of the ultrasound images against those of the database images, also needs to classify the recognition images according to the measurement items to be measured, so as to obtain the recognition image of the measurement target corresponding to each measurement item to be measured.
For example, during an obstetric ultrasound examination, two or more ultrasound images of the fetus are acquired in the image acquisition step (S11), the fetus having measurement items such as biparietal diameter, head circumference, abdominal circumference, and femur length. The measurement items to be measured acquired in the measurement item acquisition step (S12) are the head circumference and the abdominal circumference. In the image recognition step (S13), each of the two or more ultrasound images of the fetus acquired in the image acquisition step (S11) is compared with the database images in the preset database that contain the head and the abdomen (which are also ultrasound images of the fetus). During the comparison, the recognition images are also classified according to the image features of the database images containing the head and of those containing the abdomen, so as to determine the head recognition image corresponding to the head circumference measurement item and the abdomen recognition image corresponding to the abdominal circumference measurement item.
In one embodiment, a machine learning algorithm is used to learn, from the database images in the preset database, image features that can distinguish the different measurement items; at the same time, a machine learning method is used to extract the image features of the ultrasound images acquired in the image acquisition step (S11). The learned image features of the database images are matched against the image features of the ultrasound images, and an ultrasound image that matches the learned image features is obtained as a recognition image. When there are multiple measurement items to be measured, the ultrasound images are classified according to the learned image features that distinguish the different measurement items, so that the ultrasound images are classified according to the measurement items of the anatomical structures and the recognition image corresponding to each measurement item to be measured is recognized. Feature extraction methods for the machine learning algorithm include, but are not limited to, Principal Components Analysis (PCA), Linear Discriminant Analysis (LDA), Haar feature extraction, and texture feature extraction. The image features of the ultrasound images extracted by the machine learning algorithm are matched against the image features in the preset database to classify the ultrasound images; the classifiers used include, but are not limited to, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), random forest, and neural network classifiers.
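As an illustration of the KNN classifier named above, a toy majority-vote implementation over extracted feature vectors might look like this (the 2-D features and their labels are made-up; real features would come from PCA, LDA, Haar, or texture extraction as described):

```python
import math
from collections import Counter

def knn_classify(feature, labelled_features, k=3):
    """Classify a feature vector by majority vote among its k nearest
    labelled neighbours -- the KNN discriminator mentioned in the text."""
    nearest = sorted(labelled_features,
                     key=lambda item: math.dist(feature, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]


# Hypothetical 2-D features extracted from already-labelled database images
train = [
    ([0.10, 0.90], "head"), ([0.20, 0.80], "head"), ([0.15, 0.95], "head"),
    ([0.90, 0.10], "femur"), ([0.80, 0.20], "femur"), ([0.95, 0.15], "femur"),
]
print(knn_classify([0.85, 0.10], train))
print(knn_classify([0.12, 0.88], train))
```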
In one embodiment, a deep learning method is used: stacked convolutional layers and fully connected layers are constructed to learn the image features of the database images in the preset database, learning image features that can distinguish the different measurement items, and the recognition images among the ultrasound images are recognized according to these image features. When there are at least two measurement items to be measured, the ultrasound images are classified according to the learned image features, and the ultrasound images having the features that distinguish the measurement items are recognized; the recognized images are the recognition images. Deep learning methods include, but are not limited to, the VGG network, the ResNet residual network, the Inception module, and the AlexNet deep network.
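To make "stacked convolutional layers and a fully connected layer" concrete, here is a toy pure-Python forward pass through one convolution, a ReLU, and one fully connected neuron. All weights and the tiny input are made-up; a real network such as VGG or ResNet would be trained on the database images rather than hand-written like this.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution: a convolutional layer's building block."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def relu(feature_map):
    """Element-wise rectified linear activation."""
    return [[max(0, v) for v in row] for row in feature_map]

def fully_connected(feature_map, weights, bias):
    """Flatten the feature map and take a weighted sum -- one FC neuron."""
    flat = [v for row in feature_map for v in row]
    return sum(v * w for v, w in zip(flat, weights)) + bias


image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # made-up 3x3 "image"
kernel = [[1, 0], [0, 1]]                    # made-up 2x2 kernel
fmap = relu(conv2d(image, kernel))           # -> 2x2 feature map
score = fully_connected(fmap, weights=[0.1, 0.1, 0.1, 0.1], bias=0.0)
print(fmap, score)
```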
Step S14: an automatic measurement step of measuring the measurement target in the recognition image. In the automatic measurement step (S14), the measurement target in the recognition image is measured to obtain the measurement result of the anatomical structure. In ultrasound examination, different measurement items are measured in different ways. For example, in obstetric ultrasound, the head circumference is usually measured with an ellipse enclosing the bright ring of the fetal skull, the abdominal circumference is measured with an ellipse enclosing the fetal abdomen, and the femur length is measured with a line segment between the two ends of the femur. In the automatic measurement step (S14), a target fitting method is used for automatic measurement.
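When the fitted target is an ellipse, as for head or abdominal circumference, the circumference value can be computed from the fitted semi-axes. The application does not specify a perimeter formula, so the sketch below uses Ramanujan's well-known approximation, and the 50/60/40 mm semi-axes are hypothetical.

```python
import math

def ellipse_circumference(a, b):
    """Approximate perimeter of an ellipse with semi-axes a and b
    (Ramanujan's first approximation)."""
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))


# Sanity check: with equal semi-axes the ellipse is a circle of radius 50,
# so the result should equal 2*pi*50.
print(ellipse_circumference(50.0, 50.0))
print(ellipse_circumference(60.0, 40.0))
```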
Referring to FIG. 3, a flowchart of the automatic measurement step according to an embodiment of the present application is shown. As shown in FIG. 3, the automatic measurement step (S14) includes:
Step S141: extracting the contour of the measurement target corresponding to the measurement item to be measured by means of an edge detection algorithm. Edge detection algorithms include, but are not limited to, the Sobel operator and the Canny operator, which detect the contour of the measurement target based on the pixel values, gray-level weighted values, and the like of the ultrasound image.
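A toy pure-Python version of the Sobel gradient magnitude named in this step (a real implementation would operate on the full gray-level ultrasound image, typically via an image-processing library, and would be followed by thresholding and contour tracing):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude of a gray-level image via the Sobel operator;
    large values mark candidate contour (edge) pixels."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(SOBEL_X[di][dj] * img[i - 1 + di][j - 1 + dj]
                     for di in range(3) for dj in range(3))
            gy = sum(SOBEL_Y[di][dj] * img[i - 1 + di][j - 1 + dj]
                     for di in range(3) for dj in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out


# A vertical step edge between dark (0) and bright (255) columns
img = [[0, 0, 255, 255] for _ in range(4)]
print(sobel_magnitude(img))
```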
Step S142: fitting the contour of the measurement target corresponding to the measurement item to be measured, so as to obtain a fitting equation corresponding to the measurement item. Detection algorithms for straight lines, circles, ellipses, and the like are used to fit the contour of the measurement target to obtain the fitting equation; fitting algorithms include, but are not limited to, least squares estimation, the Hough transform, the Radon transform, and RANSAC.
Step S143: determining the measurement result from the fitting equation. The measurement result is determined according to the fitting equation obtained by the fitting algorithm in the preceding step. If the fitting equation obtained is the equation of a circle or an ellipse, it is itself the result of the automatic measurement. If the fitting equation obtained is a straight line, the endpoints can be further located by combining the gray-level changes at the endpoints of the measurement target, thereby completing the automatic measurement. Taking the measurement of femur length in obstetric ultrasound as an example, the femur appears as a bright linear structure; after the straight line on which the femur lies is detected, the two points with the largest gray-level gradient along the line can be detected as the two endpoints of the femur.
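For a line-type target such as the femur, steps S142 and S143 can be sketched as a least-squares line fit followed by taking the extreme points along the fitted direction. Note the endpoint rule here (extreme projections of the contour points) is a simplified stand-in for locating endpoints by the gray-level gradient, and the sample contour points are made-up.

```python
import math

def fit_line(points):
    """Least-squares fit of y = m*x + c through contour points (step S142)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

def length_along_line(points):
    """Distance between the two extreme points projected onto the fitted
    line (step S143 for a straight-line fitting equation)."""
    m, c = fit_line(points)
    norm = math.sqrt(1 + m * m)
    # scalar position of each point along the unit direction (1, m)/|(1, m)|
    ts = sorted((x + m * (y - c)) / norm for x, y in points)
    return ts[-1] - ts[0]


# Made-up collinear contour points on y = 0.5*x + 1
pts = [(0.0, 1.0), (2.0, 2.0), (4.0, 3.0)]
print(fit_line(pts))
print(length_along_line(pts))
```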
In one embodiment, the measurement result obtained in the automatic measurement step (S14) is displayed on the display 104. In one embodiment, the processor 102 performs the automatic measurement step (S14) on the recognition image recognized in the image recognition step (S13) and then displays the measurement result on the recognition image shown on the display 104. For example, during an obstetric ultrasound examination, after the processor 102 performs the image recognition step (S13) according to the user's selection of head circumference, the recognized recognition image containing the occipital-bone-to-nasion measurement target corresponding to the head circumference is displayed on the display 104 in a manner that prominently distinguishes it from the ultrasound image containing the measurement target of the parietal regions on both sides of the fetal head corresponding to the biparietal diameter; the specific head-circumference value obtained in the subsequent automatic measurement step (S14) is then displayed in the upper-right corner of the recognition image recognized in the image recognition step (S13) for the head circumference measurement item.
Referring to FIG. 4, a method by which the processor automatically measures an anatomical structure according to an embodiment of the present application is introduced by way of example for the case in which the user has specified the measurement item to be measured. In this embodiment, a method is provided for automatically measuring an anatomical structure when the user has specified the measurement item to be measured. According to the measurement item specified by the user, the recognition image containing the measurement target corresponding to that item is automatically recognized from the acquired ultrasound images of the anatomical structure, and the measurement target in the recognition image is then measured directly. Throughout the process, the user only needs to specify the measurement item to be measured; the user neither needs to identify the ultrasound image containing the corresponding measurement target, nor needs to measure the measurement target in the ultrasound image manually. This simplifies the operation and improves the efficiency of measuring anatomical structures. The difference from the embodiment shown in FIG. 2 is that, in this embodiment, a positioning step is added after the image recognition step to eliminate the influence of structures surrounding the measurement target on the measurement result in the measurement step. The automatic measurement method for an anatomical structure of this embodiment is described below by way of example with reference to FIG. 4.
Referring to FIG. 4, a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application is shown, in which the image acquisition step (S21), the measurement item acquisition step (S22), and the image recognition step (S23) are consistent with the image acquisition step (S11), the measurement item acquisition step (S12), and the image recognition step (S13) shown in FIG. 2. The difference is that a positioning step (S24) is added after the image recognition step (S23), and the measurement target is then measured in the automatic measurement step (S25). The positioning step (S24) and the measurement of the measurement target in the automatic measurement step shown in FIG. 4 are described in detail below.
Step S24: positioning step — the measurement target is located in the recognition image. In the image recognition step (S23) above, only the recognition image containing the measurement target corresponding to the measurement item to be measured has been obtained; the position of that measurement target within the image, as needed for the actual measurement, is not yet known. Measuring the measurement target directly in the recognition image would require detection over the entire image, and the resulting edge detection would be easily disturbed by structures surrounding the measurement target. Therefore, in the positioning step (S24), the measurement target is first located, and target fitting is then performed on it in the automatic measurement step; this reduces the influence of structures surrounding the measurement item and makes the measurement result more accurate.
In one embodiment, in the positioning step (S24), the image features of the recognition image are compared and analyzed against the image features of database images in a preset database that contain the measurement target corresponding to the measurement item to be measured, so as to locate the measurement target in the recognition image, wherein each database image contains a calibration result corresponding to the measurement target, and the measurement target is the region consistent with the calibration result.
In one embodiment, the calibration result includes an ROI box of the measurement target.
In one embodiment, the calibration result includes an ROI box of the measurement target corresponding to the measurement item to be measured. The positioning step includes: extracting image features within a sliding window using a sliding-window-based method; comparing the image features within the sliding window with the image features of the calibration result; and determining whether the image features within the sliding window match the image features of the calibration result. When the image features within the sliding window match the image features of the calibration result, the current sliding window is determined to be the measurement target.
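A minimal sketch of the sliding-window localization just described follows (not part of the original disclosure). The learned ROI features are stood in for by simple mean/standard-deviation statistics, and the window size, stride, and synthetic image are assumptions for illustration only:

```python
import numpy as np

def window_features(patch):
    # Stand-in for the learned features of the calibrated ROI:
    # mean and standard deviation of the gray levels in the window.
    return np.array([patch.mean(), patch.std()])

def locate_target(image, roi_features, win=16, stride=4):
    """Traverse the image with a sliding window and return the window
    (y, x, h, w) whose features best match those of the calibrated ROI."""
    best, best_dist = None, np.inf
    h, w = image.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            f = window_features(image[y:y + win, x:x + win])
            d = np.linalg.norm(f - roi_features)
            if d < best_dist:
                best, best_dist = (y, x, win, win), d
    return best

# Synthetic image: a bright 16x16 target at (32, 48) on a dark background.
img = np.zeros((96, 96))
img[32:48, 48:64] = 180.0
box = locate_target(img, roi_features=np.array([180.0, 0.0]))
```

In practice the comparison would use the learned discriminative features and a match threshold rather than a raw nearest-feature search, but the traversal structure is the same.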
In one embodiment, in the positioning step (S24), a machine learning algorithm is used to learn the image features within the ROI boxes of the calibration results of the database images in the preset database, where the learned image features are those of the calibration results that distinguish the ROI region of the measurement target from non-ROI regions. Meanwhile, a machine learning algorithm is used to extract the image features within each sliding window obtained while traversing, with a sliding window, the recognition image recognized in the image recognition step (S23). Feature extraction methods for machine learning algorithms include, but are not limited to, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Haar feature extraction, and texture feature extraction.
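Of the feature-extraction methods listed above, PCA admits a compact sketch (illustrative only, not part of the disclosure): flattened ROI patches are projected onto the directions of largest variance, and those low-dimensional coordinates serve as the compared features. The sample data below are synthetic:

```python
import numpy as np

def pca_fit(samples, k):
    """Learn a k-dimensional PCA basis from row-vector samples
    (e.g. flattened ROI patches from the database images)."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # Right singular vectors are the principal axes, ordered by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def pca_project(x, mean, basis):
    # Coordinates of x in the learned principal subspace.
    return (x - mean) @ basis.T

rng = np.random.default_rng(0)
# 50 samples in 8-D that vary essentially along a single direction.
direction = np.ones(8)
samples = rng.normal(size=(50, 1)) * direction + 0.01 * rng.normal(size=(50, 8))
mean, basis = pca_fit(samples, k=1)
coords = pca_project(samples, mean, basis)
```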
In one embodiment, the calibration result includes an ROI box of the measurement target corresponding to the measurement item to be measured, and the positioning step includes: performing bounding-box regression on the recognition image according to the image features of the calibration result in the database images containing the measurement target, so as to obtain a bounding-box region, the bounding-box region being the measurement target.
In one embodiment, a deep learning method is used: stacked convolutional layers and fully connected layers are constructed to learn, with parameter regression, the image features within the ROI boxes of the calibration results of the database images in the preset database that contain the measurement target corresponding to the measurement item to be measured; the learned image features are those of the calibration results that distinguish the ROI region of the measurement target from non-ROI regions. Based on the learned image features, a neural network algorithm directly regresses the bounding-box region of interest in the recognition image, and this bounding-box region is the measurement target to be measured. Applicable neural network algorithms include, but are not limited to, detection algorithms such as R-CNN, Fast R-CNN, Faster R-CNN, SSD, and YOLO.
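The R-CNN family of detectors named above regresses a box as offsets relative to a reference anchor rather than as absolute coordinates. The sketch below (illustrative, not from the disclosure; the network itself is omitted) shows the standard (tx, ty, tw, th) parameterization used as the regression target and its inversion at inference time; the box values are hypothetical:

```python
import math

def encode(anchor, gt):
    """R-CNN-style box parameterization: map a ground-truth box onto
    offsets relative to an anchor; both boxes are (cx, cy, w, h)."""
    ax, ay, aw, ah = anchor
    gx, gy, gw, gh = gt
    return ((gx - ax) / aw, (gy - ay) / ah,
            math.log(gw / aw), math.log(gh / ah))

def decode(anchor, t):
    """Apply predicted offsets to an anchor to recover the box."""
    ax, ay, aw, ah = anchor
    tx, ty, tw, th = t
    return (ax + tx * aw, ay + ty * ah,
            aw * math.exp(tw), ah * math.exp(th))

anchor = (50.0, 50.0, 32.0, 32.0)       # reference box
gt = (58.0, 44.0, 40.0, 24.0)           # calibrated ROI box
t = encode(anchor, gt)                   # training target
recovered = decode(anchor, t)            # inference-time inversion
```

The log-space width/height terms make the targets scale-invariant, which is part of why the listed detectors train stably across object sizes.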
In one embodiment, the calibration result includes a mask that precisely segments the measurement target, and the positioning step includes: according to the image features of the calibration result in the database images containing the measurement target, using a semantic segmentation algorithm to identify, in the recognition image, the segmentation mask of the measurement target consistent with the calibration result.
In one embodiment, a deep learning method is used to perform end-to-end semantic segmentation of the recognition image. Specifically, stacked convolutional layers are constructed, or deconvolution layers are employed, to sample the masks in the preset database that precisely segment the measurement targets corresponding to the measurement items; according to the sampling result, the segmentation mask of the measurement target corresponding to the measurement item to be measured is obtained directly from the recognition image.
After the measurement target in the recognition image has been obtained in the positioning step (S24), the measurement target is measured in the automatic measurement step (S25), completing the automatic measurement of the anatomical structure. Measuring the measurement target in the automatic measurement step (S25) includes: performing target fitting on the measurement target to obtain a fitting equation of the measurement target; and determining the measurement result of the measurement target from the fitting equation.
In one embodiment, the calibration result in the positioning step (S24) includes an ROI box of the measurement target corresponding to the measurement item to be measured. While measuring the measurement target (S25), target fitting is performed on the measurement target within the bounding-box region located in the recognition image, yielding a fitted equation of a straight line, circle, ellipse, or the like, and the measurement result is obtained by computation on this equation. Performing target fitting within the bounding-box region of the measurement target reduces the interference of structures outside the bounding-box region with the fitting and improves the accuracy of the measurement result.
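As a concrete illustration of fitting within the located region (a sketch, not the disclosed implementation), edge points collected inside the ROI box can be fitted to a circle by linear least squares using the algebraic (Kåsa) formulation; the edge points below are synthetic:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit: solve
    x^2 + y^2 + a*x + b*y + c = 0 for a, b, c, then recover
    the center (cx, cy) and radius r."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs ** 2 + ys ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, r

# Edge points sampled (only inside the ROI box) from a circle of
# radius 30 centered at (100, 80), with slight perturbation.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
xs = 100 + 30 * np.cos(theta) + 0.1 * np.sin(7 * theta)
ys = 80 + 30 * np.sin(theta) + 0.1 * np.cos(5 * theta)
cx, cy, r = fit_circle(xs, ys)
diameter = 2 * r
```

Restricting the input points to the ROI box is precisely what keeps neighboring edges from corrupting the fitted parameters.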
In one embodiment, the calibration result in the positioning step (S24) includes a mask that precisely segments the measurement target corresponding to the measurement item. While measuring the measurement target (S25), target fitting is performed on the edge of the segmentation mask of the measurement target in the recognition image that is consistent with the calibration result, fitting it to an equation of a straight line, circle, ellipse, or the like, and the measurement result is obtained by computation on this equation. Fitting the edge of the segmentation mask consistent with the calibration result reduces the fitting error of the target fitting and improves the accuracy of the measurement result.
This embodiment provides a method for automatically measuring an anatomical structure when the user has specified the measurement item to be measured. According to the measurement item specified by the user, the recognition image containing the measurement target corresponding to that item is automatically recognized from the acquired ultrasound images of the anatomical structure, and the measurement target in the recognition image is then measured directly. Throughout the process, the user only needs to specify the measurement item to be measured; the user neither needs to identify the ultrasound image containing the corresponding measurement target, nor needs to measure the measurement target in the ultrasound image manually. This simplifies the operation and improves the efficiency of measuring anatomical structures.
Referring to FIG. 5, a method by which an anatomical structure is automatically measured according to an embodiment of the present application is introduced by way of example for the case in which the user has not specified a measurement item to be measured. This embodiment provides a method for measuring all of the measurement items possessed by an anatomical structure without the user specifying any of them. No user designation is needed: after the ultrasound images of the anatomical structure are obtained, the ultrasound images are directly and automatically recognized so as to identify the recognition image containing the measurement targets, and the measurement targets in the recognition image are then measured automatically, the measurement targets corresponding to the measurement items possessed by the anatomical structure. Throughout the process, the user does not need to perform any operation relating to measurement items, which further simplifies the operation and improves the efficiency of measuring anatomical structures.
Referring to FIG. 5, a schematic flowchart of an automatic measurement method for an anatomical structure according to an embodiment of the present application is shown.
The automatic measurement method for an anatomical structure provided in this embodiment is used to automatically measure the anatomical structure of the biological tissue under examination after the ultrasonic echoes have been processed, as shown in FIG. 5. The method includes:
Step S31: image acquisition step — at least two ultrasound images are acquired, at least one of which relates to at least one anatomical structure of the biological tissue. The automatic measurement method for an anatomical structure provided in this embodiment is used to identify, from at least two ultrasound images relating to the anatomical structure of the biological tissue, the ultrasound image containing the measurement targets, and to measure the measurement targets in that ultrasound image, the measurement targets corresponding to the measurement items possessed by the anatomical structure.
In ultrasound examination, ultrasonic echoes of the biological tissue are typically first collected via a probe and processed to obtain multiple ultrasound images of the anatomical structure, after which measurement items of the anatomical structure are selected as the items to be measured. In traditional measurement methods, measurement is typically based on a user-specified item: the user must manually identify the ultrasound image containing the measurement target corresponding to that item before measuring, which adds an operation requiring the user to specify the measurement item. Since the measurement items a user wants to measure are often fixed, and all measurement items of the anatomical structure need to be measured, in this embodiment all measurement items of the anatomical structure are measured without user designation, reducing the user's operating steps and simplifying the measurement of the anatomical structure.
Step S32: image recognition step — a recognition image containing the measurement targets corresponding to the measurement items is recognized from the ultrasound images, the recognition image being at least one of the ultrasound images. Since this embodiment measures all measurement items of the anatomical structure, every measurement target in the recognition image recognized from the ultrasound images needs to be measured; that is, all measurement items of the anatomical structure are items to be measured.
In one embodiment, the at least one anatomical structure of the biological tissue has at least one characteristic measurement item, and the image recognition step (S32) includes: comparing the image features of each ultrasound image with the image features of database images in a preset database that contain the measurement target corresponding to any one of the at least one characteristic measurement item, to determine whether the image features of the ultrasound image match those of the database image. When the image features of an ultrasound image match the image features of a database image, that ultrasound image is determined to be the recognition image; the measurement item corresponding to the measurement target contained in the recognition image is the item to be measured, and the measurement target corresponds to the measurement target contained in the database image whose image features match those of the recognition image.
For example, during abdominal ultrasound examination, two ultrasound images are acquired in the image acquisition step (S31): one is an ultrasound image of the liver, and the other is an image produced by the physician's misoperation, unrelated to the anatomical structure of any biological tissue. The liver has the characteristic measurement item of liver size. In the image recognition step (S32), each of the two ultrasound images acquired in the image acquisition step (S31) is compared with the database image (also an ultrasound image of the liver) in the preset database that contains the liver-size measurement target, to determine whether the image features of the liver ultrasound image match those of the database image; if they match, the image is determined to be the recognition image, and if not, it is discarded. The liver contained in the matching recognition image is the measurement target, and its corresponding characteristic measurement item, liver size, is the item to be measured.
In one embodiment, any one or two of the at least one anatomical structure of the biological tissue has at least two characteristic measurement items, and the recognition image contains at least two measurement targets respectively corresponding to at least two items to be measured. The image recognition step (S32) includes: classifying the at least two measurement targets in the recognition image so that each of them corresponds to one of the at least two characteristic measurement items. Since an anatomical structure often has multiple measurement items, and multiple measurement items often appear in the same ultrasound image, after the recognition image containing multiple measurement targets is recognized in the image recognition step (S32), the measurement targets contained in it still need to be classified; that is, the measurement targets in the recognition image are classified according to the at least two measurement items of the anatomical structure, yielding measurement targets in one-to-one correspondence with those measurement items.
For example, in obstetric ultrasound examination, two or more ultrasound images of the fetus are acquired in the image acquisition step (S31). The fetus has measurement items such as biparietal diameter, head circumference, abdominal circumference, and femur length. Each of the fetal ultrasound images acquired in the image acquisition step (S31) is compared with the database images (also fetal ultrasound images) in the preset database that contain the measurement target corresponding to at least one of biparietal diameter, head circumference, abdominal circumference, and femur length, to determine the recognition image. Since the fetus has these four measurement items, and the measurement targets corresponding to biparietal diameter and head circumference (the parietal regions on both sides of the fetal head for biparietal diameter, and the region from the occipital bone of the fetal head to the nasal root of the forehead for head circumference) often appear in the same ultrasound image, after the recognition image containing the measurement targets for both biparietal diameter and head circumference has been determined, the measurement targets corresponding to these two items must be further distinguished in the recognition image, to determine which measurement target corresponds to head circumference and which to biparietal diameter.
In one embodiment, a machine learning algorithm is used to learn image features of the database images in the preset database that can distinguish different measurement items; meanwhile, a machine learning method is used to extract the image features of the ultrasound images acquired in the image acquisition step (S31). The learned image features of the database images are matched against the image features of the ultrasound images, and the ultrasound image matching the learned image features is obtained as the recognition image. When there are multiple items to be measured, the measurement targets are classified according to the learned image features that can distinguish different measurement items, so that the measurement targets in the recognition image are identified according to the measurement items of the anatomical structure. Feature extraction methods for machine learning algorithms include, but are not limited to, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Haar feature extraction, and texture feature extraction. The measurement targets are classified according to the learned image features that can distinguish different measurement targets; applicable classifiers include, but are not limited to, K-nearest neighbor (KNN), support vector machine (SVM), random forest, and neural network discriminators.
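Of the classifiers listed, KNN admits the shortest sketch (illustrative only, not the disclosed implementation): each candidate target's feature vector is assigned the majority label of its nearest training samples. The 2-D features and the "HC"/"BPD" labels below are hypothetical stand-ins for learned features of head-circumference and biparietal-diameter targets:

```python
import numpy as np
from collections import Counter

def knn_predict(train_x, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training samples under Euclidean distance."""
    dists = np.linalg.norm(train_x - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D features: class "HC" clusters near (0, 0), "BPD" near (5, 5).
train_x = np.array([[0.0, 0.0], [0.5, 0.2], [-0.3, 0.4],
                    [5.0, 5.0], [4.8, 5.3], [5.2, 4.7]])
train_y = ["HC", "HC", "HC", "BPD", "BPD", "BPD"]
label = knn_predict(train_x, train_y, np.array([4.9, 5.1]))
```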
In one embodiment, a deep learning method is used: stacked convolutional layers and fully connected layers are constructed to learn the image features of the database images in the preset database, learning image features that can distinguish different measurement targets, and the recognition image is identified among the ultrasound images according to these features. When the anatomical structure has at least two items to be measured, the measurement targets in the recognition image are classified according to the learned image features. Applicable deep learning methods include, but are not limited to, the VGG network, the ResNet residual network, the Inception module, and the AlexNet deep network.
Step S33: automatic measurement step — the measurement targets in the recognition image are measured. In the automatic measurement step (S33), the measurement targets in the recognition image are measured to obtain the measurement results of the anatomical structure. Given the recognition image containing the measurement targets recognized in the image recognition step (S32), and with the measurement targets classified when the recognition image contains multiple of them, automatic measurement is performed directly by edge detection on the measurement targets in the recognition image followed by target fitting to obtain the target fitting equations.
In one embodiment, the automatic measurement step (S33) includes:
Extracting the contour corresponding to the measurement target by an edge detection algorithm. Edge detection algorithms include, but are not limited to, detecting the contour of the measurement target with the Sobel operator, the Canny operator, or the like, based on the pixel values and gray-level weighted values of the ultrasound image.
Fitting the contour corresponding to the measurement target to obtain a fitting equation corresponding to the measurement target. Detection algorithms for straight lines, circles, ellipses, and the like are used to fit the contour of the measurement item to be measured to obtain the fitting equation; fitting algorithms include, but are not limited to, least-squares estimation, the Hough transform, the Radon transform, and RANSAC.
Determining the measurement result of the measurement target from the fitting equation. The measurement result is determined from the fitting equation obtained by the fitting algorithm in the preceding step. If the fitting equation obtained is that of a circle or an ellipse, it directly gives the result of the automatic measurement. If the fitting equation obtained is that of a straight line, the endpoints can be further located in conjunction with the gray-level changes at the ends of the measurement target, completing the automatic measurement. Taking the measurement of femur length in obstetric ultrasound examination as an example, the femur appears as a bright linear structure; after the straight line on which the femur lies has been detected, the two points on that line with the largest gray-level gradients can be detected as the two endpoints of the femur.
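When the fitting yields an ellipse, one way a circumference-type result (e.g., head circumference) can be computed from the fitted semi-axes is Ramanujan's perimeter approximation — a sketch added for illustration, not the disclosed formula; the semi-axis values are hypothetical:

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's approximation to the perimeter of an ellipse with
    semi-axes a and b; for a circle (a == b) it reduces to 2*pi*r."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

# Fitted semi-axes in millimeters (illustrative values only).
hc = ellipse_circumference(a=60.0, b=45.0)
circle = ellipse_circumference(a=30.0, b=30.0)
```

For a circle fit, the diameter follows directly from the fitted radius; for a line fit, the result is the distance between the two located endpoints.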
Referring to FIG. 6, a method by which the processor of this embodiment automatically measures an anatomical structure according to an embodiment of the present application is introduced by way of example for the case in which the user has not specified a measurement item to be measured. This embodiment provides a method for measuring all of the measurement items possessed by an anatomical structure without the user specifying any of them. No user designation is needed: after the ultrasound images of the anatomical structure are obtained, the ultrasound images are directly and automatically recognized so as to identify the recognition image containing the measurement targets, and the measurement targets in the recognition image are then measured automatically, the measurement targets corresponding to the measurement items possessed by the anatomical structure. Throughout the process, the user does not need to perform any operation relating to measurement items, which further simplifies the operation and improves the efficiency of measuring anatomical structures. The difference from the embodiment shown in FIG. 5 is that, in this embodiment, a positioning step is added after the image recognition step to eliminate the influence of structures surrounding the measurement target on the measurement result in the measurement step. The automatic measurement method for an anatomical structure of this embodiment is described below by way of example with reference to FIG. 6.
Referring to FIG. 6, a schematic flowchart of an automatic measurement method of an anatomical structure according to an embodiment of the present application is shown. The image acquisition step (S41) and the image recognition step (S42) are consistent with the image acquisition step (S31) and the image recognition step (S32) shown in FIG. 5; the difference is that a positioning step (S43) is added after the image recognition step (S42), followed by an automatic measurement step (S44). The positioning step (S43) and the automatic measurement step (S44) shown in FIG. 6 are described in detail below.
Step S43: a positioning step of positioning the measurement target in the recognition image.
In the image recognition step (S42), only the recognition image containing the measurement target is obtained; the position of the measurement target within that image is not yet known. Directly measuring the measurement item in the recognition image would require detection over the entire image, and the resulting edge detection is easily affected by structures surrounding the measurement item. Therefore, in the positioning step (S43), the measurement target is first located in the recognition image, and target fitting is then performed on it in the automatic measurement step; this reduces the influence of surrounding structures and makes the measurement result more accurate.
In one embodiment, in the positioning step (S43), the image features of the recognition image are compared and analyzed against the image features of database images in a preset database that correspond to the measurement item to be measured, so as to locate the measurement target in the recognition image, where each database image contains a calibration result corresponding to the measurement target, and the measurement target is the region consistent with the calibration result.
In one embodiment, the calibration result includes an ROI box of the measurement target.
In one embodiment, the calibration result includes an ROI box of the measurement target corresponding to the measurement item to be measured. The positioning step includes: extracting image features within a sliding window using a sliding-window-based method, comparing the image features within the sliding window with the image features of the calibration result, and judging whether they match; when the image features within the sliding window match the image features of the calibration result, the current sliding window is determined to be the measurement target.
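The sliding-window matching described above can be sketched as follows. This is an illustrative fragment under assumed names (`feature_fn` stands in for whatever learned feature extractor is used, and the distance threshold is a stand-in for the learned match criterion), not the patent's implementation.

```python
import numpy as np

def locate_by_sliding_window(image, feature_fn, ref_feature, win, step, thresh):
    """Traverse the recognition image with a sliding window, extract a feature
    vector from each window, and return the window (x, y, w, h) that best
    matches the reference feature learned from the calibrated ROI boxes,
    or None if no window clears the match threshold."""
    h, w = image.shape
    wh, ww = win
    best, best_dist = None, float("inf")
    for y in range(0, h - wh + 1, step):
        for x in range(0, w - ww + 1, step):
            feat = feature_fn(image[y:y + wh, x:x + ww])
            dist = float(np.linalg.norm(feat - ref_feature))
            if dist < best_dist:
                best, best_dist = (x, y, ww, wh), dist
    return best if best_dist < thresh else None

# Toy usage: a mean-intensity "feature" finds a bright 10x10 target.
img = np.zeros((40, 40))
img[10:20, 10:20] = 200.0
box = locate_by_sliding_window(img, lambda p: np.array([p.mean()]),
                               np.array([200.0]), win=(10, 10), step=5, thresh=50.0)
print(box)  # (10, 10, 10, 10)
```

A real implementation would use the learned PCA/LDA/Haar/texture features mentioned below in place of the toy mean-intensity feature.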
In one embodiment, in the positioning step (S43), a machine learning algorithm is used to learn the image features within the ROI boxes of the calibration results of the database images in the preset database, where the learned image features are those of the calibration result that distinguish the ROI region of the measurement target from non-ROI regions. Meanwhile, a machine learning algorithm is used to extract the image features within each sliding window obtained by traversing, with a sliding window, the recognition image identified in the image recognition step (S42). Methods by which machine learning algorithms extract features include, but are not limited to, the Principal Component Analysis (PCA) method, the Linear Discriminant Analysis (LDA) method, the Haar feature extraction method, and texture feature extraction methods.
In one embodiment, the calibration result includes an ROI box of the measurement target corresponding to the measurement item to be measured, and the positioning step includes: performing bounding-box regression on the recognition image according to the image features of the calibration results in the database images containing the measurement target corresponding to the measurement item to be measured, so as to obtain a bounding-box region, the bounding-box region being the measurement target.
In one embodiment, a deep learning method is used: stacked convolutional layers and fully connected layers are constructed to learn the image features within the ROI boxes of the calibration results of the database images in the preset database corresponding to the measurement item to be measured, and to perform parameter regression; the learned image features are those of the calibration result that distinguish the ROI region of the measurement target from non-ROI regions. Based on the learned image features, a neural network algorithm directly regresses the bounding-box region of interest in the recognition image, and this bounding-box region is the measurement target to be measured. Applicable neural network algorithms include, but are not limited to, object detection algorithms such as R-CNN, Fast R-CNN, Faster R-CNN, SSD, and YOLO.
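As a hedged illustration of what "regressing a bounding-box region" means, the following shows the box parameterization commonly used by the R-CNN-family detectors listed above; the convolutional and fully connected layers that predict the offsets are omitted, so this is only the regression target, not a detector.

```python
import numpy as np

def decode_box(anchor, deltas):
    """Decode regressed offsets (tx, ty, tw, th) relative to an anchor into
    an ROI box (x1, y1, x2, y2), using the box parameterization shared by
    R-CNN-style detectors: the center is shifted proportionally to the anchor
    size, and width/height are rescaled exponentially."""
    ax, ay, aw, ah = anchor                  # anchor center and size
    tx, ty, tw, th = deltas
    cx, cy = ax + tx * aw, ay + ty * ah      # shift the center
    w, h = aw * np.exp(tw), ah * np.exp(th)  # rescale width / height
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

print(decode_box((50.0, 50.0, 20.0, 20.0), (0.0, 0.0, 0.0, 0.0)))
# zero deltas reproduce the anchor box: (40.0, 40.0, 60.0, 60.0)
```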
In one embodiment, the calibration result includes a mask that precisely segments the measurement target corresponding to the measurement item, and the positioning step includes: according to the image features of the calibration results in the database images containing the measurement target corresponding to the measurement item to be measured, using a semantic segmentation algorithm to identify, in the recognition image, the segmentation mask of the measurement target that is consistent with the calibration result.
In one embodiment, an end-to-end semantic segmentation network based on deep learning is used to segment the recognition image. Specifically, stacked convolutional layers are constructed, or deconvolution (transposed convolution) layers are used for upsampling, to learn from the masks that precisely segment the calibration regions corresponding to the measurement target in the preset database; the segmentation mask of the measurement target corresponding to the measurement item to be measured is then obtained directly from the recognition image. Applicable semantic segmentation networks include, but are not limited to, Fully Convolutional Networks (FCN) and U-Net convolutional networks.
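As a minimal sketch of the final step of such a segmentation pipeline (the trained FCN/U-Net itself is omitted and assumed to exist), the per-class score maps the network outputs can be reduced to a binary mask for one measurement target as follows:

```python
import numpy as np

def scores_to_mask(score_maps, target_class):
    """Turn the per-class score maps output by a semantic segmentation
    network (FCN / U-Net style), shaped (num_classes, H, W), into a binary
    mask for one measurement target: each pixel is assigned its
    highest-scoring class, then compared against the target class."""
    labels = np.argmax(score_maps, axis=0)          # (H, W) class map
    return (labels == target_class).astype(np.uint8)

# Toy usage: 2 classes over a 4x4 image, class 1 scoring higher in one corner.
scores = np.zeros((2, 4, 4))
scores[1, :2, :2] = 1.0
print(scores_to_mask(scores, target_class=1))
```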
In one embodiment, the recognition image obtained in the image recognition step (S42) includes measurement targets respectively corresponding to at least two measurement items to be measured, and the positioning step (S43) further includes: classifying the measurement targets so that the measurement targets correspond one-to-one to the at least two measurement items to be measured. Since the recognition image includes measurement targets corresponding to at least two measurement items, the positioning process must not only locate the position of each measurement target in the recognition image, but also distinguish the measurement target category to which each position belongs, so that after the measurement targets are automatically measured in the subsequent automatic measurement step, each measurement result can be attributed to its measurement item.
For example, in obstetric ultrasound examination, the recognition image of the fetus identified in the image recognition step (S42) simultaneously contains two measurement targets: the region from the occipital bone of the fetal head to the nasal root of the forehead, corresponding to the head circumference, and the parietal regions on both sides of the fetal head, corresponding to the biparietal diameter. In the measurement process after positioning, it would otherwise be unknown whether the head circumference or the biparietal diameter is being measured; the region from the occipital bone to the nasal root therefore needs to be distinguished from the parietal regions on both sides of the fetal head, so as to obtain the measurement targets corresponding to the head circumference and the biparietal diameter, respectively.
In one embodiment, the image features of the measurement target are compared with the image features of the database images in the preset database in which the measurement target corresponding to the measurement item to be measured has been calibrated, to judge whether the image features of the measurement target match the image features of the calibration result; when the image features of the measurement target match the image features of the calibration result, the measurement target is judged to correspond to the measurement item to be measured.
Still taking the simultaneous measurement of head circumference and abdominal circumference in one recognition image during obstetric ultrasound examination as an example: the image features of each measurement target determined in the recognition image of the fetus in the positioning step (S43) are compared with the image features of the calibration result obtained by calibrating the head circumference in database images containing the head circumference; when the image features of a measurement target match the image features of the head circumference calibration result, that measurement target is judged to be the head circumference.
In one embodiment, a machine learning algorithm is used to learn, from the database images in the preset database, the image features that distinguish different calibration regions; meanwhile, a machine learning method is used to extract the image features of the measurement targets located in the positioning step (S43). The learned image features of the database images are matched against the image features of the measurement targets, and a measurement target that matches the image features learned for a calibration region is taken as the measurement target of that calibration region. When one recognition image contains multiple measurement items to be measured, the measurement targets are classified according to the learned image features that distinguish different calibration regions, so that the measurement items present in the recognition image are identified according to the measurement items of the anatomical structure. Methods by which machine learning algorithms extract features include, but are not limited to, the Principal Component Analysis (PCA) method, the Linear Discriminant Analysis (LDA) method, the Haar feature extraction method, and texture feature extraction methods. The measurement items are classified according to the learned image features that distinguish different measurement items, so as to classify the measurement items in the recognition image; applicable classification discriminators include, but are not limited to, K-nearest neighbor (KNN), support vector machine (SVM), random forest, and neural network discriminators.
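The target classification step can be sketched with the simplest of the listed discriminators, a 1-nearest-neighbour matcher; the feature vectors, reference features, and labels below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def classify_target(target_feature, ref_features, ref_labels):
    """Assign a located measurement target to a measurement item by
    1-nearest-neighbour matching against learned per-item reference
    features; a minimal stand-in for the KNN / SVM / random forest /
    neural network discriminators."""
    dists = [float(np.linalg.norm(np.asarray(target_feature) - np.asarray(r)))
             for r in ref_features]
    return ref_labels[int(np.argmin(dists))]

# Toy 2-D features standing in for learned PCA/LDA/texture descriptors.
refs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
labels = ["head_circumference", "biparietal_diameter"]
print(classify_target(np.array([0.9, 0.1]), refs, labels))  # head_circumference
```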
After the measurement target in the recognition image is obtained through the positioning step (S43), the measurement target is measured in the automatic measurement step (S44), realizing the automatic measurement of the anatomical structure. In the automatic measurement step (S44), measuring the measurement target includes: performing target fitting on the measurement target to obtain a fitting equation of the measurement target, and determining the measurement result of the measurement target from the fitting equation.
In one embodiment, the calibration result in the positioning step (S43) includes an ROI box of the measurement target corresponding to the measurement item to be measured. When measuring the measurement target in the automatic measurement step (S44), target fitting is performed on the measurement target within the bounding-box region located in the recognition image, yielding a fitted equation such as a straight line, circle, or ellipse, and the measurement result is obtained by computing from that equation. Performing the target fitting within the bounding-box region of the measurement target reduces the interference of structures outside the bounding-box region with the fitting, and improves the accuracy of the measurement result.
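As one concrete, illustrative choice of fitting algorithm (the patent does not prescribe a specific one), a circle can be fitted to edge points inside the ROI box by algebraic least squares:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit to edge points inside the ROI box:
    solves x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F), then recovers the
    center (cx, cy) and radius r."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs ** 2 + ys ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Points on a circle of center (5, 3), radius 4; a circumference-type
# measurement result would then follow from the fit as C = 2 * pi * r.
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
cx, cy, r = fit_circle(5.0 + 4.0 * np.cos(t), 3.0 + 4.0 * np.sin(t))
print(round(cx, 3), round(cy, 3), round(r, 3))  # 5.0 3.0 4.0
```

An ellipse fit (e.g. Fitzgibbon's direct least-squares method) follows the same pattern with a larger design matrix; a line fit reduces to ordinary least squares.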
In one embodiment, the calibration result in the positioning step (S43) includes a mask that precisely segments the measurement target corresponding to the measurement item. When measuring the measurement target in the automatic measurement step (S44), target fitting is performed on the edge of the segmentation mask of the measurement target in the recognition image that is consistent with the calibration result, fitting an equation such as a straight line, circle, or ellipse, and the measurement result is obtained by computing from that equation. Fitting on the edge of the segmentation mask of the measurement target reduces the fitting error of the target fitting and improves the accuracy of the measurement result.
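Before fitting, the boundary pixels of the segmentation mask must be extracted; the following is a minimal sketch using 4-neighbour tests (morphological erosion would work equally well), with the simplifying assumption that the mask does not touch the image border.

```python
import numpy as np

def mask_edge_points(mask):
    """Extract the boundary pixels of a binary segmentation mask: a pixel is
    on the edge if it is foreground and at least one of its 4-neighbours is
    background. The returned (xs, ys) are the points the subsequent
    line/circle/ellipse fit operates on."""
    m = np.asarray(mask, bool)
    interior = (np.roll(m, 1, 0) & np.roll(m, -1, 0) &
                np.roll(m, 1, 1) & np.roll(m, -1, 1)) & m
    ys, xs = np.nonzero(m & ~interior)
    return xs, ys

# Toy usage: a filled 4x4 square has a 12-pixel boundary.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[3:7, 3:7] = 1
xs, ys = mask_edge_points(mask)
print(len(xs))  # 12
```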
Although exemplary embodiments have been described herein with reference to the accompanying drawings, it should be understood that the above exemplary embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Those of ordinary skill in the art can make various changes and modifications therein without departing from the scope and spirit of the present application. All such changes and modifications are intended to fall within the scope of the present application as claimed in the appended claims.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation should not be considered beyond the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another device, or some features may be omitted or not executed.
In the specification provided here, numerous specific details are set forth. However, it can be understood that the embodiments of the present application can be practiced without these specific details. In some instances, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the application and aid understanding of one or more of the various inventive aspects, in the description of the exemplary embodiments of the application, the various features of the application are sometimes grouped together into a single embodiment, figure, or description thereof. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that the corresponding technical problem can be solved with fewer than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the application.
Those skilled in the art can understand that, except where such features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
In addition, those skilled in the art can understand that, although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the present application and to form different embodiments. For example, in the claims, any one of the claimed embodiments can be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules according to the embodiments of the present application. The present application may also be implemented as a device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein. Such a program implementing the present application may be stored on a computer-readable medium, or may take the form of one or more signals. Such a signal may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The application can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any order; these words may be interpreted as names.
The above are only specific implementations of this application or descriptions of specific implementations, and the protection scope of this application is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in this application, and all such changes or substitutions shall fall within the protection scope of this application. The protection scope of this application shall be subject to the protection scope of the claims.

Claims (20)

  1. An automatic measurement method for an anatomical structure, characterized in that it comprises:
    an image acquisition step of acquiring an ultrasound image, the ultrasound image being related to at least one anatomical structure of a biological tissue, the at least one anatomical structure having at least one measurement item;
    a measurement item acquisition step of acquiring a measurement item to be measured;
    an image recognition step of identifying, from the ultrasound image, a recognition image containing a measurement target corresponding to the measurement item to be measured, the recognition image being at least one of the ultrasound images;
    a positioning step of positioning the measurement target in the recognition image; and
    an automatic measurement step of measuring the measurement target.
  2. An automatic measurement method for an anatomical structure, characterized in that it comprises:
    an image acquisition step of acquiring an ultrasound image, the ultrasound image being related to at least one anatomical structure of a biological tissue, the at least one anatomical structure having at least one measurement item;
    a measurement item acquisition step of acquiring a measurement item to be measured;
    an image recognition step of identifying, from the ultrasound image, a recognition image containing a measurement target corresponding to the measurement item to be measured, wherein the recognition image is at least one of the ultrasound images; and
    an automatic measurement step of measuring the measurement target in the recognition image.
  3. The automatic measurement method according to claim 1 or 2, wherein the image recognition step comprises:
    comparing the image features of the ultrasound image with the image features of a database image in a preset database containing the measurement item to be measured, and judging whether the image features of the ultrasound image match the image features of the database image; when the image features of the ultrasound image match the image features of the database image, determining that the ultrasound image is the recognition image.
  4. The automatic measurement method according to claim 1 or 2, wherein the image recognition step comprises:
    classifying the ultrasound images according to the measurement items of the at least one anatomical structure of the biological tissue, to obtain the ultrasound image containing the measurement target corresponding to the measurement item to be measured; and
    selecting, from the classified ultrasound images, the ultrasound image containing the measurement target corresponding to the measurement item to be measured.
  5. The automatic measurement method according to claim 4, wherein the step of classifying the ultrasound images comprises:
    comparing the image features of the ultrasound image with the image features of database images in the preset database, wherein each database image contains a measurement target corresponding to any one measurement item of at least one anatomical structure of at least one biological tissue; when the image features of the ultrasound image match the image features of a database image, the ultrasound image contains the measurement target contained in that database image.
  6. The automatic measurement method according to claim 3, wherein at least two measurement items to be measured are acquired in the measurement item acquisition step, and the image recognition step further comprises:
    classifying the recognition images to obtain the recognition image containing the measurement target corresponding to each of the measurement items to be measured.
  7. An automatic measurement method for an anatomical structure, characterized in that it comprises:
    an image acquisition step of acquiring at least two ultrasound images, the at least two ultrasound images being related to at least one anatomical structure of a biological tissue;
    an image recognition step of identifying, from the ultrasound images, a recognition image containing a measurement target corresponding to a measurement item, wherein the measurement item is a measurement item to be measured, and the recognition image is at least one of the ultrasound images;
    a positioning step of positioning the measurement target in the recognition image; and
    an automatic measurement step of measuring the measurement target.
  8. An automatic measurement method for an anatomical structure, characterized in that it comprises:
    an image acquisition step of acquiring at least two ultrasound images, the at least two ultrasound images being related to at least one anatomical structure of a biological tissue;
    an image recognition step of identifying, from the ultrasound images, a recognition image containing a measurement target corresponding to a measurement item, wherein the measurement item is a measurement item to be measured, and the recognition image is at least one of the ultrasound images; and
    an automatic measurement step of measuring the measurement target.
  9. The automatic measurement method according to claim 7 or 8, wherein at least one anatomical structure of the biological tissue has at least one characteristic measurement item, and the image recognition step comprises:
    comparing the image features of the ultrasound image with the image features of database images in a preset database that contain a measurement target corresponding to any one of the at least one characteristic measurement item, and judging whether the image features of the current ultrasound image match the image features of the database image; when the image features of the ultrasound image match the image features of the database image, determining that the current ultrasound image is the recognition image, wherein the measurement target contained in the recognition image corresponds to the measurement target contained in the database image whose image features match those of the recognition image.
  10. The automatic measurement method according to claim 7 or 8, characterized in that any one or two of the at least one anatomical structure of the biological tissue has at least two characteristic measurement items, the recognition image contains measurement targets respectively corresponding to at least two measurement items to be measured, and the image recognition step further comprises:
    classifying the measurement targets in the recognition image so that the measurement targets in the recognition image correspond one-to-one to the at least two characteristic measurement items.
  11. The automatic measurement method according to claim 1 or 7, characterized in that the measurement item positioning step comprises:
    comparing and analyzing an image feature of the recognition image with an image feature of a database image in a preset database that corresponds to the measurement item to be measured, so as to locate the measurement item to be measured in the recognition image and obtain the measurement target, wherein the database image contains a calibration result corresponding to the measurement item to be measured, and the measurement target is a region consistent with the calibration result.
  12. The automatic measurement method according to claim 1 or 7, characterized in that the recognition image includes measurement targets corresponding to at least two of the measurement items to be measured, and the measurement item positioning step further comprises:
    classifying the measurement targets so that the measurement targets correspond one-to-one to the at least two measurement items to be measured.
  13. The automatic measurement method according to claim 12, characterized in that the step of classifying the measurement targets comprises:
    comparing an image feature of the measurement target with an image feature of the calibration result that, in the database image contained in the preset database, corresponds to the measurement target of the measurement item to be measured, so as to determine whether the image feature of the measurement target matches the image feature of the calibration result; when the image feature of the measurement target matches the image feature of the calibration result, determining that the measurement target corresponds to the measurement item to be measured.
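The classification of claim 13 amounts to assigning each detected target to the measurement item whose calibrated features it matches best. A minimal sketch, assuming feature vectors and nearest-neighbour matching under squared Euclidean distance (neither is prescribed by the claim):

```python
def classify_targets(target_features, calibration_features):
    """Assign each detected measurement target to the measurement item whose
    calibrated image features it is closest to (nearest neighbour under
    squared Euclidean distance), giving the one-to-one correspondence the
    claims require when every item appears exactly once."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return [
        min(calibration_features,
            key=lambda item: sqdist(feat, calibration_features[item]))
        for feat in target_features
    ]
```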
  14. The automatic measurement method according to claim 11, characterized in that the calibration result includes an ROI box corresponding to the measurement target, and the positioning step includes:
    extracting image features within a sliding window using a sliding-window-based method, comparing the image features within the sliding window with the image features of the calibration result, and determining whether they match; when the image features within the sliding window match the image features of the calibration result, determining that the current sliding window is the measurement target.
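A sliding-window search of the kind claim 14 describes can be sketched with raw pixel patches and a sum-of-absolute-differences score; in a real system the window features and the matching criterion would come from the calibrated database images, so the SAD score and all names here are assumptions:

```python
def sliding_window_detect(image, template, step=1):
    """Slide a template-sized window over the image and return the top-left
    corner (x, y) of the best-matching patch plus its score. Matching uses
    the sum of absolute differences (SAD): 0 means a perfect match."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_pos, best_score = None, float("inf")
    for y in range(0, ih - th + 1, step):
        for x in range(0, iw - tw + 1, step):
            score = sum(
                abs(image[y + dy][x + dx] - template[dy][dx])
                for dy in range(th)
                for dx in range(tw)
            )
            if score < best_score:
                best_pos, best_score = (x, y), score
    return best_pos, best_score
```

The `step` parameter trades localization accuracy for speed, a standard knob in sliding-window detectors.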
  15. The automatic measurement method according to claim 11, characterized in that the calibration result includes an ROI box corresponding to the measurement target, and the measurement item positioning step includes:
    performing bounding-box regression on the recognition image according to the image features of the calibration result in the database image containing the measurement target, so as to obtain a bounding-box region, the bounding-box region being the measurement target.
  16. The automatic measurement method according to claim 11, characterized in that the calibration result includes a mask that precisely segments the measurement target, and the measurement item positioning step includes:
    identifying, with a semantic segmentation algorithm and according to the image features of the calibration result in the database image containing the measurement target, the segmentation mask in the recognition image of the measurement target consistent with the calibration result.
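Claim 16 relies on a semantic segmentation algorithm to produce the target mask; in practice this would be a trained model such as an encoder-decoder network. The intensity threshold below is only a stand-in for such a model, used to show the binary-mask output format the claim implies:

```python
def segment_mask(image, threshold):
    """Binary segmentation mask: 1 where the pixel is brighter than the
    threshold (treated as belonging to the measurement target), 0 elsewhere.
    This is the output format a semantic-segmentation model would produce;
    the fixed threshold merely stands in for the trained model."""
    return [[1 if px > threshold else 0 for px in row] for row in image]
```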
  17. The automatic measurement method according to claim 1 or 7, characterized in that the automatic measurement step comprises:
    performing target fitting on the measurement target to obtain a fitting equation of the measurement target;
    determining a measurement result of the measurement target from the fitting equation.
  18. The automatic measurement method according to claim 2 or 8, characterized in that the automatic measurement step comprises:
    extracting a contour corresponding to the measurement target using an edge detection algorithm;
    fitting the contour corresponding to the measurement target to obtain a fitting equation corresponding to the measurement target;
    determining a measurement result of the measurement target from the fitting equation.
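Claims 17 and 18 derive the measurement result from a fitting equation. For a roughly circular target (e.g. a fetal head cross-section measured for circumference) the contour points can be fitted and the result read off the fit. The centroid-plus-mean-radius circle fit below is a deliberate simplification of the least-squares ellipse fit an actual system would more likely use:

```python
import math

def fit_circle(points):
    """Fit a circle to contour points: the centre is the centroid of the
    points and the radius is their mean distance to it. Exact for symmetric
    samples of a true circle, a rough approximation otherwise."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    r = sum(math.hypot(p[0] - cx, p[1] - cy) for p in points) / len(points)
    return (cx, cy), r

def circumference(radius):
    # The measurement result read off the fitted equation.
    return 2 * math.pi * radius
```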
  19. The automatic measurement method according to claim 1, 2, 7 or 8, characterized in that it further comprises:
    a measurement result display step of controlling the display of a measurement result obtained after measuring the measurement target.
  20. An ultrasound imaging system, characterized in that it comprises:
    an ultrasound probe, configured to transmit ultrasonic waves to a biological tissue and receive ultrasonic echoes to obtain an ultrasonic echo signal;
    a processor, configured to process the ultrasonic echo signal to obtain an ultrasound image of the biological tissue;
    a display, configured to display the ultrasound image;
    a memory, configured to store executable program instructions;
    the processor being further configured to execute the executable program instructions, so that the processor performs the automatic measurement method according to any one of claims 1 to 19.
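Taken together, the method claims describe a recognize-locate-measure pipeline that the claim-20 processor executes. A skeletal sketch with the three steps injected as callables (the callables, their signatures, and the result dictionary are assumptions for illustration, not part of the claims):

```python
def auto_measure(ultrasound_images, recognize, locate, measure):
    """Recognize-locate-measure pipeline over a batch of ultrasound images.
    `recognize` returns the measurement item an image contains (or None if
    the image is not a recognition image), `locate` extracts the measurement
    target from a recognition image, and `measure` produces the result."""
    results = {}
    for img in ultrasound_images:
        item = recognize(img)
        if item is None:
            continue  # not a recognition image for any measurement item
        target = locate(img, item)
        results[item] = measure(target)
    return results
```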
PCT/CN2019/126388 2019-12-18 2019-12-18 Automatic measurement method and ultrasonic imaging system for anatomical structure WO2021120065A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/126388 WO2021120065A1 (en) 2019-12-18 2019-12-18 Automatic measurement method and ultrasonic imaging system for anatomical structure
CN202011506495.7A CN112998755A (en) 2019-12-18 2020-12-18 Method for automatic measurement of anatomical structures and ultrasound imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/126388 WO2021120065A1 (en) 2019-12-18 2019-12-18 Automatic measurement method and ultrasonic imaging system for anatomical structure

Publications (1)

Publication Number Publication Date
WO2021120065A1 (published 2021-06-24)

Family

ID=76383499

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126388 WO2021120065A1 (en) 2019-12-18 2019-12-18 Automatic measurement method and ultrasonic imaging system for anatomical structure

Country Status (2)

Country Link
CN (1) CN112998755A (en)
WO (1) WO2021120065A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113974688B (en) * 2021-09-18 2024-04-16 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method and ultrasonic imaging system
CN114376614B (en) * 2021-11-08 2024-03-12 中国医科大学附属第一医院 Auxiliary method for carotid artery ultrasonic measurement and ultrasonic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150173650A1 (en) * 2013-12-20 2015-06-25 Samsung Medison Co., Ltd. Method and apparatus for indicating point whose location has been adjusted based on type of caliper in medical image
CN105555198A (en) * 2014-03-20 2016-05-04 深圳迈瑞生物医疗电子股份有限公司 Method and device for automatic identification of measurement item, and ultrasound imaging apparatus
WO2018129737A1 (en) * 2017-01-16 2018-07-19 深圳迈瑞生物医疗电子股份有限公司 Method for measuring parameters in ultrasonic image and ultrasonic imaging system
CN109044398A (en) * 2018-06-07 2018-12-21 深圳华声医疗技术股份有限公司 Ultrasonic system imaging method, device and computer readable storage medium
CN109276275A (en) * 2018-10-26 2019-01-29 深圳开立生物医疗科技股份有限公司 A kind of extraction of ultrasound image standard section and measurement method and ultrasonic diagnostic equipment
CN109589140A (en) * 2018-12-26 2019-04-09 深圳开立生物医疗科技股份有限公司 A kind of ultrasonic measurement entry processing method and compuscan


Also Published As

Publication number Publication date
CN112998755A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
JP6467041B2 (en) Ultrasonic diagnostic apparatus and image processing method
CN110811691B (en) Method and device for automatically identifying measurement items and ultrasonic imaging equipment
US11229419B2 (en) Method for processing 3D image data and 3D ultrasonic imaging method and system
CN111629670B (en) Echo window artifact classification and visual indicator for ultrasound systems
EP2298176A1 (en) Medical image processing device and method for processing medical image
CN112263236B (en) System and method for intelligent evaluation of whole-body tumor MRI
JP2017525445A (en) Ultrasonic imaging device
WO2020133510A1 (en) Ultrasonic imaging method and device
CN111374712B (en) Ultrasonic imaging method and ultrasonic imaging equipment
US20160000401A1 (en) Method and systems for adjusting an imaging protocol
WO2021120065A1 (en) Automatic measurement method and ultrasonic imaging system for anatomical structure
CN110604592A (en) Hip joint imaging method and hip joint imaging system
CN107767386B (en) Ultrasonic image processing method and device
US20220249060A1 (en) Method for processing 3d image data and 3d ultrasonic imaging method and system
CN110604594A (en) Hip joint imaging method and hip joint imaging system
KR101144867B1 (en) 3d ultrasound system for scanning inside human body object and method for operating 3d ultrasound system
CN113017695A (en) Ultrasound imaging method, system and computer readable storage medium
WO2022141085A1 (en) Ultrasonic detection method and ultrasonic imaging system
US20230326017A1 (en) System and method for automatically measuring spinal parameters
Khazendar Computer-aided diagnosis of gynaecological abnormality using B-mode ultrasound images
JP2024512008A (en) Methods used for ultrasound imaging
CN115644921A (en) Automatic elasticity measurement method
CN115919367A (en) Ultrasonic image processing method and device, electronic equipment and storage medium
CN116138807A (en) Ultrasonic imaging equipment and ultrasonic detection method of abdominal aorta
CN113040746A (en) Intelligent fetal growth and development detection method

Legal Events

Code  Description
121   EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 19956360; country of ref document: EP; kind code of ref document: A1)
NENP  Non-entry into the national phase (ref country code: DE)
122   EP: PCT application non-entry in European phase (ref document number: 19956360; country of ref document: EP; kind code of ref document: A1)