WO2020133510A1 - Ultrasound imaging method and device - Google Patents

Ultrasound imaging method and device

Info

Publication number
WO2020133510A1
Authority
WO
WIPO (PCT)
Prior art keywords
endometrium
image
volume data
region
interest
Prior art date
Application number
PCT/CN2018/125832
Other languages
English (en)
French (fr)
Inventor
韩笑
董国豪
邹耀贤
林穆清
金涛
Original Assignee
深圳迈瑞生物医疗电子股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳迈瑞生物医疗电子股份有限公司 filed Critical 深圳迈瑞生物医疗电子股份有限公司
Priority to CN201880097250.8A priority Critical patent/CN112672691B/zh
Priority to CN202311266520.2A priority patent/CN117338340A/zh
Priority to PCT/CN2018/125832 priority patent/WO2020133510A1/zh
Priority to CN202311248499.3A priority patent/CN117338339A/zh
Publication of WO2020133510A1 publication Critical patent/WO2020133510A1/zh
Priority to US17/359,615 priority patent/US20210393240A1/en

Classifications

    • A HUMAN NECESSITIES
      • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
            • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
              • A61B 8/0833 Detecting organic movements or changes involving detecting or locating foreign bodies or organic structures
                • A61B 8/085 Detecting organic movements or changes for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
            • A61B 8/13 Tomography
            • A61B 8/46 Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
              • A61B 8/461 Displaying means of special interest
                • A61B 8/466 Displaying means of special interest adapted to display 3D data
            • A61B 8/48 Diagnostic techniques
              • A61B 8/483 Diagnostic techniques involving the acquisition of a 3D volume of data
            • A61B 8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
              • A61B 8/5215 Devices using data or image processing involving processing of medical diagnostic data
                • A61B 8/5223 Devices using data or image processing for extracting a diagnostic or physiological parameter from medical diagnostic data
            • A61B 8/54 Control of the diagnostic device
            • A61B 8/58 Testing, adjusting or calibrating the diagnostic device
    • G PHYSICS
      • G01 MEASURING; TESTING
        • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
          • G01S 7/00 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
            • G01S 7/52 Details of systems according to group G01S 15/00
              • G01S 7/52017 Details of systems according to group G01S 15/00 particularly adapted to short-range imaging
                • G01S 7/52023 Details of receivers
                  • G01S 7/52036 Details of receivers using analysis of echo signal for target characterisation
                • G01S 7/52053 Display arrangements
          • G01S 15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
            • G01S 15/88 Sonar systems specially adapted for specific applications
              • G01S 15/89 Sonar systems specially adapted for specific applications for mapping or imaging
                • G01S 15/8906 Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques
                  • G01S 15/8993 Three dimensional imaging systems
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
              • G06T 7/0012 Biomedical image inspection
                • G06T 7/0014 Biomedical image inspection using an image reference approach
            • G06T 7/10 Segmentation; Edge detection
              • G06T 7/11 Region-based segmentation
              • G06T 7/13 Edge detection
              • G06T 7/136 Segmentation; Edge detection involving thresholding
              • G06T 7/155 Segmentation; Edge detection involving morphological operators
              • G06T 7/162 Segmentation; Edge detection involving graph-based methods
            • G06T 7/70 Determining position or orientation of objects or cameras
              • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
                • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10132 Ultrasound image
                • G06T 2207/10136 3D ultrasound image
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20036 Morphological image processing
              • G06T 2207/20076 Probabilistic image processing
              • G06T 2207/20081 Training; Learning
              • G06T 2207/20084 Artificial neural networks [ANN]
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30004 Biomedical image processing

Definitions

  • the embodiments of the present invention relate to the technical field of ultrasound imaging, and in particular, to an ultrasound imaging method and device, and a computer-readable storage medium.
  • ultrasound has become one of the most widely and frequently used examination methods because it is highly reliable, fast and convenient, supports real-time imaging, and allows repeatable examinations.
  • the development of artificial-intelligence-assisted technology has further promoted the application of ultrasound in clinical diagnosis and treatment.
  • Gynecological ultrasound examination is one of the more important and widely used fields of ultrasound diagnosis. Ultrasound examination of the uterus and its adnexa can provide important guidance for the diagnosis and treatment of many gynecological diseases. Because three-dimensional ultrasound can display the coronal section of the uterus, it clearly shows whether the endometrium is damaged and whether its shape is complete. Therefore, using three-dimensional ultrasound to diagnose uterine-related gynecological diseases is of great significance.
  • although three-dimensional ultrasound has the above advantages, the coordinate axes of a three-dimensional volume image are easily confused, the orientation of the uterus varies considerably, and the three-dimensional space is relatively abstract, so doctors have to search for the uterus manually and determine the standard endometrial section.
  • it may be necessary to rotate the three-dimensional volume image repeatedly to find the standard endometrial section.
  • this manual positioning process is not only time-consuming and laborious, but also limits the intelligence and accuracy of imaging.
  • An embodiment of the present invention provides an ultrasound imaging method.
  • the method includes:
  • the endometrium is identified from the three-dimensional volume data of the uterine area according to the image features of the endometrium of the uterine area, and position information of the endometrium is obtained;
  • endometrial imaging is performed based on the three-dimensional volume data according to the position information of the endometrium to obtain an endometrial image; and
  • the endometrial image is displayed.
  • An embodiment of the present invention also provides an ultrasound imaging method, including:
  • the image of the region of interest is displayed.
  • An embodiment of the present invention provides an ultrasound imaging device.
  • the ultrasound imaging device includes:
  • a transmitting circuit used to excite the probe to transmit ultrasonic waves to the object to be detected for body scanning
  • a receiving circuit configured to receive the ultrasonic echo returned from the object to be detected through the probe, thereby obtaining an ultrasonic echo signal/data
  • a beam synthesis circuit configured to perform beam synthesis processing on the ultrasound echo signal/data to obtain the ultrasound echo signal/data after beam synthesis;
  • a processor configured to process the beam-synthesized ultrasonic echo signal to obtain three-dimensional volume data of the uterine area of the object to be detected; identify the endometrium in the three-dimensional volume data according to the image characteristics of the endometrium of the uterine area to obtain position information of the endometrium; and, according to the position information of the endometrium, perform endometrial imaging based on the three-dimensional volume data to obtain an endometrial image; and
  • a display for displaying the endometrial image.
  • An embodiment of the present invention provides a computer-readable storage medium that stores an ultrasound imaging program, and the ultrasound imaging program may be executed by a processor to implement the above-mentioned ultrasound imaging method.
  • Embodiments of the present invention provide an ultrasound imaging method and device, and a computer-readable storage medium.
  • the ultrasound imaging device can automatically obtain the position information of the endometrium according to the image characteristics of the endometrium, eliminating the tedious manual endometrial positioning operations that users would otherwise have to perform repeatedly, so that users can identify the endometrium quickly and overall work efficiency is improved; the ultrasound imaging device can also perform imaging automatically based on the endometrial position information to obtain endometrial images.
  • in this way, the accuracy of subsequent ultrasound imaging is improved, and the automatic imaging also improves the intelligence of ultrasound imaging.
  • FIG. 1 is a schematic structural block diagram of an ultrasound imaging device provided by an embodiment of the present invention.
  • FIG. 2 is a flowchart 1 of an ultrasound imaging method provided by an embodiment of the present invention.
  • FIG. 3 is a block diagram 1 of an exemplary ultrasound imaging process provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an exemplary VOI block provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an exemplary CMPR imaging process provided by an embodiment of the present invention.
  • FIG. 8 is a second block diagram of an exemplary ultrasound imaging process provided by an embodiment of the present invention.
  • FIG. 9 is a flowchart 2 of an ultrasound imaging method according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of a cross-sectional image of an endometrium provided by an embodiment of the present invention.
  • FIG. 1 is a schematic structural block diagram of an ultrasound imaging device in an embodiment of the present invention.
  • the ultrasound imaging apparatus 10 may include a probe 100, a transmission circuit 101, a transmission/reception selection switch 102, a reception circuit 103, a beam synthesis circuit 104, a processor 105, and a display 106.
  • the transmitting circuit 101 can excite the probe 100 to transmit ultrasonic waves to the target tissue;
  • the receiving circuit 103 can receive, through the probe 100, the ultrasonic echo returned from the object to be detected to obtain the ultrasonic echo signal/data; after the ultrasonic echo signal/data has undergone beam synthesis processing in the beam synthesis circuit 104, it is sent to the processor 105.
  • the processor 105 processes the ultrasound echo signal/data to obtain an ultrasound image of the object to be detected.
  • the ultrasound image obtained by the processor 105 may be stored in the memory 107. These ultrasound images can be displayed on the display 106.
  • the display 106 of the aforementioned ultrasound imaging device 10 may be a touch display screen, a liquid crystal display screen, or the like; it may also be a display device separate from the ultrasound imaging device 10, such as a liquid crystal display or a television; or it may be the display screen of an electronic device such as a mobile phone or a tablet computer.
  • the processor 105 may be at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a central processing unit (CPU), a controller, a microcontroller, and a microprocessor, so that the processor 105 can execute the corresponding steps of the ultrasound imaging method in the various embodiments of the present invention.
  • the memory 107 may be a volatile memory, such as a random access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memory, and it provides instructions and data to the processor.
  • An embodiment of the present invention provides an ultrasound imaging method. As shown in FIG. 2, the method may include:
  • S101: Transmit ultrasonic waves to the uterine area of the object to be detected.
  • the ultrasound imaging device may transmit ultrasonic waves to the uterine area of the object to be detected through the probe to perform ultrasonic scanning and examination of the uterine area; this applies to scenarios in which the uterine area is to be examined.
  • the object to be detected may be a human organ or human tissue structure that includes a uterine area, where the uterine area is an area containing all or part of the uterus, or all or part of the uterus and its adnexa.
  • the ultrasound imaging device may identify the key anatomical structure of the uterus area, and characterize the uterine area by the position of the key anatomical structure.
  • the key anatomical structure of the uterine area here can be the endometrium. Therefore, the embodiments of the present invention characterize the ultrasound image of the uterine region by identifying the position of the endometrium.
  • S102 Receive an ultrasound echo returned from the uterine area of the object to be detected, and acquire an ultrasound echo signal based on the ultrasound echo.
  • S103 Process the ultrasonic echo signal to obtain three-dimensional volume data of the uterine region of the object to be detected.
  • the receiving circuit of the ultrasound imaging device can receive, through the probe, the ultrasound echo returned from the uterine area of the object to be detected, thereby obtaining the ultrasound echo signal/data; the ultrasound echo signal/data undergoes beam synthesis processing in the beam synthesis circuit and is then sent to the processor.
  • the processor of the ultrasound imaging device performs signal processing and three-dimensional reconstruction on the ultrasound echo signal/data to obtain three-dimensional volume data of the uterine region of the object to be detected.
  • the transmitting circuit sends a group of delayed and focused pulses to the probe, and the probe transmits ultrasonic waves to the body tissue of the object to be detected; after a certain delay, the probe receives the ultrasonic echo carrying tissue information reflected back from the body tissue and converts it back into an electrical signal.
  • the receiving circuit receives this electrical signal (the ultrasonic echo signal) and sends it to the beam synthesis circuit.
  • in the beam synthesis circuit, the echo signal undergoes focusing delay, weighting, and channel summation; it is then processed by the signal processing module (i.e., the processor) and sent to the 3D reconstruction module (i.e., the processor); after image drawing and rendering, an ultrasound image carrying visualization information is obtained and transmitted to the display for display.
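  • as an illustration of the focusing-delay, weighting, and channel-summation step described above, the following is a minimal delay-and-sum sketch in Python/NumPy; the channel data, sample delays, and apodization weights are hypothetical placeholders rather than the device's actual beamforming pipeline.

```python
import numpy as np

def delay_and_sum(channel_data, delays, weights):
    """Toy delay-and-sum beamformer.

    channel_data: (n_channels, n_samples) raw RF samples per receive channel.
    delays:       per-channel integer sample delays (focusing delays).
    weights:      per-channel apodization weights.
    Returns one beamformed RF line of length n_samples.
    """
    n_channels, n_samples = channel_data.shape
    line = np.zeros(n_samples)
    for ch in range(n_channels):
        shifted = np.roll(channel_data[ch], delays[ch])  # apply focusing delay
        shifted[:delays[ch]] = 0.0                       # zero wrapped-around samples
        line += weights[ch] * shifted                    # weight and sum across channels
    return line

# hypothetical example: 8 channels, 1000 samples of simulated echo data
rng = np.random.default_rng(0)
data = rng.standard_normal((8, 1000))
delays = np.arange(8)            # placeholder focusing delays (in samples)
weights = np.hanning(8)          # placeholder apodization window
rf_line = delay_and_sum(data, delays, weights)
```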
  • the ultrasonic imaging device may perform feature extraction on the three-dimensional volume data of the uterine area according to the image features of the endometrium of the uterine area and compare the extracted features to identify the endometrium, thereby obtaining the position information of the endometrium.
  • the ultrasound imaging device needs to identify which anatomical structures are related to the endometrium to be determined. In the volume data of the uterine area, there is a clear difference between the echo of the endometrium and the echo of the surrounding tissue; at the same time, the shape of the endometrium changes periodically with the female physiological cycle, so its characteristics are quite distinctive. The endometrium can therefore be used as the key anatomical structure of the uterine area for determining the endometrial section.
  • the detection of key anatomical structures in the uterine region includes but is not limited to the endometrium.
  • the endometrium and the uterine basal tissue reflect ultrasound differently, so the grayscale characteristics of the corresponding ultrasound echo signals differ. The ultrasound imaging device can therefore recognize the endometrium from the three-dimensional volume data of the uterine area based on this difference in image characteristics: it can determine the boundary between the endometrium and the uterine basal tissue according to the difference in gray values, thereby identifying the endometrium in the three-dimensional volume data. In some embodiments of the present invention, as the female physiological cycle changes, the morphology of the endometrium also exhibits periodic changes.
  • accordingly, the ultrasound imaging device can identify the endometrium in the three-dimensional volume data of the uterine area based on these cyclically changing morphological characteristics, and obtain the position information of the endometrium.
  • the ultrasound imaging device can identify the endometrium from the three-dimensional volume data of the uterine area based on the morphological characteristics of the endometrium in different periods of the physiological cycle. The details will be described below.
  • the identification method of endometrium and other key anatomical structures may be manual or automatic.
  • for manual identification, the user can indicate the type and position of a key anatomical structure by using a keyboard, a mouse, or other tools to mark points and draw lines on the specific anatomical structure in the three-dimensional volume data through a certain workflow.
  • the method of automatically identifying the endometrium is adopted. Automatically identifying the endometrium refers to extracting features of the three-dimensional volume data, and using the features to automatically detect the position of the endometrium in the three-dimensional volume data.
  • the method of automatically identifying key anatomical structures covers two cases: one is to directly determine the spatial position of the endometrium in the three-dimensional volume data; the other is to detect the endometrium in cut planes (slices) of the three-dimensional volume data and determine the position of the endometrium in the three-dimensional volume data from the position of each slice within the volume data and the position of the endometrium within that slice.
  • the position of the endometrium and of other key anatomical structures can be expressed by enclosing the anatomical position with an ROI (region of interest) box, by accurately segmenting the boundaries of the anatomical structure, or by one or more other means of representation.
  • the process of determining the spatial position of the endometrium in the three-dimensional volume data, so as to obtain the most standard endometrial section, can be realized using grayscale and/or morphological feature detection methods; machine learning or deep learning methods can also be used to detect or accurately segment the endometrium in the three-dimensional volume data, and the embodiment of the present invention is not limited in this respect.
  • the ultrasound imaging device recognizes the endometrium from the three-dimensional volume data of the uterine area according to the image characteristics of the endometrium, and the position information of the endometrium may be obtained in any of the following ways, which are not limited in the embodiments of the present invention.
  • in one implementation, the ultrasound imaging device performs preset feature extraction on the three-dimensional volume data of the uterine area to obtain at least one candidate region of interest; acquires three-dimensional template data of a uterine area in which the endometrium has already been identified, and obtains a preset template region of the endometrium from the three-dimensional template data; matches the at least one candidate region of interest against the preset template region, and identifies the candidate region of interest with the highest matching degree as the target region of the endometrium to be detected; and obtains the position information of the endometrium from the position of the target region in the three-dimensional volume data.
  • the preset feature may be a morphological feature.
  • specifically, the ultrasonic imaging device performs binary segmentation on the three-dimensional volume data of the uterine area and applies morphological operations to the binary segmentation result, thereby obtaining at least one candidate region of interest with a complete boundary.
  • the morphological operation here may be, for example, dilation or erosion of the binary segmentation result: dilation expands the edge of the binarized segmentation result to a certain extent, while erosion shrinks it.
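  • a minimal sketch of the binary segmentation and morphological clean-up described above, using SciPy; the fixed threshold, the number of morphological iterations, and the synthetic volume are illustrative assumptions rather than values prescribed by the patent.

```python
import numpy as np
from scipy import ndimage

def candidate_regions(volume, threshold):
    """Binary-segment a 3D volume, clean it up morphologically, and return
    one bounding box per connected candidate region of interest."""
    binary = volume > threshold                              # binary segmentation
    binary = ndimage.binary_opening(binary, iterations=2)    # erosion then dilation: remove specks
    binary = ndimage.binary_closing(binary, iterations=2)    # dilation then erosion: close small gaps
    labels, n = ndimage.label(binary)                        # connected components = candidate ROIs
    return ndimage.find_objects(labels), labels, n

# hypothetical volume: a low-echo endometrium-like blob inside a brighter background
vol = np.full((64, 64, 64), 150.0)
vol[20:40, 25:45, 28:38] = 40.0
boxes, labels, n = candidate_regions(-vol, threshold=-100.0)  # invert so the dark blob is foreground
print(n, boxes[0])
```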
  • the ultrasound imaging device matches the at least one candidate region of interest with the preset template region and identifies the candidate region of interest with the highest matching degree as the target region of the endometrium of the object to be detected.
  • one implementation may be: extracting a feature index of each candidate region of interest, where the feature index includes shape features, texture features, boundary features, or grayscale distribution features; calculating, based on the feature index, the correlation of each candidate region of interest with the preset template region; and taking the candidate region of interest with the highest correlation, provided that the correlation exceeds a preset threshold, as the target region of the endometrium of the object to be detected.
  • the manner of calculating the correlation between the at least one candidate region of interest and the preset template region based on the feature index is not limited in this embodiment of the present invention, and may be feature matching or feature difference.
  • the preset threshold may be, for example, 90%; the embodiment of the present invention is not limited in this respect.
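  • the following sketch illustrates the correlation-based matching step under simple assumptions: hand-picked feature indices (region size, elongation, mean gray level), cosine similarity as the correlation measure, and a 0.9 threshold; none of these specific choices come from the patent.

```python
import numpy as np

def feature_index(mask, volume):
    """Hypothetical feature index of a candidate region: size, elongation, mean gray level."""
    coords = np.argwhere(mask)
    extent = coords.max(axis=0) - coords.min(axis=0) + 1
    return np.array([mask.sum(),                    # region size (voxels)
                     extent.max() / extent.min(),   # elongation of the bounding box
                     volume[mask].mean()])          # mean gray level inside the region

def correlation(a, b):
    """Cosine similarity used as a simple correlation measure between feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_candidate(candidate_features, template_feature, threshold=0.9):
    scores = [correlation(f, template_feature) for f in candidate_features]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None   # reject if below the preset threshold

# hypothetical candidate and template feature vectors
template = np.array([4000.0, 2.5, 60.0])
cands = [np.array([3800.0, 2.3, 62.0]), np.array([900.0, 1.1, 150.0])]
print(best_candidate(cands, template))               # -> 0 (first candidate matches best)
```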
  • the ultrasound imaging device may obtain, in advance, three-dimensional template data of a uterine area in which the endometrium has been identified, obtain the preset template region of the endometrium from the three-dimensional template data, then match the at least one candidate region of interest against the preset template region, and identify the candidate region of interest with the highest matching degree as the target region of the endometrium of the object to be detected.
  • exemplarily, the ultrasound imaging device performs shape feature extraction on the three-dimensional volume data to obtain at least one candidate region of interest whose shape features differ from those of the uterine area; compares the shape feature of each candidate region of interest with the shape feature of the preset template region to obtain at least one comparison result, where each comparison result corresponds to one candidate region of interest; identifies the candidate region of interest corresponding to the highest comparison result as the endometrium (i.e., the target region); and obtains the position information of the endometrium (i.e., the position of the target region in the three-dimensional volume data) from the three-dimensional ultrasound image data.
  • the ultrasound imaging device may also use other grayscale detection and segmentation methods, such as Otsu thresholding (OTSU), level set (LevelSet), graph cut (Graph Cut), Snake, and so on, to achieve the detection of the endometrium.
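  • as a concrete instance of the grayscale methods listed above, here is a minimal Otsu-threshold sketch on a single slice using scikit-image; the synthetic slice and the assumption that the endometrium is the darker class are purely illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

# synthetic sagittal slice: darker endometrium-like band inside brighter surrounding tissue
rng = np.random.default_rng(1)
slice_img = 150 + 10 * rng.standard_normal((128, 128))
slice_img[50:70, 30:100] -= 90          # hypothetical endometrial band (lower echo)

t = threshold_otsu(slice_img)           # Otsu picks the threshold separating the two gray classes
endometrium_mask = slice_img < t        # endometrium assumed to be the darker class in this toy slice
print(t, endometrium_mask.sum())
```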
  • the detection of the endometrium can be achieved based on machine learning or deep learning methods.
  • in this case, the ultrasound imaging equipment is first trained on a series of training samples to establish a preset positioning model, and then, based on the features learned during training, classification and regression are performed on the three-dimensional volume data of the uterine area to obtain the position information of the endometrium in the three-dimensional volume data.
  • specifically, the ultrasound imaging device acquires a preset positioning model, where the preset positioning model includes three-dimensional positive sample data of uterine areas in which the endometrium has been identified and calibration information of the endometrium in the three-dimensional positive sample data; based on the calibration information of the endometrium in the preset positioning model, the device identifies the endometrium in the three-dimensional volume data of the uterine area of the object to be detected and locates the position information of the endometrium.
  • a method for positioning and identifying the target area may use machine learning or deep learning to detect or accurately segment key anatomical structures (e.g., the endometrium) in the three-dimensional volume data. For example, one can first learn the characteristics or laws of the target areas (positive samples: endometrial regions) and non-target areas (negative samples: background regions) in a database, and then, based on the learned features or laws, locate and identify the key anatomical structure in other images.
  • both positive samples and negative samples are used here to train the preset positioning model, so a more comprehensive and accurate model can be obtained, thereby improving the accuracy of recognition.
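  • a minimal sketch of this positive/negative-sample learning idea, using hand-crafted patch statistics and a random forest from scikit-learn; the patch size, the features, the classifier choice, and the synthetic patches are illustrative assumptions, not the patent's prescribed model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch):
    """Hypothetical descriptor for a 3D patch: simple intensity statistics."""
    return np.array([patch.mean(), patch.std(), patch.min(), patch.max()])

# hypothetical training data: positive patches cut around calibrated endometrium,
# negative patches cut from background regions of the same volumes
rng = np.random.default_rng(0)
pos = [rng.normal(60, 10, size=(16, 16, 16)) for _ in range(50)]   # darker target patches
neg = [rng.normal(150, 20, size=(16, 16, 16)) for _ in range(50)]  # brighter background patches
X = np.stack([patch_features(p) for p in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# at detection time, a candidate patch from new volume data is scored the same way
candidate = rng.normal(65, 10, size=(16, 16, 16))
print(model.predict_proba(patch_features(candidate)[None])[0, 1])  # probability of "endometrium"
```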
  • the preset positioning model includes three-dimensional positive sample data of uterine areas in which the endometrium has been identified, together with the calibration information of the endometrium in that positive sample data; the preset positioning model is obtained through model training using machine learning or deep learning methods.
  • the three-dimensional positive sample data here refers to the characteristic volume data containing the endometrium.
  • the process by which the ultrasound imaging device obtains the preset positioning model through model training is as follows: the ultrasound imaging device obtains three-dimensional training volume data of at least two objects to be trained, where the three-dimensional training volume data includes at least three-dimensional positive sample data of uterine areas in which the endometrium has been identified; the endometrium, or anatomical structures associated with the endometrium, is marked in the three-dimensional training volume data as the calibration information of the endometrium; and, based on the three-dimensional training volume data and the calibration information of the endometrium, machine learning or deep learning methods are used for training to obtain the preset positioning model.
  • the preset positioning model represents the correspondence between the three-dimensional volume data and the calibration information.
  • in other words, the three-dimensional training volume data and the endometrial calibration information consist of multiple volume data sets containing the endometrium together with the calibration results of their key anatomical structures.
  • the calibration result can be set according to the actual task: it can be a region of interest (ROI) box containing the target, or a mask that accurately segments the endometrial region.
  • exemplarily, the ultrasound imaging device uses the calibration information of the endometrium in the preset positioning model to learn the image feature laws of the endometrium through deep learning or machine learning; based on these image feature laws, the target area containing the endometrium is extracted from the three-dimensional volume data of the uterine area of the object to be detected, and the position of the target area in the three-dimensional volume data is output as the position information of the endometrium.
  • that is, the ultrasound imaging equipment identifies the endometrium in two steps: 1) build a database containing multiple sets of three-dimensional training volume data and the corresponding endometrial calibration results, where the calibration result can be set according to the actual task and may be an ROI (region of interest) box containing the endometrium or a mask for accurate segmentation of the endometrium; 2) positioning and identification, i.e., use a machine learning algorithm to learn the characteristics or laws of the endometrial target areas and the non-endometrial areas in the database, so as to recognize and locate the region of interest in an ultrasound image.
  • the deep learning or machine learning methods include: sliding-window-based methods, deep-learning-based bounding-box methods, and deep-learning-based end-to-end semantic segmentation network methods; any of these methods can also be used first to calibrate the endometrial target area, after which a classifier is designed to classify and judge the region of interest according to the calibration result. The specific choice depends on the actual situation.
  • the embodiments of the present application do not make specific limitations.
  • the sliding-window-based method may be as follows: first, features are extracted from the area within the sliding window; the feature extraction method may be principal component analysis (PCA), linear discriminant analysis (LDA), Haar features, texture features, and so on, or a deep neural network may be used for feature extraction. The extracted features are then matched against the database, and a discriminator such as the k-nearest neighbor algorithm (KNN), a support vector machine (SVM), a random forest, or a neural network is used for classification to determine whether the current sliding window is the endometrial target area and to obtain its corresponding category.
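  • a compact sliding-window sketch in the spirit of the method above, pairing PCA features with a KNN classifier from scikit-learn on 2D windows; the window size, stride, synthetic training windows, and test slice are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

WIN, STRIDE = 32, 16

def windows(img):
    """Yield (row, col, window) for every sliding-window position."""
    for r in range(0, img.shape[0] - WIN + 1, STRIDE):
        for c in range(0, img.shape[1] - WIN + 1, STRIDE):
            yield r, c, img[r:r + WIN, c:c + WIN]

# hypothetical labelled training windows (1 = endometrium, 0 = background)
rng = np.random.default_rng(0)
train = np.concatenate([rng.normal(60, 10, (40, WIN * WIN)),
                        rng.normal(150, 20, (40, WIN * WIN))])
labels = np.array([1] * 40 + [0] * 40)

pca = PCA(n_components=10).fit(train)                  # PCA feature extraction
clf = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(train), labels)

# scan a synthetic slice and collect windows classified as endometrium
img = rng.normal(150, 20, (128, 128))
img[40:80, 30:110] = rng.normal(60, 10, (40, 80))
hits = [(r, c) for r, c, w in windows(img)
        if clf.predict(pca.transform(w.reshape(1, -1)))[0] == 1]
print(len(hits))
```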
  • the deep-learning-based bounding-box method may be as follows: stacked convolutional layers and fully connected layers learn the features of the constructed database and regress the parameters, so that for input three-dimensional volume data the network can directly regress the bounding box corresponding to the endometrial target area and, at the same time, obtain the type of tissue structure within that target area.
  • common networks include the region-based convolutional neural network (R-CNN), Fast R-CNN, Faster R-CNN, SSD (single shot multibox detector), YOLO, and so on.
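  • a minimal PyTorch sketch of the bounding-box regression idea (a toy network, not one of the named architectures): a small 3D CNN maps a volume to six box coordinates plus a class score; the architecture, input size, loss, and random stand-in data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyBoxNet(nn.Module):
    """Toy 3D CNN that regresses a bounding box (z1, y1, x1, z2, y2, x2) and a class logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.box_head = nn.Linear(16, 6)    # normalized box coordinates
        self.cls_head = nn.Linear(16, 1)    # endometrium present / absent

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.box_head(f), self.cls_head(f)

# one hypothetical training step on random data standing in for uterine volumes
net = TinyBoxNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
vol = torch.randn(2, 1, 64, 64, 64)                  # batch of 2 volumes
gt_box = torch.rand(2, 6)                            # calibrated boxes (normalized)
gt_cls = torch.ones(2, 1)                            # both volumes contain endometrium
pred_box, pred_cls = net(vol)
loss = nn.SmoothL1Loss()(pred_box, gt_box) + nn.BCEWithLogitsLoss()(pred_cls, gt_cls)
opt.zero_grad(); loss.backward(); opt.step()
```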
  • the deep-learning-based end-to-end semantic segmentation network method may be as follows: convolutional layers are stacked with upsampling or deconvolution layers to perform feature learning and parameter regression on the constructed database; for the input three-dimensional volume data, the bounding box corresponding to the endometrial target area can be returned directly through the network, where the upsampling or deconvolution layers make the input and output the same size, so that the endometrial target area and its corresponding category are obtained directly from the input data. Common networks include FCN, U-Net, Mask R-CNN, and so on.
  • any of the above three methods can also be used first to calibrate the endometrial target area, after which a classifier is designed to classify the target area according to the calibration result. The classification is performed as follows: first, features are extracted from the target ROI or mask; the feature extraction method may be PCA, LDA, Haar features, texture features, etc., or a deep neural network may be used. The extracted features are then matched against the database, and a discriminator such as KNN, SVM, a random forest, or a neural network is used for classification.
  • a series of cross-sectional image data may be first extracted from the three-dimensional volume data, and then the endometrium may be detected based on the cross-sectional image data.
  • exemplarily, the ultrasound imaging device obtains sagittal image data containing the endometrium from the three-dimensional volume data of the uterine area; determines the center point of the endometrium based on the sagittal image data; identifies, based on that center point, the cross-sectional image data that is orthogonal to the sagittal image data and contains the endometrium; and obtains the position information of the endometrium based on the positions, within the three-dimensional volume data of the uterine area, of the identified cross-sectional image data and the sagittal image data containing the endometrium.
  • the three-dimensional volume data is obtained by ultrasonically scanning the uterine area, so among the cross-sectional images formed from the three-dimensional volume data there may be multiple cross-sections containing the endometrium. The ultrasonic imaging device can therefore detect the endometrium on a subset of cross-sectional images in the three-dimensional volume data and still achieve automatic endometrial imaging.
  • when the ultrasound imaging device collects three-dimensional volume data, the doctor usually scans the uterine area with the sagittal plane as the starting slice to obtain the three-dimensional volume data.
  • the slice-based method of detecting the endometrium is therefore to first obtain, from the three-dimensional volume data, sagittal image data (the A plane) containing the endometrium, obtain the center point of the endometrium from the sagittal image data, and at that point determine the cross-sectional image data (the B plane) containing the endometrium that is orthogonal to the sagittal data. By detecting the A and B planes, the position of the endometrium in the two orthogonal planes is known.
  • although this position does not cover the entire endometrial target area, it can still approximately express the position of the endometrium in the space of the three-dimensional volume data, so that automatic imaging can be performed according to this position information of the endometrium.
  • because the exact three-dimensional endometrium is not detected directly in the three-dimensional volume data, automatic detection only needs to be performed on a few slice images (such as sagittal images and cross-sectional images) to obtain the approximate position information of the endometrium, which greatly reduces the amount of computation.
  • in addition, by acquiring cross-sectional image data on top of the sagittal image data, the ultrasound imaging device can correct the flipping that may occur during image acquisition.
  • the method of detecting the endometrium from the cross-sectional image data is similar to the method of detecting the spatial position of the endometrium in the three-dimensional volume data: it can likewise be implemented with grayscale and/or morphological feature detection methods or with machine learning or deep learning algorithms, which will not be repeated here. The purpose is to obtain the position of the endometrium in the three-dimensional volume data as the basis for subsequent imaging.
  • the ultrasound device may perform endometrial imaging based on the three-dimensional volume data according to the position information of the endometrium to obtain an endometrial image.
  • specifically, the ultrasound imaging device can automatically obtain the target volume data related to the endometrium from the three-dimensional volume data according to the position information, and then, in combination with the selected imaging method, perform image reconstruction and other processing on the target image data to obtain the corresponding ultrasound image.
  • once the ultrasound imaging device has recognized the position information of the endometrium in the uterine area, i.e., has identified the key anatomical structure of the uterine area, it can realize automatic endometrial imaging according to the position of the key anatomical structure in the three-dimensional volume data.
  • the ultrasonic imaging device of the present invention is a three-dimensional imaging system, which can realize three ways of automatic endometrial imaging: endometrial VR imaging, endometrial CMPR imaging, and endometrial standard cut plane imaging.
  • the specific imaging mode of the present invention is not limited.
  • the ultrasound imaging device can extract the sagittal section image containing the endometrium from the three-dimensional volume data according to the position information of the endometrium, and then perform VR imaging and CMPR imaging based on the sagittal section image .
  • the VR imaging performed by the ultrasound imaging device renders the area within the VOI (Volume of interest) box, which is usually a rectangular parallelepiped.
  • a plane in the cuboid can also be turned into a curved surface, and the curved surface can better conform to the curved structure of the endometrium.
  • specifically, the sagittal section image containing the endometrium can be extracted from the three-dimensional volume data according to the position information of the endometrium; a preset drawing frame is enabled and adjusted so that it covers the endometrium on the sagittal section image; image drawing is then performed on the target three-dimensional volume data corresponding to the preset drawing frame to obtain a three-dimensional endometrial image, where the target three-dimensional volume data is contained in the three-dimensional volume data of the uterine area.
  • that is, a preset drawing frame is enabled and displayed on the sagittal section image shown on the display of the ultrasound imaging device; adjustment is performed based on the preset drawing frame so that it covers the endometrium on the sagittal section image; the target three-dimensional volume data corresponding to the area within the preset drawing frame is then automatically selected from the three-dimensional volume data and drawn as a VR image.
  • obtaining the VR image requires adjusting the orientation of the three-dimensional volume data (which contains the endometrial volume data), or setting the size and position of the VOI frame, so that the preset drawing frame just covers the endometrium on the sagittal slice image.
  • for uterine volume data, the user's main concern is the endometrium. Therefore, after key anatomical structures such as the endometrium are detected, the position and size of the three-dimensional volume data can be adjusted automatically according to the position information of the endometrium, so that the VOI box just wraps the endometrial area.
  • in practice, the size and position of the preset drawing frame can be adjusted so that it covers the endometrium on the sagittal section image; the orientation of the three-dimensional volume data of the uterine area can also be adjusted according to the orientation of the preset drawing frame on the sagittal slice image so that the frame covers the endometrium; the embodiments of the present invention are not limited in this respect.
  • that is, the ultrasound imaging device can determine the size and position of the endometrium on the sagittal image based on the position information of the endometrium and adjust the size and position of the preset drawing frame accordingly; and/or it can determine, from the position information of the endometrium, the position of the endometrium in the three-dimensional volume data of the uterine area and adjust the orientation of the three-dimensional volume data according to the preset drawing frame on the sagittal slice image.
  • the preset drawing frame is a VOI (Volume of Interest) frame.
  • VR imaging renders the area within the preset drawing frame to automatically form an image.
  • VOI can also turn a plane in a cuboid into a curved surface, and the remaining 5 faces are still the 5 faces of the cuboid.
  • the curved surface can be used to observe the curved tissue structure.
  • the purpose of setting the VOI frame is that, when the volume data is rendered stereoscopically, only the area inside the VOI frame is rendered and the area outside it is not; that is, through the VR image the user sees only the tissue imaged within the VOI frame.
  • the curved surface of the VOI frame coincides with the curved lower edge of the endometrium as much as possible, so that an endometrial coronal image can be rendered.
  • the preset drawing frame 1 (VOI frame) can be enabled so that it covers the endometrium, with the curved surface of the VOI coinciding as closely as possible with the lower edge of the endometrium; the structure within the preset drawing frame 1 is then automatically VR-imaged to obtain the coronal image of the endometrium shown in FIG. 5.
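  • a small sketch of automatically fitting an axis-aligned VOI box that just wraps a detected endometrium mask and extracting the corresponding sub-volume; the margin, the axis layout, and the synthetic mask are assumptions, and a real implementation would pass this sub-volume (with its curved lower face) to the volume renderer rather than merely crop it.

```python
import numpy as np

def fit_voi(volume, mask, margin=4):
    """Return the sub-volume inside an axis-aligned VOI box that just wraps the mask."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    box = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[box], box

# hypothetical volume and detected endometrium mask
vol = np.random.rand(80, 96, 96)
mask = np.zeros_like(vol, dtype=bool)
mask[30:50, 40:70, 45:60] = True
voi, box = fit_voi(vol, mask)
print(voi.shape, box)
```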
  • the coronal plane information of the endometrium can be displayed using CMPR in addition to displaying the VR image through three-dimensional reconstruction.
  • CMPR imaging takes a trajectory curve on a slice image of the three-dimensional volume data and cuts the three-dimensional volume data along that curve to obtain a curved cross-sectional image, which can be used to observe curved tissue structures. Since the shape of the endometrium usually follows a curved trajectory with a certain arc, a single plane in the three-dimensional volume data cannot directly display the coronal information of the endometrium, whereas the CMPR section can track and cover the entire endometrium well to obtain a complete coronal image.
  • a certain slice image may be a sagittal slice image or other slice images, which is not limited in the embodiment of the invention.
  • specifically, the ultrasound imaging device extracts the sagittal section image containing the endometrium from the three-dimensional volume data according to the position information of the endometrium, automatically generates the endometrial trajectory line on the sagittal section image, and performs endometrial curved-surface imaging on the three-dimensional volume data according to the trajectory line to obtain an endometrial image.
  • the trajectory line is a curve.
  • that is, after automatically identifying and obtaining the position information of the endometrium, the ultrasound imaging device automatically generates, on the sagittal slice image and according to the position information of the endometrium, a CMPR trajectory line that fits the endometrium, thereby achieving automated endometrial CMPR imaging.
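  • a simplified sketch of fitting a CMPR trajectory to endometrium center points on the sagittal plane and resampling the volume along it to build a curved coronal image; the polynomial fit, the axis convention (axis 0 = elevation, axis 1 = depth, axis 2 = lateral), and the synthetic points are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def cmpr_image(volume, xs, depth_fit):
    """Sample the volume along a curve depth = f(x) on the sagittal plane,
    sweeping across the elevation axis to form a curved 'coronal' image."""
    n_elev = volume.shape[0]
    E, X = np.meshgrid(np.arange(n_elev), xs, indexing="ij")   # output grid: elevation x lateral
    D = np.polyval(depth_fit, X)                               # curve depth at each lateral position
    coords = np.stack([E, D, X])                               # (3, n_elev, n_x) sampling coordinates
    return map_coordinates(volume, coords, order=1)            # trilinear interpolation along the curve

# hypothetical volume and a handful of endometrium center points (x, depth) on the sagittal slice
vol = np.random.rand(60, 120, 160)
pts_x = np.array([20, 50, 80, 110, 140])
pts_d = np.array([70, 60, 55, 58, 66])                         # curved trajectory through the endometrium
fit = np.polyfit(pts_x, pts_d, deg=2)                          # CMPR trajectory as a quadratic
coronal = cmpr_image(vol, np.arange(160), fit)
print(coronal.shape)                                           # (60, 160) curved coronal image
```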
  • in practice, the endometrium may be twisted to a certain degree in the acquired three-dimensional volume data of the uterine area. In that case, the orientation of the three-dimensional volume data needs to be adjusted so that the sagittal slice image contains as much of the endometrium as possible and the endometrium appears as an approximately elliptical image on the cross-section.
  • as shown in FIG. 10, the preset cross-sectional position of the endometrium on the cross-section can be the horizontal position indicated by the dotted horizontal line in the figure. When the endometrium is twisted, its image on the cross-section is rotated by a certain angle and the long axis of the elliptical image is no longer horizontal but inclined. In FIG. 10, the solid white line represents the long axis of the cross-sectional image of the endometrium; it is not aligned with the horizontal dotted line, indicating that the endometrium is twisted by a certain angle.
  • the present invention can adjust the orientation of the three-dimensional volume data of the uterine region.
  • in some embodiments of the present invention, the position information of the endometrium may include the position of the endometrium on the sagittal plane and its position on the transverse (cross-sectional) plane. The process by which the ultrasound imaging device automatically generates the endometrial trajectory on the sagittal section image can then be: adjust the orientation of the three-dimensional volume data until the position of the endometrium on the cross-section reaches the preset cross-sectional position (for example, the horizontal position shown in FIG. 10); the position of the endometrium on the sagittal plane is thereby determined, and the trajectory line of the endometrium is automatically fitted on the sagittal slice image according to that position.
  • that is, the ultrasound imaging device obtains the position of the endometrium on the sagittal plane and on the transverse plane, as sagittal-plane position information and cross-sectional position information respectively; it then rotates the volume data so that the endometrial position on the transverse plane becomes horizontal. This rotation also adjusts the position of the endometrium on the sagittal plane, and a CMPR curve is then fitted to the adjusted sagittal-plane endometrial position (the sagittal-plane position information) so that the curve passes through the central area of the endometrium. The CMPR image of the endometrium is then obtained by imaging based on this CMPR curve.
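  • a sketch of this orientation-correction step: estimate the in-plane tilt of the endometrium on the cross-section from its mask via PCA of the pixel coordinates, then rotate the volume about the corresponding axis; the axis layout and the synthetic elliptical mask are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def tilt_angle_deg(cross_mask):
    """Angle (degrees) between the long axis of the masked region and the horizontal."""
    coords = np.argwhere(cross_mask).astype(float)       # (row, col) pixel coordinates
    coords -= coords.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(coords.T))
    long_axis = eigvecs[:, np.argmax(eigvals)]           # principal direction of the ellipse
    if long_axis[1] < 0:
        long_axis = -long_axis                           # resolve the eigenvector sign ambiguity
    return np.degrees(np.arctan2(long_axis[0], long_axis[1]))

# hypothetical cross-sectional mask of a tilted elliptical endometrium
yy, xx = np.mgrid[0:128, 0:128]
theta = np.radians(20)
u = (xx - 64) * np.cos(theta) + (yy - 64) * np.sin(theta)
v = -(xx - 64) * np.sin(theta) + (yy - 64) * np.cos(theta)
mask = (u / 40) ** 2 + (v / 12) ** 2 <= 1

angle = tilt_angle_deg(mask)
vol = np.random.rand(64, 128, 128)                       # assumed order: (sagittal index, row, col)
# rotate to level the long axis (the sign may need flipping depending on the axis convention)
vol_aligned = rotate(vol, -angle, axes=(1, 2), reshape=False, order=1)
print(round(angle, 1))                                   # ~20 degrees for this synthetic mask
```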
  • in some embodiments, SCV (slice contrast view) imaging can also be used: SCV adds a thickness adjustment and renders the area within the thickness range, improving the contrast resolution and signal-to-noise ratio of the image.
  • specifically, the ultrasound imaging device extracts the sagittal section image containing the endometrium from the three-dimensional volume data according to the position information of the endometrium and automatically generates the endometrial trajectory line on the sagittal section image; it obtains the edge information of the endometrium on the sagittal slice image according to the endometrial position information; it determines the image drawing area based on the edge information and the trajectory line; and it performs endometrial curved-surface imaging on the target three-dimensional volume data corresponding to the image drawing area to obtain an endometrial image that reflects the thickness of the endometrium.
  • since the trajectory line that the ultrasound imaging device automatically generates on the sagittal section image is a single curve and cannot by itself represent the thickness of the endometrium, the edge information of the endometrium on the sagittal slice image is obtained from the position information of the endometrium, and the region of a certain thickness between the trajectory line and the edge is determined as the image drawing area. Endometrial curved-surface imaging then only needs to be performed on the target three-dimensional volume data corresponding to this image drawing area to obtain an endometrial image that reflects the endometrial thickness, which improves the resolution of the image.
  • exemplarily, the ultrasound imaging device extracts the sagittal slice image 1 containing the endometrium from the three-dimensional volume data according to the position information of the endometrium and automatically generates the trajectory line 2 of the endometrium; obtains the edge information 3 of the endometrium on the sagittal image 1 according to the position information of the endometrium; determines the image drawing area 4 based on the edge information 3 and the trajectory line 2; and performs endometrial curved-surface imaging on the target three-dimensional volume data corresponding to the image drawing area 4 to obtain an endometrial image reflecting the thickness of the endometrium (as shown in FIG. 7).
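  • a rough sketch of rendering within a thickness band around the fitted trajectory, here using a maximum-intensity projection across the band to mimic the thickness rendering described above; the band half-width, the quadratic trajectory, and the random volume are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def thick_coronal(volume, depth_fit, half_thickness=5):
    """Max-intensity projection over a band of +/- half_thickness around the
    trajectory depth = f(lateral), swept across the elevation axis."""
    n_elev, _, n_lat = volume.shape
    E, X = np.meshgrid(np.arange(n_elev), np.arange(n_lat), indexing="ij")
    slabs = []
    for dz in range(-half_thickness, half_thickness + 1):
        D = np.polyval(depth_fit, X) + dz               # offset surface within the thickness band
        coords = np.stack([E, D, X])
        slabs.append(map_coordinates(volume, coords, order=1))
    return np.max(slabs, axis=0)                        # render the band (here: max projection)

# hypothetical volume and a quadratic trajectory fitted to endometrium points
vol = np.random.rand(60, 120, 160)
fit = np.polyfit([20, 80, 140], [70, 55, 66], deg=2)
img = thick_coronal(vol, fit, half_thickness=4)
print(img.shape)                                        # (60, 160)
```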
  • in other embodiments of the present invention, the ultrasound imaging device can also obtain an endometrial cut-plane image through two-dimensional imaging: based on the position information of the endometrium detected in the three-dimensional volume data, the standard section of the endometrium can be obtained directly by planar imaging.
  • For example, the ultrasound imaging device fits the endometrial coronal plane according to the position information of the endometrium, obtains the grayscale image corresponding to that coronal plane from the three-dimensional volume data, and uses the grayscale image as the standard section image of the endometrium. The standard section image here is the coronal section image of the endometrium.
  • The endometrium is usually a curved structure, and VR imaging or CMPR can express such a curved structure better; as an approximation, however, the endometrial coronal plane can also be displayed directly as a flat plane.
  • That is, once the ultrasound imaging device has detected the position information of the endometrium in the three-dimensional volume data, it can fit the coronal plane of the endometrium so that the plane passes through the endometrial region and displays the endometrium as fully as possible (the endometrium is a sheet-like object with a certain thickness, and the coronal plane is the central plane of that sheet-like object).
  • The equation of this plane can be obtained either by solving a system of equations or by least-squares fitting.
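As an illustration of the least-squares option (not the patented implementation), a plane can be fitted to the endometrial voxel coordinates with a singular value decomposition; the interface below is an assumption.

```python
# Minimal sketch: least-squares plane fit to the 3D coordinates of voxels
# labelled as endometrium.
import numpy as np

def fit_plane(points):
    """points: (N, 3) voxel coordinates. Returns (centroid, unit normal) of the
    best-fit plane in the least-squares sense."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                        # direction of least variance = plane normal
    return centroid, normal / np.linalg.norm(normal)
```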
  • Once the plane equation is obtained, the grayscale image corresponding to that plane can be extracted from the three-dimensional volume data, yielding the standard endometrial section. In addition, based on the position information of the endometrium in the three-dimensional volume data, its angular deviation can be corrected by rotation, finally giving a section image (a two-dimensional plane) of the endometrium.
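A minimal sketch of extracting such a planar grayscale image from the volume is given below; it assumes the plane is described by the centroid and unit normal returned by the previous sketch and uses trilinear interpolation via SciPy. The helper name and parameters are illustrative.

```python
# Minimal sketch: lay a grid on the fitted coronal plane and sample the volume
# with trilinear interpolation to obtain the standard-section grayscale image.
import numpy as np
from scipy.ndimage import map_coordinates

def sample_plane(volume, centroid, normal, size=256, spacing=1.0):
    # Pick any vector not parallel to the normal to construct the in-plane axes.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, normal)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, helper); u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    r = (np.arange(size) - size / 2) * spacing
    grid_u, grid_v = np.meshgrid(r, r, indexing="ij")
    coords = (centroid[:, None, None]
              + u[:, None, None] * grid_u
              + v[:, None, None] * grid_v)               # (3, size, size) voxel coordinates
    return map_coordinates(volume, coords, order=1, mode="nearest")
```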
  • All of the above imaging methods can generate endometrial images and can be used independently or in combination; the embodiments of the present invention are not limited in this respect.
  • If the image quality is poor and the anatomical position of the endometrium detected by the algorithm is biased, the user can also use a keyboard, mouse or other tools to modify the detected VOI region or CMPR curve in the detected section (for example by moving, zooming, deleting and re-calibrating it), so as to realize semi-automatic VOI imaging or CMPR curved-surface imaging; for standard endometrial plane imaging, the user can likewise adjust the section with a knob. The embodiments of the present invention are not limited in this respect.
  • After the ultrasound imaging device acquires the endometrial images, it displays them on the display, and these endometrial images are stored in the memory.
  • When the ultrasound imaging device performs automatic VR imaging of the endometrium, a three-dimensional rendering algorithm such as ray tracing is used to obtain the endometrial VR image, which is displayed on the display.
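For illustration, a greatly simplified stand-in for such a renderer is sketched below: parallel rays are marched along the depth axis of the VOI with front-to-back alpha compositing. The opacity mapping is an assumption; a production renderer (for example, ray tracing with lighting) would be considerably more involved.

```python
# Minimal sketch: volume rendering of the VOI by ray casting with
# front-to-back alpha compositing along axis 0.
import numpy as np

def raycast_vr(voi, opacity_scale=0.02):
    vol = voi.astype(np.float32) / max(voi.max(), 1)      # normalize gray values to [0, 1]
    acc_color = np.zeros(vol.shape[1:], dtype=np.float32)
    acc_alpha = np.zeros(vol.shape[1:], dtype=np.float32)
    for depth_slice in vol:                               # march rays one step per slice
        alpha = np.clip(depth_slice * opacity_scale, 0.0, 1.0)
        acc_color += (1.0 - acc_alpha) * alpha * depth_slice
        acc_alpha += (1.0 - acc_alpha) * alpha
    return acc_color                                      # 2D rendering of the VOI contents
```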
  • When the ultrasound imaging device performs automatic CMPR imaging of the endometrium, the CMPR image of the endometrium is obtained and displayed on the display.
  • When the ultrasound imaging device performs automatic standard-section imaging, it automatically obtains the standard endometrial section image based on the standard section of the endometrium.
  • In some embodiments, a workflow may be set up that integrates the functions corresponding to the different imaging modes, so that the doctor can freely select among them and the image corresponding to the selected function is shown on the display.
  • Exemplarily, as shown in FIG. 8, the ultrasound imaging device obtains three-dimensional volume data of the uterine region by ultrasound and detects the key anatomical structure. Specifically, it performs feature recognition on the three-dimensional volume data to identify the key anatomical structure (the endometrium), i.e. the region of interest, or identifies the key anatomical structure, i.e. the region of interest, from section images of the three-dimensional volume data. After the endometrium has been identified, at least one of automatic endometrial VR imaging, automatic endometrial CMPR imaging and automatic standard endometrial section imaging is used to obtain an endometrial image (automatic endometrial imaging), and the imaging result is displayed, for example the VR rendering result, the CMPR result or the standard section of the endometrium.
  • It can be understood that the ultrasound imaging device can identify the endometrium from the three-dimensional volume data of the uterine region of the object to be detected, thereby obtain the position information of the endometrium, and then automatically image it to obtain the endometrial section image. The endometrial position found in this way is accurate, which improves the accuracy of ultrasound imaging, and the automatic imaging also improves the intelligence of ultrasound imaging.
  • On the basis of the above implementation, taking the key anatomical structure as the region of interest as an example, an ultrasound imaging method for a region of interest is provided. As shown in FIG. 9, the method may include:
  • S201: Perform ultrasonic scanning on the object to be detected to obtain three-dimensional volume data of the object to be detected.
  • S202: Identify the region of interest from the three-dimensional volume data of the object to be detected according to the image features of the region of interest, and obtain the position information of the region of interest.
  • S203: Process the three-dimensional volume data according to the position information of the region of interest to obtain an image of the region of interest.
  • S204: Display the image of the region of interest.
  • In an embodiment of the present invention, the ultrasound imaging device recognizes the region of interest from the three-dimensional volume data of the object to be detected according to the image features of the region of interest and obtains its position information in any of the following ways:
  • (1) Perform preset feature extraction on the three-dimensional volume data to obtain at least one candidate region of interest; match the at least one candidate region of interest against a preset template region, identify the candidate with the highest degree of matching as the region of interest, and obtain its position information.
  • (2) Process the three-dimensional volume data based on a preset positioning model, identify the region of interest in the object to be detected, and locate the position information of the region of interest; the preset positioning model characterizes the correspondence between three-dimensional volume data and regions of interest.
  • (3) Obtain sagittal-plane image data of the region of interest from the three-dimensional volume data; determine the center point of the region of interest from the sagittal-plane image data; based on that center point, obtain cross-sectional image data orthogonal to the sagittal-plane image data; and identify the region of interest from the cross-sectional and sagittal-plane image data to obtain its position information.
  • Before the three-dimensional volume data is processed based on the preset positioning model to identify the region of interest in the object to be detected and locate its position information, the preset positioning model first needs to be obtained.
  • The preset positioning model can be constructed in advance, and the already constructed model is called during the imaging process.
  • The process of constructing the preset positioning model may include: acquiring three-dimensional training volume data and regions of interest of at least two objects to be trained; and, based on the three-dimensional training volume data and the regions of interest, training a model with a preset machine learning algorithm to obtain the preset positioning model.
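A minimal sketch of constructing such a preset positioning model is given below, using a random-forest classifier on simple gray-level patch features as one possible "preset machine learning algorithm". The feature set, patch size and 8-bit gray-value assumption are illustrative choices, not the patent's specification; each training case is assumed to provide a volume plus a binary ROI mask of the same shape and enough voxels of each class.

```python
# Minimal sketch: train a voxel-patch classifier as a "preset positioning model".
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(volume, center, half=8):
    z, y, x = center
    p = volume[z - half:z + half, y - half:y + half, x - half:x + half].astype(np.float32)
    hist, _ = np.histogram(p, bins=16, range=(0, 255), density=True)   # assumes 8-bit gray
    return np.concatenate([[p.mean(), p.std()], hist])                 # simple descriptors

def train_positioning_model(volumes, masks, samples_per_case=200, half=8):
    rng = np.random.default_rng(0)
    feats, labels = [], []
    for vol, msk in zip(volumes, masks):
        core = msk[half:-half, half:-half, half:-half]                 # keep patches in bounds
        inside = np.argwhere(core > 0) + half
        outside = np.argwhere(core == 0) + half
        for pool, lab in ((inside, 1), (outside, 0)):
            picks = pool[rng.choice(len(pool), samples_per_case, replace=False)]
            feats += [patch_features(vol, c, half) for c in picks]
            labels += [lab] * samples_per_case
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(np.asarray(feats), np.asarray(labels))                    # the trained model
    return model
```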
  • In an embodiment of the present invention, the ultrasound imaging device processes the three-dimensional volume data according to the position information of the region of interest to obtain a region-of-interest image in any of the following ways:
  • (1) Obtain a preset drawing frame; make the preset drawing frame cover the target drawing area corresponding to the position information of the region of interest; and perform image drawing on the target three-dimensional volume data corresponding to the preset drawing frame to obtain a three-dimensional region-of-interest image, the target three-dimensional volume data being included in the three-dimensional volume data.
  • (2) Generate the trajectory line of the region of interest according to the position information of the region of interest, and perform image drawing of the region of interest on the three-dimensional volume data according to the trajectory line to obtain the region-of-interest image.
  • (3) Obtain the edge information of the region of interest; determine the image drawing area from the edge information and the trajectory line; and perform image drawing of the region of interest on the three-dimensional volume data according to the image drawing area to obtain the three-dimensional region-of-interest image.
  • (4) Fit the coronal plane of the region of interest according to its position information; obtain the grayscale image corresponding to that coronal plane from the three-dimensional volume data; and use the grayscale image as the standard section image of the region of interest.
  • The position information of the region of interest may include sagittal-plane position information and cross-sectional position information. In that case, the trajectory line of the region of interest is generated as follows: rotate the cross-sectional position information to the same horizontal plane as the sagittal-plane position information to obtain rotated cross-sectional position information, and fit the trajectory line of the region of interest from the rotated cross-sectional position information and the sagittal-plane position information.
  • An embodiment of the present invention provides an ultrasound imaging device. As shown in FIG. 1, the ultrasound imaging device includes:
  • a probe 100;
  • a transmitting circuit 101, used to excite the probe 100 to transmit ultrasonic waves to the object to be detected;
  • a transmit/receive selection switch 102;
  • a receiving circuit 103, configured to receive, through the probe 100, the ultrasonic echo returned from the object to be detected, so as to obtain an ultrasonic echo signal/data;
  • a beamforming circuit 104, used to perform beamforming processing on the ultrasonic echo signal/data to obtain the beamformed ultrasonic echo signal/data;
  • a processor 105, configured to process the beamformed ultrasonic echo signal to obtain three-dimensional volume data of the uterine region of the object to be detected; to identify the endometrium from the three-dimensional volume data of the uterine region according to the image features of the endometrium of the uterine region and obtain the position information of the endometrium; and to perform endometrial imaging based on the three-dimensional volume data according to the position information of the endometrium to obtain an endometrial image;
  • the display 106 is used to display the endometrial image.
  • In some embodiments, the processor 105 may be configured to identify the endometrium from the three-dimensional volume data of the uterine region according to the difference in image features between the endometrium and the uterine basal-layer tissue, and/or according to the periodically changing morphological features of the endometrium of the uterine region, so as to obtain the position information of the endometrium.
  • In some embodiments, the processor 105 may be configured to perform preset feature extraction on the three-dimensional volume data of the uterine region to obtain at least one candidate region of interest; to obtain three-dimensional template data of a uterine region in which the endometrium has already been identified and derive a preset template region of the endometrium from that template data; to match the at least one candidate region of interest against the preset template region and identify the candidate with the highest degree of matching as the target region of the endometrium of the object to be detected; and to obtain the position information of the endometrium from the position of that target region in the three-dimensional volume data.
  • In some embodiments, the processor 105 may further be configured to extract a feature index of the at least one candidate region of interest, where the feature index includes a shape feature, a texture feature, a boundary feature or a gray-level distribution feature; to calculate, based on the feature index, the correlation between the at least one candidate region of interest and the preset template region; and to take the candidate region of interest with the highest correlation, whose correlation also exceeds a preset threshold, as the target region of the endometrium of the object to be detected.
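The sketch below illustrates one way such a feature index and correlation could be computed; the particular features (area, aspect ratio, gray-level mean and standard deviation) and the use of a Pearson correlation with a threshold are assumptions made for illustration.

```python
# Minimal sketch: score candidate regions against the preset template region
# with a correlation on simple shape/gray-level indices.
import numpy as np

def region_features(gray, mask):
    vals = gray[mask > 0]
    area = float(mask.sum())
    bbox = np.ptp(np.argwhere(mask > 0), axis=0) + 1      # bounding-box extents
    aspect = bbox.max() / max(bbox.min(), 1)               # coarse shape descriptor
    return np.array([area, aspect, vals.mean(), vals.std()])

def best_candidate(gray, candidate_masks, template_feat, threshold=0.9):
    scores = [np.corrcoef(region_features(gray, m), template_feat)[0, 1]
              for m in candidate_masks]
    best = int(np.argmax(scores))
    # Accept the top candidate only if its correlation exceeds the preset threshold.
    return (candidate_masks[best], scores[best]) if scores[best] > threshold else (None, None)
```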
  • In some embodiments, the processor 105 may be configured to perform image segmentation on the three-dimensional volume data of the uterine region and to apply morphological operations to the segmentation result, so as to obtain the at least one candidate region of interest with a complete boundary.
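For illustration, a minimal segmentation-plus-morphology pipeline of this kind could look as follows; Otsu thresholding, closing and hole filling are assumed choices here (the description also mentions alternatives such as level sets, graph cuts or Snake), and the size threshold is a hypothetical parameter.

```python
# Minimal sketch: binarize the volume, clean it up morphologically, and keep
# connected components with complete boundaries as candidate regions of interest.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def candidate_regions(volume, min_voxels=500):
    binary = volume > threshold_otsu(volume)               # gray-level split (Otsu)
    binary = ndimage.binary_closing(binary, iterations=2)  # morphological cleanup
    binary = ndimage.binary_fill_holes(binary)             # make boundaries complete
    labels, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return [(labels == i + 1) for i, s in enumerate(sizes) if s >= min_voxels]
```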
  • In some embodiments, the processor 105 may be configured to obtain a preset positioning model, where the preset positioning model includes three-dimensional positive sample data of uterine regions in which the endometrium has been identified and calibration information of the endometrium in that three-dimensional positive sample data; and, based on the calibration information of the endometrium in the preset positioning model, to identify the endometrium from the three-dimensional volume data of the uterine region of the object to be detected and locate the position information of the endometrium.
  • In some embodiments, the processor 105 may further be configured to learn the image feature patterns of the endometrium by deep learning or machine learning using the calibration information of the endometrium in the preset positioning model; and, based on these image feature patterns, to extract the target region containing the endometrium from the three-dimensional volume data of the uterine region of the object to be detected and output the position information of that target region in the three-dimensional volume data as the position information of the endometrium.
  • In some embodiments, the processor 105 may further be configured to acquire three-dimensional training volume data of at least two objects to be trained, the three-dimensional training volume data including at least three-dimensional positive sample data of uterine regions in which the endometrium has been identified; to calibrate the endometrium, or anatomy associated with the endometrium, in the three-dimensional training volume data as the calibration information of the endometrium in that training data; and, based on the three-dimensional training volume data and the calibration information of the endometrium, to train a model by machine learning or deep learning to obtain the preset positioning model.
  • In some embodiments, the processor 105 may be configured to obtain, from the three-dimensional volume data of the uterine region, sagittal-plane image data in which the endometrium has been identified; to determine the center point of the endometrium from the sagittal-plane image data; based on that center point, to obtain cross-sectional image data that is orthogonal to the sagittal-plane image data and in which the endometrium has been identified; and to obtain the position information of the endometrium from the positions of the identified cross-sectional image data and sagittal-plane image data within the three-dimensional volume data of the uterine region.
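A minimal sketch of this two-plane localization is given below, assuming the axis convention volume[elevation, depth, width] with the sagittal (A) plane at a fixed elevation index and a binary endometrium mask of that slice already detected; the function name is hypothetical.

```python
# Minimal sketch: locate the endometrium on the sagittal slice, take its center,
# and cut the orthogonal transverse plane through that point.
import numpy as np

def orthogonal_sections(volume, sag_index, sag_mask):
    sagittal = volume[sag_index]                            # A-plane containing the endometrium
    depth_c, width_c = np.argwhere(sag_mask > 0).mean(axis=0).round().astype(int)
    transverse = volume[:, depth_c, :]                      # B-plane orthogonal to the A-plane
    return sagittal, transverse, (depth_c, width_c)
```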
  • In some embodiments, the processor 105 may be configured to extract, according to the position information of the endometrium, a sagittal section image containing the endometrium from the three-dimensional volume data; to enable and adjust a preset drawing frame so that the preset drawing frame covers the endometrium on the sagittal section image; and to perform image drawing on the target three-dimensional volume data corresponding to the preset drawing frame to obtain a three-dimensional endometrial image, the target three-dimensional volume data being included in the three-dimensional volume data of the uterine region.
  • In some embodiments, the processor 105 may further be configured to determine the size and position of the endometrium on the sagittal section image according to the position information of the endometrium and adjust the size and position of the preset drawing frame accordingly; and/or to determine the orientation of the endometrium in the three-dimensional volume data of the uterine region according to the position information of the endometrium and adjust the orientation of the three-dimensional volume data of the uterine region according to the orientation of the preset drawing frame on the sagittal section image.
  • In some embodiments, the processor 105 may be configured to extract, according to the position information of the endometrium, a sagittal section image containing the endometrium from the three-dimensional volume data and automatically generate the trajectory line of the endometrium on that sagittal section image; and to perform endometrial curved-surface imaging on the three-dimensional volume data according to the trajectory line to obtain the endometrial image.
  • the position information of the endometrium includes: sagittal position information and cross-sectional position information;
  • The processor 105 may also be configured to adjust the orientation of the three-dimensional volume data until the position of the endometrium on the cross-sectional plane meets a preset cross-sectional position; for example, the preset cross-sectional position may be the horizontal position shown in FIG. 10. Based on the orientation-adjusted three-dimensional volume data, the position of the endometrium on the sagittal plane is then determined, and the trajectory line of the endometrium is automatically fitted on the sagittal section image according to that sagittal-plane position.
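The sketch below illustrates one way the orientation adjustment could be computed, estimating the long-axis angle of the endometrium on the transverse slice from its mask and rotating the volume about the elevation/width plane; the axis convention and the sign of the correction are assumptions that depend on the probe and display conventions.

```python
# Minimal sketch: rotate the volume so the endometrium lies horizontally on the
# transverse slice (as in FIG. 10), using the principal axis of its mask.
import numpy as np
from scipy import ndimage

def level_endometrium(volume, trans_mask):
    pts = np.argwhere(trans_mask > 0).astype(np.float32)    # (row, col) pixels of the mask
    pts -= pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    long_axis = vt[0]                                        # principal direction of the region
    angle = np.degrees(np.arctan2(long_axis[0], long_axis[1]))
    # Rotate within the transverse plane (axes 0 and 2 of [elevation, depth, width]);
    # the sign of the correction may need flipping for a different orientation convention.
    return ndimage.rotate(volume, -angle, axes=(0, 2), reshape=False, order=1)
```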
  • In some embodiments, the processor 105 may further be configured, after the trajectory line of the endometrium has been automatically generated on the sagittal section image, to obtain the edge information of the endometrium on the sagittal section image according to the position information of the endometrium; to determine the image drawing area from the edge information and the trajectory line; and to perform curved-surface imaging of the endometrium on the target three-dimensional volume data corresponding to the image drawing area, obtaining a three-dimensional endometrial image that reflects the thickness of the endometrium.
  • In some embodiments, the processor 105 may be configured to fit the endometrial coronal plane according to the position information of the endometrium, to obtain the grayscale image corresponding to the endometrial coronal plane from the three-dimensional volume data, and to use the grayscale image as the standard section image of the endometrium.
  • It can be understood that the ultrasound imaging device can identify the endometrium from the three-dimensional volume data of the uterine region of the object to be detected and thereby obtain the position information of the endometrium, which removes the tedious need for the user to perform endometrial positioning manually again and again, helps the user identify the endometrium quickly, and improves overall work efficiency.
  • The ultrasound imaging device can also automatically image the endometrium according to its position information to obtain the endometrial image. Since the automatically identified position of the endometrium is accurate, the accuracy of ultrasound imaging is improved, and the automatic imaging also improves the intelligence of ultrasound imaging.
  • An embodiment of the present invention provides a computer-readable storage medium that stores an ultrasound imaging program, and the ultrasound imaging program may be executed by a processor to implement the above-mentioned ultrasound imaging method.
  • The computer-readable storage medium may be a volatile memory, such as a random-access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); it may also be a device that includes one of the above memories or any combination thereof, such as a mobile phone, a computer, a tablet device or a personal digital assistant.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Acoustics & Sound (AREA)
  • Vascular Medicine (AREA)
  • Physiology (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

An ultrasonic imaging method and device (10), and a computer-readable storage medium. The ultrasonic imaging method comprises: transmitting ultrasonic waves to the uterine region of an object to be detected (S201); receiving ultrasonic echoes, based on the ultrasonic waves, returned from the uterine region of the object to be detected, and acquiring an ultrasonic echo signal on the basis of the ultrasonic echoes (S202); processing the ultrasonic echo signal to obtain three-dimensional volume data of the uterine region of the object to be detected (S203); identifying the endometrium from the three-dimensional volume data of the uterine region according to image features of the endometrium of the uterine region, so as to obtain position information of the endometrium (S204); performing endometrial section imaging on the basis of the three-dimensional volume data according to the position information of the endometrium to obtain an endometrial section image; and displaying the endometrial section image.

Description

一种超声成像方法及设备 技术领域
本发明实施例涉及超声成像技术领域,尤其涉及一种超声成像方法及设备、计算机可读存储介质。
背景技术
现代医学影像检查中,超声技术因其高可靠性、快速便捷、实时成像以及可重复检查等优点,已经成为应用最广、使用频率最高、普及应用最快的检查手段。尤其是基于人工智能辅助技术的发展,进一步推动了超声技术在临床诊疗中的应用。
妇科超声检查是超声诊断中相对重要并且广泛应用的领域之一。其中,子宫及其附件的超声检查可以为很多妇科疾病的诊断和治疗提供重要指导。由于三维超声可以呈现子宫的冠状切面声像图,清晰显示子宫内膜是否发生病变及形态是否完整,因此,采用三维超声技术实现子宫相关妇科疾病的诊断具有重要意义。
虽然三维超声技术具有上述优势,但是由于三维容积图像坐标轴容易混乱,加上子宫的各种方位变化及三维空间比较抽象等原因,医生在手动进行子宫部位的寻找和确定标准的子宫内膜切面图像时,可能需要反复旋转三维容积图像,逐个切面寻找标准的子宫内膜切面。该手动定位的过程不仅费时费力,而且成像的智能性和准确率也有限。
发明内容
本发明实施例提供了一种超声成像方法,所述方法包括:
发射超声波至待检测对象的子宫区域进行体扫描;
接收从所述待检测对象的子宫区域返回的超声回波,并基于所述超 声回波获取超声回波信号;
对所述超声回波信号进行处理,得到所述待检测对象的子宫区域的三维体数据;
根据子宫区域的子宫内膜的图像特征,从所述子宫区域的三维体数据中识别出子宫内膜,得到所述子宫内膜的位置信息;
根据所述子宫内膜的位置信息,基于所述三维体数据进行子宫内膜成像,得到子宫内膜图像;以及,
显示所述子宫内膜图像。
本发明实施例还提供了一种超声成像方法,包括:
对待检测对象进行超声体扫描,得到所述待检测对象的三维体数据;
根据所述待检测对象中感兴趣区域的图像特征,从所述待检测对象的三维体数据中识别出感兴趣区域,得到所述感兴趣区域的位置信息;
根据所述感兴趣区域的位置信息,对所述三维体数据进行处理,得到感兴趣区域图像;
显示所述感兴趣区域图像。
本发明实施例提供了一种超声成像设备,所述超声成像设备包括:
探头;
发射电路,用于激励所述探头向待检测对象发射超声波以进行体扫描;
发射/接收选择开关;
接收电路,用于通过所述探头接收从所述待检测对象返回的超声回波,从而获得超声回波信号/数据;
波束合成电路,用于对所述超声回波信号/数据进行波束合成处理,获得波束合成后的超声回波信号/数据;
处理器,用于对所述波束合成后的超声回波信号进行处理,得到所述待检测对象的子宫区域的三维体数据;根据子宫区域的子宫内膜的图 像特征,从所述子宫区域的三维体数据中识别出子宫内膜,得到所述子宫内膜的位置信息;根据所述子宫内膜的位置信息,基于所述三维体数据进行子宫内膜成像,得到子宫内膜图像;
显示器,用于显示所述子宫内膜图像。
本发明实施例提供了一种计算机可读存储介质,所述计算机可读存储介质存储有超声成像程序,所述超声成像程序可以被处理器执行,以实现上述的超声成像方法。
本发明实施例提供了一种超声成像方法及设备、计算机可读存储介质,采用上述技术实现方案,超声成像设备可以根据子宫内膜的图像特征自动得到子宫内膜的位置信息,省去了需要用户不断手动进行子宫内膜定位的繁琐操作,便于用户快速识别子宫内膜,提高整体工作效率;超声成像设备还可以根据
子宫内膜的位置信息自动成像得到子宫内膜图像,鉴于自动识别的子宫内膜的位置是准确的,提高了后续超声成像的准确度,并且可以自动成像还提高了超声波图像成像的智能性。
附图说明
图1为本发明实施例提供的超声成像设备的结构框图示意图;
图2为本发明实施例提供的一种超声成像方法的流程图一;
图3为本发明实施例提供的示例性的超声成像流程框图一;
图4为本发明实施例提供的示例性的VOI框示意图;
图5为本发明实施例提供的示例性的VR成像结果;
图6为本发明实施例提供的示例性的CMPR成像过程示意图;
图7为本发明实施例提供的示例性的CMPR成像结果;
图8为本发明实施例提供的示例性的超声成像流程框图二;
图9为本发明实施例提供的一种超声成像方法的流程图二;
图10为本发明实施例提供的子宫内膜的横切面图像的示意图。
具体实施方式
为了能够更加详尽地了解本发明实施例的特点与技术内容,下面结合附图对本发明实施例的实现进行详细阐述,所附附图仅供参考说明之用,并非用来限定本发明实施例。
图1为本发明实施例中的超声成像设备的结构框图示意图。超声成像设备10可以包括探头100、发射电路101、发射/接收选择开关102、接收电路103、波束合成电路104、处理器105和显示器106。发射电路101可以激励探头100向目标组织发射超声波;接收电路103可以通过探头100接收从待检测对象返回的超声回波,从而获得超声回波信号/数据;该超声回波信号/数据经过波束合成电路104进行波束合成处理后,送入处理器105。处理器105对该超声回波信号/数据进行处理,以获得待检测对象的超声图像。处理器105获得的超声图像可以存储于存储器107中。这些超声图像可以在显示器106上显示。
本发明的一个实施例中,前述的超声成像设备10的显示器106可为触摸显示屏、液晶显示屏等,也可以是独立于超声成像设备10之外的液晶显示器、电视机等独立显示设备,也可为手机、平板电脑等电子设备上的显示屏,等等。
实际应用中,处理器105可以为特定用途集成电路(Application Specific Integrated Circuit,ASIC)、数字信号处理器(Digital Signal Processor,DSP)、数字信号处理装置(Digital Signal Processing Device,DSPD)、可编程逻辑装置(Programmable Logic Device,PLD)、现场可编程门阵列(Field Programmable Gate Array,FPGA)、中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器中的至少一种,从而使得该处理器105可以执行本发明的各个实施例中的超声成像方法的相应步骤。
存储器107可以是易失性存储器(volatile memory),例如随机存取存储器(Random Access Memory,RAM);或者非易失性存储器(non-volatile memory),例如只读存储器(Read Only Memory,ROM),快闪存储器(flash memory),硬盘(Hard Disk Drive,HDD)或固态硬盘(Solid-State Drive,SSD);或者以上种类的存储器的组合,并向处理器提供指令和数据。
以下基于上述超声成像设备10,对本发明的技术方案进行详细说明。
本发明实施例提供了一种超声成像方法,如图2所示,该方法可以包括:
S101、发射超声波至待检测对象的子宫区域,以进行体扫描。
在本发明实施例中,超声成像设备可以通过探头发射超声波至待检测对象的子宫区域,实现对子宫区域的超声扫描和检查,用于对子宫区域进行检测的场景下。
需要说明的是,待检测对象可以为人体器官或人体组织结构等包含子宫区域的对象,这里的子宫区域为包含全部或部分子宫、或包含全部或部分子宫和子宫附件的区域。
在本发明实施例中,超声成像设备可以通过对子宫区域的关键解剖结构进行识别,通过关键解剖结构的位置表征子宫区域。这里子宫区域的关键解剖结构可以为子宫内膜。因此,本发明实施例通过识别出子宫内膜的位置,表征子宫区域的超声图像。
S102、接收从待检测对象的子宫区域返回的超声回波,并基于超声回波获取超声回波信号。
S103、对超声回波信号进行处理,得到待检测对象的子宫区域的三维体数据。
超声成像设备的接收电路可以通过探头接收从待检测对象的子宫区域返回的超声回波,从而获得超声回波信号/数据;该超声回波信号/数据经过波束合成电路进行波束合成处理后,送入处理器。超声成像设备的处理器 对该超声回波信号/数据进行信号处理和三维重建,以获得待检测对象的子宫区域的三维体数据。
需要说明的是,如图3所示,发射电路将一组经过延迟聚焦的脉冲发送到探头,探头向待检测对象的机体组织发射超声波,经过一定延时后接收从待检测对象的机体组织反射回来的带有组织信息的超声回波,并将此超声回波重新转换为电信号,接收电路接收该电信号(超声回波信号),并将此超声回波信号送入波束合成电路,超声回波信号在波束合成电路完成聚焦延时、加权和通道求和,再经过信号处理模块(即处理器)进行信号处理,然后将处理后的信号送入三维重建模块(即处理器),经过图像绘制渲染后处理,得到可视化信息超声波图像,然后传输到显示器显示超声波图像。
S104、根据子宫区域的子宫内膜的图像特征,从子宫区域的三维体数据中识别出子宫内膜,得到子宫内膜的位置信息。
在本发明实施例中,超声成像设备在得到了待检测对象的子宫区域的三维体数据之后,就可以根据子宫区域的子宫内膜的图像特征,对子宫区域的三维体数据进行特征提取、特征对比,从而识别出子宫内膜,进而得到子宫内膜的位置信息。
需要说明的是,在进行子宫内膜的三维重建之前,超声成像设备需要识别哪些解剖结构和待确定的子宫内膜相关。例如,在子宫区域的体数据中,子宫内膜的回声和周围组织的回声存在明显的差异,同时随着女性生理周期的变化,子宫内膜的形态也呈现周期性变化,特征比较明显,所以可以将子宫内膜作为子宫区域的关键解剖结构,确定子宫内膜切面。在本发明实施例中,子宫区域的关键解剖结构的检测包括但并不仅限于子宫内膜。
在本发明的一些实施例中,子宫内膜与子宫基层组织对超声波的反射能力不同,对应得到的超声回波信号的灰度特征存在差异,因此超声成像 设备可以根据子宫区域的子宫内膜与子宫基层组织的图像特征的差异,从子宫区域的三维体数据中识别出子宫内膜。超声成像设备可以根据灰度值的差异,确定子宫内膜与子宫基层组织的边界,从而在三维体数据中识别出子宫内膜。在本发明的一些实施例中,随着女性生理周期的变化,子宫内膜的形态也呈现周期性变化,因此超声成像设备可以根据子宫区域的子宫内膜的可周期性变化的形态特征,从子宫区域的三维体数据中识别出子宫内膜,得到子宫内膜的位置信息。超声成像设备可基于子宫内膜在生理周期不同时期的形态特征,从子宫区域的三维体数据中识别出子宫内膜。下面将具体进行介绍。
需要说明的是,子宫内膜等关键解剖结构的识别方法可以是手动的,也可以是自动的。手动获取解剖结构时用户可以通过键盘、鼠标等工具,通过一定的工作流在三维体数据中特定的解剖结构上点点、画线等,来告知关键解剖结构的类型和位置。在本发明实施例中,采用自动识别子宫内膜的方式,自动识别子宫内膜是指通过提取三维体数据的特征,利用该特征自动检测出子宫内膜在三维体数据中的位置。
在本发明实施例中,自动识别关键解剖结构的方法分为两种情况:一种是直接在三维体数据中确定子宫内膜的空间位置;另一种是在三维体数据的切面中检测子宫内膜,根据切面位置在三维体数据中的位置以及子宫内膜在切面中的位置,确定子宫内膜在三维体数据中的位置。其中,子宫内膜等关键解剖结构位置的表达方式可以是用一个感兴趣(ROI,region of interest)框把解剖位置包住,也可以是精确分割出解剖结构的边界,还可以用一个或多个点辅助表达,自动识别三维体数据中子宫内膜这个关键解剖结构的方法有很多,本发明实施例不作限制。
示例性的,在三维体数据中确定子宫内膜的空间位置,从而获取到最标准的子宫内膜切面的过程可以基于灰度和/或形态学等特征检测方法,实现对子宫内膜的检测;也可以采用机器学习或深度学习的方法在三维体数 据中检测或精确分割出子宫内膜,本发明实施例不作限制。
在本发明的一些实施例中,超声成像设备根据子宫区域的子宫内膜的图像特征,从子宫区域的三维体数据中识别出子宫内膜,得到子宫内膜的位置信息的实现方式可以包括以下几种,本发明实施例不作限制。
在本发明的一个实施例中,超声成像设备对子宫区域的三维体数据进行预设特征提取,得到至少一个候选感兴趣区域;获取已识别出子宫内膜的子宫区域的三维模板数据,根据该三维模板数据,获得子宫内膜的预设模板区域;将至少一个候选感兴趣区域和预设模板区域进行匹配,识别出匹配度最高的候选感兴趣区域作为待检测对象的子宫内膜的目标区域,并根据子宫内膜的目标区域在三维体数据中的位置,得到子宫内膜的位置信息。
这里,预设特征可以为形态学特征,超声成像设备对子宫区域的三维体数据进行二值化分割,并对二值化分割结果进行形态学操作处理,从而得到具有完整边界的至少一个候选感兴趣区域。这里的形态学操作例如可以是对二值化分割结果进行膨胀处理或腐蚀处理。膨胀处理可以一定程度地扩大二值化分割结果的边缘。腐蚀处理可以将二值化分割结果缩小。
在本发明实施例中,由于在子宫区域的体数据中,子宫内膜的回声和周围组织的回声存在明显的差异,同时随着女性生理周期的变化,子宫内膜的形态也呈现周期性变化,特征比较明显,因此,可以采用灰度和/或形态学等特征检测方法,实现对子宫内膜的检测。
在本发明的一些实施例中,超声成像设备将至少一个候选感兴趣区域和预设模板区域进行匹配,识别出匹配度最高的候选感兴趣区域作为待检测对象的子宫内膜的目标区域的具体实现可以为:提取至少一个候选感兴趣区域的特征指数,特征指数包括形状特征、纹理特征、边界特征或灰度分布特征;基于特征指数,计算至少一个候选感兴趣区域与预设模板区域 的相关度;以及,将相关度最高且相关度超过预设阈值的候选感兴趣区域作为待检测对象的子宫内膜的目标区域。
需要说明的是,基于特征指数,计算至少一个候选感兴趣区域与预设模板区域的相关度的方式本发明实施例不作限制,可以为特征匹配,可以为特征的差异度等。
在本发明实施例中,预设阈值可以为90%,具体的本发明实施例不作限制。
示例性的,对三维体数据进行二值化分割,进行一些必要的形态学操作后得到至少一个候选感兴趣区域,然后对每个候选感兴趣区域根据形状特征判断该候选感兴趣区域是子宫内膜的概率,选择一个概率最高的区域作为目标区域(即匹配度最高的)。具体的,超声成像设备可以事先获取已识别出子宫内膜的子宫区域的三维模板数据,根据该三维模板数据,获得子宫内膜的预设模板区域,再将至少一个候选感兴趣区域和预设模板区域进行匹配,识别出匹配度最高的候选感兴趣区域作为待检测对象的子宫内膜的目标区域。
也就是说,超声成像设备对三维体数据进行形状特征提取,从子宫区域中得到不同形状特征的至少一个候选感兴趣区域;将至少一个候选感兴趣区域对应的形状特征与预设模板区域的形状特征进行对比,得到至少一个对比结果;至少一个对比结果与至少一个候选感兴趣区域一一对应;将至少一个对比结果中最高的对比结果对应的候选感兴趣区域,识别为子宫内膜(即目标区域);从所述三维超声图像数据中,获取子宫内膜的位置信息(即目标区域在三维体数据中的位置)。
在本发明实施例中,超声成像设备也可以采用其他灰度检测和分割方法,例如大津阈值(OTSU)、水平集(LevelSet)、图割(Graph Cut)、Snake等实现对子宫内膜的目标区域的分割,本发明实施例不作限制。
在本发明的一个实施例中,可以基于机器学习或深度学习方法实现对 子宫内膜的检测。采用机器学习或深度学习方法时,先通过一系列训练样本对超声成像设备进行训练,建立预设定位模型,然后基于训练学习到的特征,对子宫区域的三维体数据进行分类和回归,得到子宫内膜在三维体数据中的位置信息。
超声成像设备获取预设定位模型,预设定位模型包括已识别出子宫内膜的子宫区域的三维正样本数据、以及子宫内膜在该三维正样本数据中的标定信息;基于预设定位模型中子宫内膜的标定信息,从待检测对象的子宫区域的三维体数据中识别出子宫内膜,定位出子宫内膜的位置信息。
在本发明实施例中,一种目标区域的定位和识别的方法可以是采用机器学习或深度学习的方法在三维体数据中检测或精确分割出关键解剖结构(例如,子宫内膜)。例如,可首先学习数据库中区别目标区域(正样本:子宫内膜区域)和非目标区域(负样本:背景区域)的特征或规律,再根据学习到的特征或规律对其他图像的关键解剖结构进行定位和识别。
可以理解的是,这里采用正样本和负样本对预设定位模型进行训练,可以得到更全面和准确的模型,提高识别的准确率。
需要说明的是,在本发明实施例中,预设定位模型包括已识别出子宫内膜的子宫区域的三维正样本数据、以及子宫内膜在该三维正样本数据中的标定信息,预设定位模型是采用机器学习或深度学习的方法进行模型训练得到的。这里的三维正样本数据就是指包含有子宫内膜的特征体数据。
在本发明的一些实施例中,超声成像设备通过模型训练得到预设定位模型的过程为:超声成像设备获取至少两个待训练对象的三维训练体数据,三维训练体数据至少包括已识别出子宫内膜的子宫区域的三维正样本数据;在三维训练体数据中标定出子宫内膜或子宫内膜的关联解剖结构,作为子宫内膜在该三维训练体数据中的标定信息;以及,基于三维训练体数据和子宫内膜的标定信息,采用机器学习或深度学习的方法进行模型训练,得到预设定位模型。
其中,预设定位模型表征三维体数据分别与标定信息的对应关系。
在本发明实施例中,三维训练体数据和子宫内膜的标定信息(即数据库)为多份子宫内膜体数据及关键解剖结构的标定结果。其中,标定结果可以根据实际的任务需要进行设定,可以是包含目标的感兴趣区域(region of interest,ROI)框,也可以是对子宫内膜区域进行精确分割的掩膜,本发明实施例不做限定。
在本发明的一些实施例中,超声成像设备利用预设定位模型中子宫内膜的标定信息,通过深度学习或机器学习的方法学习得到子宫内膜的图像特征规律;基于子宫内膜的图像特征规律,从待检测对象的子宫区域的三维体数据中提取出含子宫内膜的目标区域,并输出该目标区域在三维体数据中的位置信息,作为子宫内膜的位置信息。
也就是说,超声成像设备识别子宫内膜可以分为两个步骤:1、获取数据库,该数据库中包含了多个三维训练体数据及对应的子宫内膜的标定结果,其中,子宫内膜的标定结果可以根据实际的任务需要进行设定,可以是包含子宫内膜的ROI(感兴趣区域)框,也可是对子宫内膜进行精确分割的Mask(掩膜);2、定位和识别,即利用机器学习算法学习数据库中可以区别子宫内膜的目标区域和非子宫内膜区域的特征或者规律来实现对超声图像的感兴趣区域的识别和定位。
可选的,深度学习或机器学习的方法包括:基于滑窗的方法、基于深度学习的Bounding-Box方法、基于深度学习的端到端的语义分割网络方法和采用上述方法标定子宫内膜的目标区域,并根据标定结果设计分类器对感兴趣区域进行分类判断,具体的根据实际情况进行选择,本申请实施例不做具体的限定。
例如,基于滑窗的方法可以为:首先对滑窗内的区域进行特征提取,特征提取方法可以是主成分分析(principal components analysis,PCA)、线性判别分析(Linear Discriminant Analysis,LDA)、Harr特征、纹理特征等, 也可以采用深度神经网络来进行特征提取,然后将提取到的特征和数据库进行匹配,用k最邻近分类算法(k-NearestNeighbor,KNN)、支持向量机(Support Vector Machine,SVM)、随机森林、神经网络等判别器进行分类,确定当前滑窗是否为子宫内膜的目标区域同时获取其相应类别。
例如,基于深度学习的Bounding-Box方法可以为:通过堆叠基层卷积层和全连接层来对构建的数据库进行特征的学习和参数的回归,对于输入的三维体数据,可以通过网络直接回归出对应的子宫内膜的目标区域的Bounding-Box,同时获取其子宫内膜的目标区域内组织结构的类别,常见的网络有区域卷积神经网络(Region-Convolutional Neural Network,R-CNN)、快速区域卷积神经网络(Fast R-CNN)、Faster-RCNN、SSD(single shot multibox detector)、YOLO等。
例如,基于深度学习的端到端的语义分割网络方法可以为:通过堆叠基层卷积层、上采样或者反卷积层中的任一种来对构建的数据库进行特征的学习和参数的回归,对于输入数据,可以通过网络直接回归出对应的子宫内膜的目标区域的Bounding-Box,其中,加入上采样或者反卷积层中的任一种来使得输入与输出的尺寸相同,从而直接得到输入数据的子宫内膜的目标区域及其相应类别,常见的网络有FCN、U-Net、Mask R-CNN等。
例如,也可以采用上述三种方法先标定子宫内膜的目标区域,然后根据标定结果设计分类器对子宫内膜的目标区域进行分类判断中,进行分类判断的方法为:首先对目标ROI或Mask进行特征提取,特征提取方法可以是PCA、LDA、Haar特征、纹理特征等,也可以采用深度神经网络来进行特征提取,然后将提取到的特征和数据库进行匹配,用KNN、SVM、随机森林、神经网络等判别器进行分类。
在本发明的一些实施例中,可先从三维体数据中提取出一系列剖面图像数据,然后基于剖面图像数据检测子宫内膜。
超声成像设备从子宫区域的三维体数据中,获取识别出包括有子宫内 膜的矢状面图像数据;根据矢状面图像数据,确定出子宫内膜的中心点;基于中心点,获取与矢状面图像数据正交的、且识别出包括有子宫内膜的横切面图像数据;基于识别出包括有子宫内膜的横切面图像数据和矢状面图像数据在子宫区域的三维体数据中的位置,得到子宫内膜的位置信息。
需要说明的是,本发明实施例中的三维体数据是对子宫区域进行超声扫描得到的,那么基于三维体数据形成的剖面图像可以存在多个包含了子宫内膜的剖面图像。因此,超声成像设备对三维体数据中的部分剖面图像进行子宫内膜的检测,也可以达到子宫内膜自动成像的目的。
示例性的,超声成像设备在进行三维体数据采集时,医生通常以矢状面为起始切面扫查子宫区域而得到三维体数据。具体的,基于剖面检测子宫内膜的方法为首先在三维体数据中,获取识别出包括有子宫内膜的矢状面图像数据(A面),从矢状面图像数据中得到矢状面图像中子宫内膜的中心点,在该中心点处确定出与矢状面数据正交的包含子宫内膜的横切面图像数据(B面)。通过A、B面的检测,即可知道子宫内膜在两个正交面中的位置,该位置虽然没有包含全部的子宫内膜的目标区域,但也可以近似表达子宫内膜在三维体数据的空间中的位置,从而可以根据子宫内膜的位置信息自动成像。
可以理解的是,在本发明实施例中,虽然没有直接在三维体数据中进行三维的子宫内膜的精确检测,但是只需要在少数切面图像上(例如矢状面图像和横切面图像)进行自动检测就可以得到子宫内膜的大概位置信息,这样大大节约计算量。并且超声成像设备在获取矢状面图像数据的基础上通过获取横切面图像数据,纠正了采图时可能发生的翻转的情况。
需要说明的是,在本发明实施例中,基于剖面图像数据检测子宫内膜的方法与三维体数据中检测子宫内膜空间位置的方法类似,同样可以通过灰度和/或形态学等特征检测方法和机器学习或深度学习算法实现,此处不再赘述。
在本发明实施例中,无论是基于三维体数据直接检测到子宫内膜空间位置,还是直接在剖面图像数据中检测到子宫内膜的位置,其目的都是获取子宫内膜在三维体数据中的位置,将其作为后续成像的依据。
S105、根据子宫内膜的位置信息,基于三维体数据进行子宫内膜成像,得到子宫内膜图像。
超声成像设备在获取到了子宫内膜的位置信息之后,该超声设备可以根据子宫内膜的位置信息,基于三维体数据进行子宫内膜成像,得到子宫内膜图像。根据子宫内膜在三维体数据中的位置信息,进行子宫内膜成像时,超声成像设备可根据该位置信息从三维体数据中自动获取与子宫内膜相关的目标体数据,再结合所选定的成像方式对该目标体数据进行图像重建等处理,以得到对应的超声图像。
需要说明的是,超声成像设备在识别出子宫区域的子宫内膜的位置信息后,即识别出子宫区域的关键解剖结构后,就可以根据关键解剖结构在三维体数据中的位置实现子宫内膜自动成像。
本发明的超声成像设备为三维成像系统,可实现三种方式的子宫内膜自动成像:子宫内膜VR成像、子宫内膜CMPR成像、和子宫内膜标准切面成像。具体的成像方式本发明实施例不作限制。
需要说明的是,超声成像设备可根据子宫内膜的位置信息,从三维体数据中提取出包含有子宫内膜的矢状面切面图像,然后基于该矢状面切面图像进行VR成像和CMPR成像。
在本发明的一些实施例中,超声成像设备进行的VR成像是对VOI(Volume of interest)框内的区域进行渲染,VOI框通常为一个长方体。对子宫内膜进行VR成像时,还可将该长方体中的一个平面变成曲面,通过曲面更好地符合子宫内膜的弯曲结构。
进行子宫内膜VR成像时,可以根据子宫内膜的位置信息,从三维体数据中提取出包括有子宫内膜的矢状面切面图像;启用预设绘制框,并基于 预设绘制框,进行调节处理,以使预设绘制框覆盖住矢状面切面图像上的子宫内膜;对与预设绘制框对应的目标三维体数据进行图像绘制,得到三维子宫内膜图像,其中,目标三维体数据包含于子宫区域的三维体数据中。
在本发明实施例中,超声成像设备在进行VR成像的时候,在获取到了包括有子宫内膜的矢状面切面图像后,会启动预设绘制框,即在超声成像设备的显示器的矢状面切面图像上显示出预设绘制框,基于预设绘制框,进行调节处理,使得预设绘制框覆盖住矢状面切面图像上的子宫内膜,这时自动对预设绘制框内的区域对应的三维体数据中的目标三维体数据进行VR图像绘制。
需要说明的是,VR图像的获取需要调整三维体数据(包含子宫内膜体数据)的方位,或者设置VOI框的大小和位置,达到预设绘制框刚好覆盖住矢状面切面图像上的子宫内膜的目的。对于子宫内膜体数据,用户主要关注的是子宫内膜,因此在检测到子宫内膜等关键解剖结构后,可以根据子宫内膜的位置信息,自动调整三维体数据方位和大小,使得VOI框正好可以将子宫内膜区域包住。
在本发明实施例中,超声成像设备基于预设绘制框进行调节的时候,可以通过调节预设绘制框的大小和位置,使得预设绘制框覆盖住矢状面切面图像上的子宫内膜;也可以根据预设绘制框在矢状面切面图像上的方位,调节子宫区域的三维体数据的方位,从而使得预设绘制框覆盖住矢状面切面图像上的子宫内膜,还可以采用别的方式实现,本发明实施例不作限制。
具体的,超声成像设备可以根据子宫内膜的位置信息,确定子宫内膜在矢状面切面图像上的大小和位置,对应调节预设绘制框的大小和位置;和/或,根据子宫内膜的位置信息,确定子宫内膜在子宫区域的三维体数据中的方位,根据预设绘制框在矢状面切面图像上的方位,调节子宫区域的三维体数据的方位。
在本发明实施例中,预设绘制框为VOI(Volume of Interest)框,对于 三维立体成像,VR成像是对预设绘制框内的区域进行渲染,自动形成图像的。
需要说明的是,VOI还可将一个长方体中的一个平面变成曲面,其余5个面仍是长方体的5个面,通过曲面可以用来观察弯曲的组织结构。设置VOI框的目的是在对体数据进行立体渲染时只渲染VOI框内的区域,VOI框外的区域不进行渲染,即通过VR图像用户只能看到VOI框内的组织成像的图像。
进一步地,在本发明实施例中,VOI框的曲面尽可能与子宫内膜弯曲的下边缘重合,这样就可以渲染出子宫内膜冠状面图像。
示例性的,如图4所示,在子宫区域的三维体数据的矢状面切面图像中,在得知了子宫内膜的位置信息后,就可以启用预设绘制框1(VOI框),该预设绘制框1覆盖了子宫内膜,且VOI成像的曲面尽可能与子宫内膜下边缘重合,这样自动对预设绘制框1内的结构进行VR成像,得到了如图5所示的子宫内膜的冠状面图像。
这里,子宫内膜的冠状面信息除了通过三维重建出VR图进行显示外还可以使用CMPR进行显示。
需要说明的是,CMPR成像是在三维体数据的某个切面图像中取一个轨迹曲线,轨迹曲线将三维体数据剖开获得曲线的剖面图像,从而可以用来观察弯曲的组织结构的。由于通常子宫内膜的形状都具有一定弧度的曲线轨迹,因此直接取出三维体数据中的某个平面不能完整显示出子宫内膜的冠状面信息,CMPR切面可以很好地将整个子宫内膜轨迹覆盖,获取完整的冠状面图像。
在本发明实施例中,某个切面图像可以为矢状面切面图像,也可以为其他切面图像,本发明实施例不做限制。
具体的,超声成像设备根据子宫内膜的位置信息,从三维体数据中提取出包括有子宫内膜的矢状面切面图像,并在矢状面切面图像上自动生成 子宫内膜的轨迹线;根据轨迹线,对三维体数据进行子宫内膜曲面成像,得到子宫内膜图像。
需要说明的是,在本发明实施例中,轨迹线为曲线。
在本发明实施例中,超声成像设备在自动识别获得子宫内膜的位置信息后,根据子宫内膜的位置信息,在矢状面切面图像上自动生成一条足够贴合子宫内膜的CMPR轨迹线,实现自动化的子宫内膜CMPR成像。
因前端扫描操作的缘故,所获取的子宫区域的三维体数据中,子宫内膜可能有一定程度的扭转,此时需对三维体数据进行方位调整,以使得矢状面切面图像可以尽可能多地显示子宫内膜。通常地,子宫内膜在横切面上可得到一近似椭圆的图像,子宫内膜在横切面上的预设横切面位置可以为图示的水平位置,该水平位置例如可为图10的虚线所示的水平线。若子宫内膜发生扭转,则横切面上子宫内膜的图像会一定角度旋转,该椭圆图像的长轴不再是图示的水平线,而会有一定角度的倾斜。
如图10所示,白色实线表示子宫内膜横切面图像的长轴,其与虚线所示的水平线并未对齐,表明此时的子宫内膜发生了一定角度的扭转。根据横切面上子宫内膜的上述图像特征,本发明可对子宫区域的三维体数据进行方位调整。
在本发明的一些实施例中,子宫内膜的位置信息可以包括:子宫内膜在矢状面上的位置和子宫内膜在横切面上的位置。那么,超声成像设备在矢状面切面图像上自动生成子宫内膜的轨迹线的过程可以为:调节三维体数据的方位,直至将子宫内膜在横切面上的位置调整至符合预设横切面位置,例如该预设横切面位置可以为图10所示的水平位置;基于方位调节后的三维体数据,再确定子宫内膜在矢状面上的位置,随后可根据子宫内膜在矢状面上的位置,在矢状面切面图像上自动拟合出子宫内膜的轨迹线。
示例性的,超声成像设备分别获得子宫内膜在矢状面和横切面上的位 置,作为矢状面位置信息和横切面位置信息,然后可以根据横切面上的子宫内膜位置(横切面位置信息)将横切面的子宫内膜位置旋转至水平状态,该旋转操作同时也对子宫内膜在矢状面上的子宫内膜位置进行了调整,然后再根据调整后的矢状面上的子宫内膜位置(矢状面位置信息)拟合出一条CMPR曲线,该曲线正好经过子宫内膜的中心区域,此时再基于该CMPR曲线成像即得到了子宫内膜的CMPR图。
需要说明的是,为了更好显示子宫内膜,还可以继续旋转三维体数据或旋转CMPR图,将子宫内膜旋转至竖直状态,便于观察。
在本发明实施例中,为了提高CMPR图像的对比分辨率和信噪比,也可结合切片对比视图(SCV,Slice Contrast View)一起使用,SCV通过增加厚度调节,渲染厚度范围内的区域,可以提高图像的对比分辨率和信噪比。
在本发明的一些实施例中,超声成像设备根据子宫内膜的位置信息,从三维体数据中提取出包括有子宫内膜的矢状面切面图像,并在矢状面切面图像上自动生成子宫内膜的轨迹线;根据子宫内膜的位置信息,获取矢状面切面图像上子宫内膜的边缘信息;根据边缘信息和轨迹线,确定出图像绘制区域;对与图像绘制区域对应的目标三维体数据进行子宫内膜的曲面成像,得到反映子宫内膜厚度的子宫内膜图像。
需要说明的是,超声成像设备根据子宫内膜的位置信息,从三维体数据中提取出包括有子宫内膜的矢状面切面图像,并在矢状面切面图像上自动生成子宫内膜的轨迹线是一条单独的曲线,不能表征子宫内膜的厚度,这时,可以根据子宫内膜的位置信息,获取矢状面切面图像上子宫内膜的边缘信息,这样,将轨迹线和边缘信息之间的具有一定厚度的区域确定为图像绘制区域,只需将图像绘制区域对应的目标三维体数据进行子宫内膜的曲面成像,得到反映子宫内膜厚度的子宫内膜图像即可,由于这样得到是具有子宫内膜厚度的子宫内膜图像,提高了图像的分辨率。
示例性的,如图6所示,超声成像设备根据子宫内膜的位置信息,从三维体数据中提取出包括有子宫内膜的矢状面切面图像1,并在矢状面切面图像1上自动生成子宫内膜的轨迹线2;根据子宫内膜的位置信息,获取矢状面切面图像1上子宫内膜的边缘信息3;根据边缘信息3和轨迹线2,确定出图像绘制区域4;对与图像绘制区域4对应的目标三维体数据进行子宫内膜的曲面成像,得到反映子宫内膜厚度的子宫内膜图像(如图7所示)。
在本发明实施例中,超声成像设备还可以通过二维成像得到子宫内膜切面图像。基于三维体数据中检测到的子宫内膜的位置信息,还可以直接通过平面成像获取子宫内膜的标准切面。
例如,超声成像设备根据子宫内膜的位置信息,拟合出子宫内膜冠状面;从三维体数据中获取与子宫内膜冠状面对应的灰度图像;灰度图像作为子宫内膜的标准切面图像。这里的标准切面图像就是子宫内膜的冠状面切面图像。
需要说明的是,通常子宫内膜是一个曲型结构,用VR成像或CMPR能够更好表达曲型结构。但作为一种近似,也可以直接用平面来显示子宫内膜冠状面。
也就是说,超声成像设备在三维体数据中检测到子宫内膜的位置信息后,即可拟合出子宫内膜冠状面,使该平面通过子宫内膜区域并能够最大化显示子宫内膜(子宫内膜为一定厚度的片状物体,冠状面即为片状物体的中心平面)。平面的方程可以通过解方程或者最小二乘估计拟合得到。得到平面方程后,即可从三维体数据中取出该平面所对应的灰度图像,从而得到子宫内膜标准切面。同时还可以基于子宫内膜在三维体数据中的位置信息,对其角度偏差进行旋转和校正,最终得到子宫内膜的切面图像(二维平面)。
在本发明实施例中,以上的成像方法均可生成子宫内膜图像,可以独立使用,也可组合使用,本发明实施例不作限制。
需要说明的是,对于图像质量较差,通过算法检测到的子宫内膜的解剖结构位置有偏差的情况,也可以让用户通过键盘、鼠标等工具对检测到的切面中的VOI区域或CMPR曲线进行移动、缩放、删除重新标定等修改操作,实现半自动的VOI成像或CMPR曲面成像;对于子宫内膜标准切面平面成像,用户也可以通过旋钮对切面进行调整,本发明实施例不作限制。
S106、显示子宫内膜图像。
超声成像设备在获取了子宫内膜图像之后,超声成像设备在显示器显示子宫内膜图像,并且这些子宫内膜图像存储于存储器中。
在本发明实施例中,超声成像设备对子宫内膜进行VR自动成像时,使用光线跟踪等三维渲染算法得到子宫内膜VR图像,并显示在显示器上。
在本发明实施例中,超声成像设备对子宫内膜进行CMPR自动成像时,获得子宫内膜的CMPR图,并显示在显示器上。
在本发明实施例中,超声成像设备基于子宫内膜的标准切面自动成像获得子宫内膜标准切面图像。
在一些实施例中,可设置一定的工作流,将不同成像方式对应的功能集成到工作流中,供医生自由选择,并将选择的功能所对应的图像在显示器中进行显示。
示例性的,如图8所示,超声成像设备通过超声波得到子宫区域的三维体数据,并检测关键解剖结构,具体为:基于三维体数据进行特征识别,识别关键解剖结构(子宫内膜),即感兴趣区域,或者基于三维体数据的剖面图像,识别关键解剖结构,即感兴趣区域。在识别出子宫内膜后,采用子宫内膜VR自动成像、子宫内膜CMPR自动成像和子宫内膜的标准切面自动成像中的至少一种,得到子宫内膜图像(子宫内膜自动成像),并显示成像结果。例如,显示VR渲染成像结果、显示CMPR成像结果,或者显示子宫内膜的标准切面。
可以理解的是,超声成像设备可以通过对待检测对象的子宫区域的三 维体数据进行识别,识别出子宫内膜,从而得到子宫内膜的位置信息,进而自动成像得到子宫内膜切面图像,这样找到的子宫内膜的位置是准确的,提高了超声成像的准确度,并且可以自动成像还提高了超声波图像成像的智能性。
基于上述实现的基础上,以关键解剖结构为感兴趣区域为例,提供对感兴趣区域的超声成像方法,如图9所示,该方法可以包括:
S201、对待检测对象进行超声扫描,得到待检测对象的三维体数据。
S202、根据感兴趣区域的图像特征,从待检测对象的三维体数据中识别出感兴趣区域,得到感兴趣区域的位置信息。
S203、根据感兴趣区域的位置信息,对三维体数据进行处理,得到感兴趣区域图像。
S204、显示感兴趣区域图像。
在本发明实施例中,超声成像设备根据感兴趣区域的图像特征,从待检测对象的三维体数据中识别出感兴趣区域,得到感兴趣区域的位置信息,包括以下几种方式:
(1)、对三维体数据进行预设特征提取,以得到至少一个候选感兴趣区域;将至少一个候选感兴趣区域和预设模板区域进行匹配,识别出匹配度最高的感兴趣区域,得到感兴趣区域的位置信息。
(2)、基于预设定位模型,对三维体数据进行处理,识别出待检测对象中的感兴趣区域,定位出感兴趣区域的位置信息;预设定位模型表征三维体数据与感兴趣区域的对应关系。
(3)、从三维体数据中,获取感兴趣区域的矢状面图像数据;根据矢状面图像数据,确定出感兴趣区域的中心点;基于中心点,获取与矢状面图像数据正交的横切面图像数据;基于横切面图像数据和矢状面图像数据,识别出感兴趣区域,得到感兴趣区域的位置信息。
在本发明实施例中,基于预设定位模型,对三维体数据进行处理,识 别出待检测对象中的所述感兴趣区域,定位出感兴趣区域的位置信息之前,需要先获取到预设定位模型。预设定位模型可预先构建完成,在成像过程调用已构建好的预设定位模型。其中,构建预设定位模型的过程可以包括:获取至少两个待训练对象的三维训练体数据和感兴趣区域;基于三维训练体数据和感兴趣区域,采用预设机器学习算法对训练模型进行训练,得到预设定位模型。
在本发明实施例中,超声成像设备根据感兴趣区域的位置信息,对三维体数据进行处理,得到感兴趣区域切面图像,包括以下几种:
(1)、获取预设绘制框;将预设绘制框覆盖感兴趣区域的位置信息对应的目标感兴趣区域;对与预设绘制框对应的目标三维体数据进行图像绘制,得到三维感兴趣区域图像,目标三维体数据包含于三维体数据中。
(2)、根据感兴趣区域的位置信息,生成感兴趣区域的轨迹线;根据轨迹线,对三维体数据进行感兴趣区域的图像绘制,得到感兴趣区域图像。
(3)、获取感兴趣区域的边缘信息;根据边缘信息和轨迹线,确定出图像绘制区域;根据图像绘制区域,对三维体数据进行感兴趣区域的图像绘制,得到三维感兴趣区域图像。
(4)、根据感兴趣区域的位置信息,拟合出感兴趣区域冠状面;从三维体数据中获取与感兴趣区域冠状面对应的灰度图像;灰度图像作为感兴趣区域的标准切面图像。
需要说明的是,感兴趣区域的位置信息可包括:矢状面位置信息和横切面位置信息;根据感兴趣区域的位置信息,生成感兴趣区域的轨迹线的过程为:将横切面位置信息旋转至与矢状面位置信息同一水平面,得到旋转横切面位置信息;根据旋转横切面位置信息和矢状面位置信息,拟合出感兴趣区域的轨迹线。
需要说明的是,S201-S204的实现过程的原理和实现方式与上述S101-S106的实现原理一致,此处不再赘述。
本发明实施例提供了一种超声成像设备,如图1所示,该超声成像设备包括:
探头100;
发射电路101,用于激励该探头100向待检测对象发射超声波;
发射/接收选择开关102;
接收电路103,用于通过该探头100接收从该待检测对象返回的超声回波,从而获得超声回波信号/数据;
波束合成电路104,用于对该超声回波信号/数据进行波束合成处理,获得波束合成后的超声回波信号/数据;
处理器105,用于对所述超声回波信号进行处理,得到所述待检测对象的子宫区域的三维体数据;根据子宫区域的子宫内膜的图像特征,从所述子宫区域的三维体数据中识别出子宫内膜,得到所述子宫内膜的位置信息;根据所述子宫内膜的位置信息,基于所述三维体数据进行子宫内膜成像,得到子宫内膜图像;
显示器106,用于显示所述子宫内膜图像。
在本发明的一些实施例中,所述处理器105,可用于根据所述子宫区域的子宫内膜与子宫基层组织的图像特征差异、和/或根据所述子宫区域的子宫内膜的可周期性变化的形态特征,从所述子宫区域的三维体数据中识别出所述子宫内膜,得到所述子宫内膜的位置信息。
在本发明的一些实施例中,所述处理器105,可用于对所述子宫区域的三维体数据进行预设特征提取,得到至少一个候选感兴趣区域;获取已识别出所述子宫内膜的子宫区域的三维模板数据,根据该三维模板数据,获得子宫内膜的预设模板区域;将所述至少一个候选感兴趣区域和预设模板区域进行匹配,识别出匹配度最高的候选感兴趣区域作为所述待检测对象的子宫内膜的目标区域,并根据子宫内膜的目标区域在所述三维体数据中的位置,得到所述子宫内膜的位置信息。
在本发明的一些实施例中,所述处理器105,还可用于提取所述至少一个候选感兴趣区域的特征指数,所述特征指数包括形状特征、纹理特征、边界特征或灰度分布特征;基于所述特征指数,计算所述至少一个候选感兴趣区域与所述预设模板区域的相关度;以及,将相关度最高且相关度超过预设阈值的所述候选感兴趣区域作为所述待检测对象的所述子宫内膜的目标区域。
在本发明的一些实施例中,所述处理器105,可用于对所述子宫区域的三维体数据进行图像分割,并对图像分割结果进行形态学操作处理,得到具有完整边界的所述至少一个候选感兴趣区域。
在本发明的一些实施例中,所述处理器105,可用于获取预设定位模型,所述预设定位模型包括已识别出所述子宫内膜的子宫区域的三维正样本数据、以及子宫内膜在该三维正样本数据中的标定信息;以及,基于所述预设定位模型中子宫内膜的标定信息,从所述待检测对象的子宫区域的三维体数据中识别出所述子宫内膜,定位出所述子宫内膜的位置信息。
在本发明的一些实施例中,所述处理器105,还可用于利用所述预设定位模型中子宫内膜的标定信息,通过深度学习或机器学习的方法学习得到子宫内膜的图像特征规律;基于所述子宫内膜的图像特征规律,从所述待检测对象的子宫区域的三维体数据中提取出含子宫内膜的目标区域,并输出该目标区域在三维体数据中的位置信息,作为所述子宫内膜的位置信息。
在本发明的一些实施例中,所述处理器105,还可用于获取至少两个待训练对象的三维训练体数据,所述三维训练体数据至少包括所述已识别出子宫内膜的子宫区域的三维正样本数据;在所述三维训练体数据中标定出子宫内膜或子宫内膜的关联解剖结构,作为所述子宫内膜在该三维训练体数据中的标定信息;以及,基于所述三维训练体数据和所述子 宫内膜的标定信息,采用机器学习或深度学习的方法进行模型训练,得到所述预设定位模型。
在本发明的一些实施例中,所述处理器105,可用于从所述子宫区域的三维体数据中,获取识别出包括有子宫内膜的矢状面图像数据;根据所述矢状面图像数据,确定出子宫内膜的中心点;基于所述中心点,获取与所述矢状面图像数据正交的、且识别出包括有子宫内膜的横切面图像数据;基于识别出包括有所述子宫内膜的所述横切面图像数据和所述矢状面图像数据在所述子宫区域的三维体数据中的位置,得到所述子宫内膜的位置信息。
在本发明的一些实施例中,所述处理器105,可用于根据所述子宫内膜的位置信息,从所述三维体数据中提取出包括有子宫内膜的矢状面切面图像;启用并调节预设绘制框,以使预设绘制框覆盖住所述矢状面切面图像上的子宫内膜;以及,对与所述预设绘制框对应的目标三维体数据进行图像绘制,得到三维子宫内膜图像,所述目标三维体数据包含于所述子宫区域的三维体数据中。
在本发明的一些实施例中,所述处理器105,还可用于根据所述子宫内膜的位置信息,确定所述子宫内膜在矢状面切面图像上的大小和位置,对应调节预设绘制框的大小和位置;和/或,根据所述子宫内膜的位置信息,确定所述子宫内膜在所述子宫区域的三维体数据中的方位,根据所述预设绘制框在所述矢状面切面图像上的方位,调节所述子宫区域的三维体数据的方位。
在本发明的一些实施例中,所述处理器105,可用于根据所述子宫内膜的位置信息,从所述三维体数据中提取出包括有子宫内膜的矢状面切面图像,并在所述矢状面切面图像上自动生成子宫内膜的轨迹线;以及,根据所述轨迹线,对所述三维体数据进行子宫内膜曲面成像,得到所述子宫内膜图像。
在本发明的一些实施例中,所述子宫内膜的位置信息包括:矢状面位置信息和横切面位置信息;
所述处理器105,还可用于调节三维体数据的方位,直至将子宫内膜在横切面上的位置调整至符合预设横切面位置,例如该预设横切面位置可以为图10所示的水平位置;基于方位调节后的三维体数据,再确定子宫内膜在矢状面上的位置,随后可根据子宫内膜在矢状面上的位置,在矢状面切面图像上自动拟合出子宫内膜的轨迹线。
在本发明的一些实施例中,所述处理器105,还可用于所述在所述矢状面切面图像上自动生成子宫内膜的轨迹线之后,根据所述子宫内膜的位置信息,获取所述矢状面切面图像上所述子宫内膜的边缘信息;根据所述边缘信息和所述轨迹线,确定出图像绘制区域;对与所述图像绘制区域对应的目标三维体数据进行子宫内膜的曲面成像,得到反映子宫内膜厚度的三维子宫内膜图像。
在本发明的一些实施例中,所述处理器105,可用于根据所述子宫内膜的位置信息,拟合出子宫内膜冠状面;从所述三维体数据中获取与所述子宫内膜冠状面对应的灰度图像;所述灰度图像作为子宫内膜的标准切面图像。
可以理解的是,超声成像设备可以通过对待检测对象的子宫区域的三维体数据进行识别,识别出子宫内膜,从而得到子宫内膜的位置信息,省去了需要用户不断手动进行子宫内膜定位的繁琐操作,便于用户快速识别子宫内膜,提高整体工作效率。超声成像设备还可以根据子宫内膜的位置信息自动成像得到子宫内膜图像,鉴于自动识别的子宫内膜的位置是准确的,提高了超声成像的准确度,并且可以自动成像还提高了超声波图像成像的智能性。
本发明实施例提供了一种计算机可读存储介质,该计算机可读存储介质存储有超声成像程序,该超声成像程序可以被处理器执行,以实现上述 超声成像方法。
其中,计算机可读存储介质可以是是易失性存储器(volatile memory),例如随机存取存储器(Random-Access Memory,RAM);或者非易失性存储器(non-volatile memory),例如只读存储器(Read-Only Memory,ROM),快闪存储器(flash memory),硬盘(Hard Disk Drive,HDD)或固态硬盘(Solid-State Drive,SSD);也可以是包括上述存储器之一或任意组合的各自设备,如移动电话、计算机、平板设备、个人数字助理等。

Claims (42)

  1. 一种超声成像方法,其特征在于,所述方法包括:
    发射超声波至待检测对象的子宫区域进行体扫描;
    接收从所述待检测对象的子宫区域返回的超声回波,并基于所述超声回波获取超声回波信号;
    对所述超声回波信号进行处理,得到所述待检测对象的子宫区域的三维体数据;
    根据子宫区域的子宫内膜的图像特征,从所述子宫区域的三维体数据中识别出子宫内膜,得到所述子宫内膜的位置信息;
    根据所述子宫内膜的位置信息,基于所述三维体数据进行子宫内膜成像,得到子宫内膜图像;以及,
    显示所述子宫内膜图像。
  2. 根据权利要求1所述的方法,其特征在于,所述根据子宫区域的子宫内膜的图像特征,从所述子宫区域的三维体数据中识别出子宫内膜,得到所述子宫内膜的位置信息,包括:
    根据所述子宫区域的子宫内膜与子宫基层组织的图像特征差异、和/或根据所述子宫区域的子宫内膜的可周期性变化的形态特征,从所述子宫区域的三维体数据中识别出所述子宫内膜,得到所述子宫内膜的位置信息。
  3. 根据权利要求1或2所述的方法,其特征在于,所述根据子宫区域的子宫内膜的图像特征,从所述子宫区域的三维体数据中识别出子宫内膜,得到所述子宫内膜的位置信息,包括:
    对所述子宫区域的三维体数据进行预设特征提取,得到至少一个候选感兴趣区域;
    获取已识别出所述子宫内膜的子宫区域的三维模板数据,根据该三 维模板数据,获得子宫内膜的预设模板区域;
    将所述至少一个候选感兴趣区域和预设模板区域进行匹配,识别出匹配度最高的候选感兴趣区域作为所述待检测对象的子宫内膜的目标区域,并根据子宫内膜的目标区域在所述三维体数据中的位置,得到所述子宫内膜的位置信息。
  4. 根据权利要求3所述的方法,其特征在于,所述将所述至少一个候选感兴趣区域和预设模板区域进行匹配,识别出匹配度最高的候选感兴趣区域作为所述待检测对象的子宫内膜的目标区域,包括:
    提取所述至少一个候选感兴趣区域的特征指数,所述特征指数包括形状特征、纹理特征、边界特征或灰度分布特征;
    基于所述特征指数,计算所述至少一个候选感兴趣区域与所述预设模板区域的相关度;以及,
    将相关度最高且相关度超过预设阈值的候选感兴趣区域作为所述待检测对象的子宫内膜的目标区域。
  5. 根据权利要求3所述的方法,其特征在于,所述对所述子宫区域的三维体数据进行预设特征提取,得到至少一个候选感兴趣区域,包括:
    对所述子宫区域的三维体数据进行图像分割,并对图像分割结果进行形态学操作处理,得到具有完整边界的所述至少一个候选感兴趣区域。
  6. 根据权利要求1或2所述的方法,其特征在于,所述根据子宫区域的子宫内膜的图像特征,从所述子宫区域的三维体数据中识别出子宫内膜,得到所述子宫内膜的位置信息,包括:
    获取预设定位模型,所述预设定位模型包括已识别出所述子宫内膜的子宫区域的三维正样本数据、以及子宫内膜在该三维正样本数据中的标定信息;以及,
    基于所述预设定位模型中子宫内膜的标定信息,从所述待检测对象的子宫区域的三维体数据中识别出所述子宫内膜,定位出所述子宫内膜 的位置信息。
  7. 根据权利要求6所述的方法,其特征在于,所述基于预设定位模型中子宫内膜的标定信息,从所述待检测对象的子宫区域的三维体数据中识别出所述子宫内膜,定位出所述子宫内膜的位置信息,包括:
    利用所述预设定位模型中子宫内膜的标定信息,通过深度学习的方法学习得到子宫内膜的图像特征规律;
    基于所述子宫内膜的图像特征规律,从所述待检测对象的子宫区域的三维体数据中提取出含子宫内膜的目标区域,并输出该目标区域在三维体数据中的位置信息,作为所述子宫内膜的位置信息。
  8. 根据权利要求6所述的方法,其特征在于,所述获取预设定位模型,包括:
    获取至少两个待训练对象的三维训练体数据,所述三维训练体数据至少包括所述已识别出子宫内膜的子宫区域的三维正样本数据;
    在所述三维训练体数据中标定出子宫内膜或子宫内膜的关联解剖结构,作为所述子宫内膜在该三维训练体数据中的标定信息;以及,
    基于所述三维训练体数据和所述子宫内膜的标定信息,采用机器学习或深度学习的方法进行模型训练,得到所述预设定位模型。
  9. 根据权利要求1或2所述的方法,其特征在于,所述根据子宫区域的子宫内膜的图像特征,从所述子宫区域的三维体数据中识别出子宫内膜,得到所述子宫内膜的位置信息,包括:
    从所述子宫区域的三维体数据中,获取识别出包括有子宫内膜的矢状面图像数据;
    根据所述矢状面图像数据,确定出子宫内膜的中心点;
    基于所述中心点,获取与所述矢状面图像数据正交的、且识别出包括有子宫内膜的横切面图像数据;
    基于识别出包括有所述子宫内膜的所述横切面图像数据和所述矢状 面图像数据在所述子宫区域的三维体数据中的位置,得到所述子宫内膜的位置信息。
  10. 根据权利要求1或2所述的方法,其特征在于,所述根据所述子宫内膜的位置信息,基于所述三维体数据进行子宫内膜成像,得到子宫内膜图像,包括:
    根据所述子宫内膜的位置信息,从所述三维体数据中获取与所述子宫内膜相关的目标体数据,基于该目标体数据进行成像,得到所述子宫内膜图像。
  11. 根据权利要求1至10任一项所述的方法,其特征在于,所述根据所述子宫内膜的位置信息,基于所述三维体数据进行子宫内膜成像,得到子宫内膜图像,包括:
    根据所述子宫内膜的位置信息,从所述三维体数据中提取出包括有子宫内膜的矢状面切面图像;
    启用预设绘制框,并基于所述预设绘制框进行调节处理,以使所述预设绘制框覆盖住所述矢状面切面图像上的子宫内膜;以及,
    对与所述预设绘制框对应的目标三维体数据进行图像绘制,得到三维子宫内膜图像,所述目标三维体数据包含于所述子宫区域的三维体数据中。
  12. 根据权利要求11所述的方法,其特征在于,所述基于所述预设绘制框进行调节处理,包括:
    根据所述子宫内膜的位置信息,确定所述子宫内膜在矢状面切面图像上的大小和位置,对应调节预设绘制框的大小和位置;
    和/或,根据所述子宫内膜的位置信息,确定所述子宫内膜在所述子宫区域的三维体数据中的方位,根据所述预设绘制框在所述矢状面切面图像上的方位,调节所述子宫区域的三维体数据的方位。
  13. 根据权利要求1至10任一项所述的方法,其特征在于,所述根 据所述子宫内膜的位置信息,基于所述三维体数据进行子宫内膜成像,得到子宫内膜图像,包括:
    根据所述子宫内膜的位置信息,从所述三维体数据中提取出包括有子宫内膜的矢状面切面图像,并在所述矢状面切面图像上自动生成子宫内膜的轨迹线;以及,
    根据所述轨迹线,对所述三维体数据进行子宫内膜曲面成像,得到所述子宫内膜图像。
  14. 根据权利要求13所述的方法,其特征在于,所述子宫内膜的位置信息包括:所述子宫内膜在矢状面上的位置和所述子宫内膜在横切面上的位置;所述在所述矢状面切面图像上自动生成子宫内膜的轨迹线,包括:
    调节所述子宫区域的三维体数据的方位至所述子宫内膜在横切面上的位置符合预设横切面位置;
    基于方位调节后的所述子宫区域的三维体数据确定所述子宫内膜在矢状面上的位置,并根据子宫内膜在矢状面上的位置在所述矢状面切面图像上自动拟合出所述子宫内膜的所述轨迹线。
  15. 根据权利要求13或14所述的方法,其特征在于,所述在所述矢状面切面图像上自动生成子宫内膜的轨迹线之后,所述方法还包括:
    根据所述子宫内膜的位置信息,获取所述矢状面切面图像上所述子宫内膜的边缘信息;
    根据所述边缘信息和所述轨迹线,确定出图像绘制区域;
    对与所述图像绘制区域对应的目标三维体数据进行子宫内膜的曲面成像,得到反映子宫内膜厚度的子宫内膜图像。
  16. 根据权利要求1至10任一项所述的方法,所述根据所述子宫内膜的位置信息,对所述三维体数据进行处理,得到子宫内膜切面图像,包括:
    根据所述子宫内膜的位置信息,拟合出子宫内膜冠状面;
    从所述三维体数据中获取与所述子宫内膜冠状面对应的灰度图像;
    所述灰度图像作为子宫内膜的标准切面图像。
  17. 一种超声成像方法,其特征在于,包括:
    对待检测对象进行超声体扫描,得到所述待检测对象的三维体数据;
    根据所述待检测对象中感兴趣区域的图像特征,从所述待检测对象的三维体数据中识别出感兴趣区域,得到所述感兴趣区域的位置信息;
    根据所述感兴趣区域的位置信息,对所述三维体数据进行处理,得到感兴趣区域图像;
    显示所述感兴趣区域图像。
  18. 根据权利要求17所述的方法,其特征在于,所述根据所述待检测对象中感兴趣区域的图像特征,从所述待检测对象的三维体数据中识别出感兴趣区域,得到所述感兴趣区域的位置信息,包括:
    对所述三维体数据进行预设特征提取,以得到至少一个候选感兴趣区域;
    将所述至少一个候选感兴趣区域和预设模板区域进行匹配,识别出匹配度最高的感兴趣区域,得到所述感兴趣区域的位置信息。
  19. 根据权利要求17所述的方法,其特征在于,所述根据所述待检测对象中感兴趣区域的图像特征,从所述待检测对象的三维体数据中识别出感兴趣区域,得到所述感兴趣区域的位置信息,包括:
    基于预设定位模型,对所述三维体数据进行处理,识别出所述待检测对象中的所述感兴趣区域,定位出所述感兴趣区域的位置信息;所述预设定位模型表征三维体数据与感兴趣区域的对应关系。
  20. 根据权利要求19所述的方法,其特征在于,所述基于预设定位模型,对所述三维体数据进行处理,识别出所述待检测对象中的所述感兴趣区域,定位出所述感兴趣区域的位置信息之前,所述方法还包括:
    获取至少两个待训练对象的三维训练体数据和感兴趣区域;
    基于所述三维训练体数据和感兴趣区域,采用预设机器学习算法对训练模型进行训练,得到所述预设定位模型。
  21. 根据权利要求17所述的方法,其特征在于,所述根据所述待检测对象中感兴趣区域的图像特征,从所述待检测对象的三维体数据中识别出感兴趣区域,得到所述感兴趣区域的位置信息,包括:
    从所述三维体数据中,获取所述感兴趣区域的矢状面图像数据;
    根据所述矢状面图像数据,确定出感兴趣区域的中心点;
    基于所述中心点,获取与所述矢状面图像数据正交的横切面图像数据;
    基于所述横切面图像数据和所述矢状面图像数据,识别出所述感兴趣区域,得到所述感兴趣区域的位置信息。
  22. 根据权利要求17所述的方法,其特征在于,所述根据所述感兴趣区域的位置信息,对所述三维体数据进行处理,得到感兴趣区域图像,包括:
    获取预设绘制框;
    将所述预设绘制框覆盖所述感兴趣区域的位置信息对应的目标感兴趣区域;
    对与所述预设绘制框对应的目标三维体数据进行图像绘制,得到三维感兴趣区域图像,所述目标三维体数据包含于所述三维体数据中。
  23. 根据权利要求17所述的方法,其特征在于,所述根据所述感兴趣区域的位置信息,对所述三维体数据进行处理,得到感兴趣区域图像,包括:
    根据所述感兴趣区域的位置信息,生成感兴趣区域的轨迹线;
    根据所述轨迹线,对所述三维体数据进行感兴趣区域的图像绘制,得到所述感兴趣区域图像。
  24. 根据权利要求23所述的方法,其特征在于,所述感兴趣区域的位置信息包括:矢状面位置信息和横切面位置信息;所述根据所述感兴趣区域的位置信息,生成感兴趣区域的轨迹线,包括:
    将所述横切面位置信息旋转至与所述矢状面位置信息同一水平面,得到旋转横切面位置信息;
    根据所述旋转横切面位置信息和所述矢状面位置信息,拟合出所述感兴趣区域的所述轨迹线。
  25. 根据权利要求23或24所述的方法,其特征在于,所述根据所述轨迹线,对所述三维体数据进行感兴趣区域的图像绘制,得到所述感兴趣区域图像,包括:
    获取所述感兴趣区域的边缘信息;
    根据所述边缘信息和所述轨迹线,确定出图像绘制区域;
    根据所述图像绘制区域,对所述三维体数据进行感兴趣区域的图像绘制,得到所述感兴趣区域图像。
  26. 根据权利要求17所述的方法,其特征在于,所述根据所述感兴趣区域的位置信息,对所述三维体数据进行处理,得到感兴趣区域图像,包括:
    根据所述感兴趣区域的位置信息,拟合出感兴趣区域冠状面;
    从所述三维体数据中获取与所述感兴趣区域冠状面对应的灰度图像;
    所述灰度图像作为感兴趣区域的标准切面图像。
  27. 一种超声成像设备,其特征在于,所述超声成像设备包括:
    探头;
    发射电路,用于激励所述探头向待检测对象发射超声波以进行体扫描;
    发射/接收选择开关;
    接收电路,用于通过所述探头接收从所述待检测对象返回的超声回 波,从而获得超声回波信号/数据;
    波束合成电路,用于对所述超声回波信号/数据进行波束合成处理,获得波束合成后的超声回波信号/数据;
    处理器,用于对所述波束合成后的超声回波信号进行处理,得到所述待检测对象的子宫区域的三维体数据;根据子宫区域的子宫内膜的图像特征,从所述子宫区域的三维体数据中识别出子宫内膜,得到所述子宫内膜的位置信息;根据所述子宫内膜的位置信息,基于所述三维体数据进行子宫内膜成像,得到子宫内膜图像;
    显示器,用于显示所述子宫内膜图像。
  28. 根据权利要求27所述的设备,其特征在于,所述处理器用于:
    根据所述子宫区域的子宫内膜与子宫基层组织的图像特征差异、和/或根据所述子宫区域的子宫内膜的可周期性变化的形态特征,从所述子宫区域的三维体数据中识别出所述子宫内膜,得到所述子宫内膜的位置信息。
  29. 根据权利要求27或28所述的设备,其特征在于,所述处理器用于:
    对所述子宫区域的三维体数据进行预设特征提取,得到至少一个候选感兴趣区域;获取已识别出所述子宫内膜的子宫区域的三维模板数据,根据该三维模板数据,获得子宫内膜的预设模板区域;将所述至少一个候选感兴趣区域和预设模板区域进行匹配,识别出匹配度最高的候选感兴趣区域作为所述待检测对象的子宫内膜的目标区域,并根据子宫内膜的目标区域在所述三维体数据中的位置,得到所述子宫内膜的位置信息。
  30. 根据权利要求29所述的设备,其特征在于,
    所述处理器还用于提取所述至少一个候选感兴趣区域的特征指数,所述特征指数包括形状特征、纹理特征、边界特征或灰度分布特征;基于所述特征指数,计算所述至少一个候选感兴趣区域与所述预设模板区 域的相关度;以及,将相关度最高且相关度超过预设阈值的所述候选感兴趣区域作为所述待检测对象的所述子宫内膜的目标区域。
  31. 根据权利要求29所述的设备,其特征在于,
    所述处理器用于对所述子宫区域的三维体数据进行图像分割,并对图像分割结果进行形态学操作处理,得到具有完整边界的所述至少一个候选感兴趣区域。
  32. 根据权利要求27或28所述的设备,其特征在于,
    所述处理器用于获取预设定位模型,所述预设定位模型包括已识别出所述子宫内膜的子宫区域的三维模板数据、以及子宫内膜在该三维模板数据中的标定信息;以及,基于所述预设定位模型中子宫内膜的标定信息,从所述待检测对象的子宫区域的三维体数据中识别出所述子宫内膜,定位出所述子宫内膜的位置信息。
  33. 根据权利要求32所述的设备,其特征在于,
    所述处理器还用于利用所述预设定位模型中子宫内膜的标定信息,通过深度学习或机器学习的方法学习得到子宫内膜的图像特征规律;基于所述子宫内膜的图像特征规律,从所述待检测对象的子宫区域的三维体数据中提取出含子宫内膜的目标区域,并输出该目标区域在三维体数据中的位置信息,作为所述子宫内膜的位置信息。
  34. 根据权利要求32所述的设备,其特征在于,
    所述处理器还用于获取至少两个待训练对象的三维训练体数据,所述三维训练体数据至少包括所述已识别出子宫内膜的子宫区域的三维模板数据;在所述三维训练体数据中标定出子宫内膜或子宫内膜的关联解剖结构,作为所述子宫内膜在该三维训练体数据中的标定信息;以及,基于所述三维训练体数据和所述子宫内膜的标定信息,采用机器学习或深度学习的方法对训练模型进行训练,得到所述预设定位模型。
  35. 根据权利要求27或28所述的设备,其特征在于,
    所述处理器用于从所述子宫区域的三维体数据中,获取识别出包括有子宫内膜的矢状面图像数据;根据所述矢状面图像数据,确定出子宫内膜的中心点;基于所述中心点,获取与所述矢状面图像数据正交的、且识别出包括有子宫内膜的横切面图像数据;基于识别出包括有所述子宫内膜的所述横切面图像数据和所述矢状面图像数据在所述子宫区域的三维体数据中的位置,得到所述子宫内膜的位置信息。
  36. 根据权利要求27或28所述的设备,其特征在于,
    所述处理器用于根据所述子宫内膜的位置信息,从所述三维体数据中提取出包括有子宫内膜的矢状面切面图像;启用并调节预设绘制框,以使预设绘制框覆盖住所述矢状面切面图像上的子宫内膜;以及,对与所述预设绘制框对应的目标三维体数据进行图像绘制,得到三维子宫内膜切面图像,所述目标三维体数据包含于所述子宫区域的三维体数据中。
  37. 根据权利要求36所述的设备,其特征在于,
    所述处理器还用于根据所述子宫内膜的位置信息,确定所述子宫内膜在矢状面切面图像上的大小和位置,对应调节预设绘制框的大小和位置;和/或,根据所述子宫内膜的位置信息,确定所述子宫内膜在所述子宫区域的三维体数据中的方位,根据所述预设绘制框在所述矢状面切面图像上的方位,调节所述子宫区域的三维体数据的方位。
  38. 根据权利要求27或28所述的设备,其特征在于,
    所述处理器用于根据所述子宫内膜的位置信息,从所述三维体数据中提取出包括有子宫内膜的矢状面切面图像,并在所述矢状面切面图像上自动生成子宫内膜的轨迹线;以及,根据所述轨迹线,对所述三维体数据进行子宫内膜曲面成像,得到所述子宫内膜切面图像。
  39. 根据权利要求38所述的设备,其特征在于,所述子宫内膜的位置信息包括:矢状面位置信息和横切面位置信息;
    所述处理器还用于调节所述子宫区域的三维体数据的方位至所述子宫内膜在横切面上的位置符合预设横切面位置;以及,基于方位调节后的所述子宫区域的三维体数据确定所述子宫内膜在矢状面上的位置,并根据子宫内膜在矢状面上的位置在所述矢状面切面图像上自动拟合出所述子宫内膜的所述轨迹线。
  40. 根据权利要求38或39所述的设备,其特征在于,
    所述处理器还用于在所述矢状面切面图像上自动生成子宫内膜的轨迹线之后,根据所述子宫内膜的位置信息,获取所述矢状面切面图像上所述子宫内膜的边缘信息;根据所述边缘信息和所述轨迹线,确定出图像绘制区域;对与所述图像绘制区域对应的目标三维体数据进行子宫内膜的曲面成像,得到反映子宫内膜厚度的子宫内膜切面图像。
  41. 根据权利要求27或28所述的设备,其特征在于,
    所述处理器用于根据所述子宫内膜的位置信息,拟合出子宫内膜冠状面;从所述三维体数据中获取与所述子宫内膜冠状面对应的灰度图像;所述灰度图像作为子宫内膜的标准切面图像。
  42. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有超声成像程序,所述超声成像程序可以被处理器执行,以实现权利要求1-16任一项所述的超声成像方法。
PCT/CN2018/125832 2018-12-29 2018-12-29 一种超声成像方法及设备 WO2020133510A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201880097250.8A CN112672691B (zh) 2018-12-29 2018-12-29 一种超声成像方法及设备
CN202311266520.2A CN117338340A (zh) 2018-12-29 2018-12-29 一种超声成像方法及设备
PCT/CN2018/125832 WO2020133510A1 (zh) 2018-12-29 2018-12-29 一种超声成像方法及设备
CN202311248499.3A CN117338339A (zh) 2018-12-29 2018-12-29 一种超声成像方法及设备
US17/359,615 US20210393240A1 (en) 2018-12-29 2021-06-27 Ultrasonic imaging method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/125832 WO2020133510A1 (zh) 2018-12-29 2018-12-29 一种超声成像方法及设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/359,615 Continuation US20210393240A1 (en) 2018-12-29 2021-06-27 Ultrasonic imaging method and device

Publications (1)

Publication Number Publication Date
WO2020133510A1 true WO2020133510A1 (zh) 2020-07-02

Family

ID=71127463

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/125832 WO2020133510A1 (zh) 2018-12-29 2018-12-29 一种超声成像方法及设备

Country Status (3)

Country Link
US (1) US20210393240A1 (zh)
CN (3) CN117338340A (zh)
WO (1) WO2020133510A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508941A (zh) * 2020-12-25 2021-03-16 上海深博医疗器械有限公司 三维超声扫描完整性检测方法及装置

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222956B (zh) * 2021-05-25 2023-09-15 南京大学 基于超声图像识别血管中斑块的方法
CN113520317A (zh) * 2021-07-05 2021-10-22 汤姆飞思(香港)有限公司 基于oct的子宫内膜检测分析方法、装置、设备及存储介质
US11657504B1 (en) 2022-06-28 2023-05-23 King Abdulaziz University System and method for computationally efficient artificial intelligence based point-of-care ultrasound imaging healthcare support
CN115953555B (zh) * 2022-12-29 2023-08-22 南京鼓楼医院 一种基于超声测量值的子宫腺肌病建模方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101103924A (zh) * 2007-07-13 2008-01-16 华中科技大学 基于乳腺x线摄片的乳腺癌计算机辅助诊断方法及其系统
CN101938953A (zh) * 2008-01-09 2011-01-05 精光股份有限公司 辅助乳房外科手术的解剖学识别和空间分析
US20130150718A1 (en) * 2011-12-07 2013-06-13 General Electric Company Ultrasound imaging system and method for imaging an endometrium
CN104657984A (zh) * 2015-01-28 2015-05-27 复旦大学 三维超声乳腺全容积图像感兴趣区域的自动提取方法
CN105433980A (zh) * 2015-11-20 2016-03-30 深圳开立生物医疗科技股份有限公司 一种超声成像方法、装置及其超声设备
CN108921181A (zh) * 2018-08-02 2018-11-30 广东工业大学 一种局部图像特征提取方法、装置、系统及可读存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003270654A1 (en) * 2002-09-12 2004-04-30 Baylor College Of Medecine System and method for image segmentation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101103924A (zh) * 2007-07-13 2008-01-16 华中科技大学 基于乳腺x线摄片的乳腺癌计算机辅助诊断方法及其系统
CN101938953A (zh) * 2008-01-09 2011-01-05 精光股份有限公司 辅助乳房外科手术的解剖学识别和空间分析
US20130150718A1 (en) * 2011-12-07 2013-06-13 General Electric Company Ultrasound imaging system and method for imaging an endometrium
CN104657984A (zh) * 2015-01-28 2015-05-27 复旦大学 三维超声乳腺全容积图像感兴趣区域的自动提取方法
CN105433980A (zh) * 2015-11-20 2016-03-30 深圳开立生物医疗科技股份有限公司 一种超声成像方法、装置及其超声设备
CN108921181A (zh) * 2018-08-02 2018-11-30 广东工业大学 一种局部图像特征提取方法、装置、系统及可读存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508941A (zh) * 2020-12-25 2021-03-16 上海深博医疗器械有限公司 三维超声扫描完整性检测方法及装置

Also Published As

Publication number Publication date
CN112672691A (zh) 2021-04-16
CN112672691B (zh) 2024-03-29
US20210393240A1 (en) 2021-12-23
CN117338339A (zh) 2024-01-05
CN117338340A (zh) 2024-01-05

Similar Documents

Publication Publication Date Title
WO2020133510A1 (zh) 一种超声成像方法及设备
US11229419B2 (en) Method for processing 3D image data and 3D ultrasonic imaging method and system
CN107480677B (zh) 一种识别三维ct图像中感兴趣区域的方法及装置
CN110177504B (zh) 超声图像中参数测量的方法和超声成像系统
US11464490B2 (en) Real-time feedback and semantic-rich guidance on quality ultrasound image acquisition
CN110945560B (zh) 胎儿超声图像处理
TWI501754B (zh) 影像辨識方法及影像辨識系統
CN111374708B (zh) 一种胎儿心率检测方法及超声成像装置、存储介质
CN112568933A (zh) 超声成像方法、设备和存储介质
CN112998755A (zh) 解剖结构的自动测量方法和超声成像系统
WO2022099704A1 (zh) 中晚孕期胎儿的超声成像方法和超声成像系统
CN115813433A (zh) 基于二维超声成像的卵泡测量方法和超声成像系统
CN111383323B (zh) 一种超声成像方法和系统以及超声图像处理方法和系统
CN113229850A (zh) 超声盆底成像方法和超声成像系统
WO2020133236A1 (zh) 一种脊柱的成像方法以及超声成像系统
WO2020132953A1 (zh) 一种成像方法及超声成像设备
CN113974688B (zh) 超声成像方法和超声成像系统
WO2022134049A1 (zh) 胎儿颅骨的超声成像方法和超声成像系统
JP7299100B2 (ja) 超音波診断装置及び超音波画像処理方法
Doerfler et al. Blood vessel detection in navigated ultrasound: An assistance system for liver resections
WO2024104388A1 (zh) 一种超声图像处理方法、装置、电子设备及存储介质
CN115886876A (zh) 胎儿姿态的评估方法、超声成像方法及超声成像系统
CN117224251A (zh) 目标尺寸测量方法、装置及设备
WO2020037563A1 (zh) 一种超声成像的方法及相关设备
CN116965852A (zh) 盆腔的超声测量方法和超声成像系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18944437

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09/11/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18944437

Country of ref document: EP

Kind code of ref document: A1