WO2020132953A1 - Imaging method and ultrasound imaging device - Google Patents

Imaging method and ultrasound imaging device

Info

Publication number
WO2020132953A1
WO2020132953A1 (application PCT/CN2018/123946)
Authority
WO
WIPO (PCT)
Prior art keywords
interest
imaging
image
region
parameter
Application number
PCT/CN2018/123946
Other languages
English (en)
Chinese (zh)
Inventor
林穆清
张明
王丰
邹耀贤
陆婷
Original Assignee
深圳迈瑞生物医疗电子股份有限公司
Application filed by 深圳迈瑞生物医疗电子股份有限公司
Priority to CN201880097321.4A (publication CN112654298A)
Priority to PCT/CN2018/123946
Publication of WO2020132953A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves

Definitions

  • the invention relates to the medical field, in particular to an imaging method and an ultrasound imaging device.
  • Owing to their non-invasive, low-cost, and real-time display characteristics, medical ultrasound images have become increasingly widely used in clinical practice.
  • Specifically, medical ultrasound imaging uses ultrasound echo signals to detect tissue structural information and displays that information in real time in two-dimensional images, so that the doctor can identify structural information in the two-dimensional image as a basis for clinical diagnosis.
  • the current mainstream medical ultrasound imaging technology is whole-region imaging. This technology applies the same imaging parameters to the entire current imaging range and trades off those parameters to make the whole-region image uniform and give it the best overall display effect; however, the resulting parameters may not be optimal for the image within a region of interest and cannot highlight the features within it.
  • medical ultrasound two-dimensional images have been widely used in examinations of the abdomen, heart, small organs, and blood vessels, and in obstetrics and gynecology, and provide an important diagnostic basis for structural lesions of organs.
  • the structural differences of many lesions are very subtle; in particular, small structural lesions such as small lesions and small vascular calcifications are often not easy to identify on traditional two-dimensional ultrasound images, so clinical diagnosis still faces many difficulties and challenges.
  • An embodiment provides an imaging method, including the following steps:
  • Imaging the region of interest based on the first imaging parameter or processing the image obtained by scanning the region of interest based on the display parameter to obtain a first imaging image
  • An embodiment provides an ultrasound imaging device, including:
  • An ultrasound probe for transmitting ultrasound waves to the object to be imaged to scan the object to be imaged, receiving ultrasound echoes returned from the object to be imaged, and converting the received ultrasound echoes into electrical signals;
  • An echo processing module for obtaining an ultrasound echo signal from the electrical signal;
  • a processor for obtaining an imaging image of the object to be imaged according to the ultrasound echo signal
  • a display for displaying the imaging image of the object to be imaged;
  • the processor is also used for:
  • An embodiment provides a computer-readable storage medium, including a program, which can be executed by a processor to implement the method as described above.
  • With the imaging method and ultrasound imaging device of the above embodiments, after the region of interest is acquired from the initial image, the category and/or characteristics of the tissue structure of interest are further determined within the region of interest; a first imaging parameter or a display parameter is obtained accordingly, and a first imaging image is then obtained through the first imaging parameter or the display parameter; the entire region of the object to be imaged is imaged based on a second imaging parameter to obtain a second imaging image, wherein the first imaging parameter and the second imaging parameter are at least partially different; the first imaging image and the second imaging image are fused to obtain the imaging image of the object to be imaged. Since the first imaging parameter or display parameter is related to the category and/or characteristics of the tissue structure of interest, the fused imaging image can display the tissue structure of interest better than the initial image, and the image effect is good.
  • FIG. 1 is a structural block diagram of a medical imaging device provided by an embodiment of the present invention.
  • FIG. 2 is a structural block diagram of an ultrasound imaging device provided by an embodiment of the present invention.
  • FIG. 3 is a flowchart of an imaging method provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a scanning method for obtaining a first imaging image provided by an embodiment of the present invention
  • FIG. 5 is a schematic diagram of another scanning method for obtaining a first imaging image provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of first imaging parameter optimization provided by an embodiment of the present invention.
  • FIG. 7 is another schematic diagram of first imaging parameter optimization provided by an embodiment of the present invention.
  • FIG. 8 is another schematic diagram of first imaging parameter optimization provided by an embodiment of the present invention.
  • FIG. 9 is another flowchart of an imaging method provided by an embodiment of the present invention.
  • The terms "connected" and "connection" in this application, unless otherwise specified, include both direct and indirect connections.
  • the medical imaging apparatus includes a scanning device 10, a processor 20 and a human-computer interaction device 30.
  • the human-machine interaction device 30 is used to receive user input and output visual information.
  • a touch screen may be used, which can both receive user input commands and display visual information; a mouse, keyboard, trackball, or joystick may also serve as the input device of the human-machine interaction device 30 to receive user input.
  • the display is used as the display device of the human-machine interaction device 30 to display the visual information.
  • the scanning device 10 is used to scan an object to be imaged to obtain image data of the object to be imaged.
  • the processor 20 is used for acquiring an initial image of the object to be imaged; acquiring at least one region of interest of the object to be imaged based on the initial image; determining the category and/or characteristics of the tissue structure of interest in the region of interest based on the region of interest;
  • the first imaging parameter or display parameter is obtained based on the category and/or characteristics of the tissue structure of interest;
  • the region of interest is imaged based on the first imaging parameter, or the image obtained by scanning the region of interest is processed based on the display parameter, to obtain the first imaging image; the entire region of the object to be imaged is imaged based on the second imaging parameter to obtain a second imaging image, wherein the first imaging parameter and the second imaging parameter are at least partially different; the first imaging image and the second imaging image are fused to obtain the imaging image of the object to be imaged.
  • since the first imaging parameter or display parameter is related to the category and/or characteristics of the tissue structure of interest, the fused imaging image can display the tissue structure of interest better than the initial image, and the image effect is good.
  • the final imaging image is obtained by fusing the first imaging image and the second imaging image. Since the second imaging image is obtained by imaging the entire region, the transition effect between the part of the imaging area within the region of interest and the part outside it is improved.
  • "and/or" includes three cases, taking the category and/or characteristics of the organizational structure of interest as an example, one case is the category and characteristics of the organizational structure of interest, and the other is the sense
  • the category of interest organization structure another case is the characteristic of interest organization structure.
  • the category of the tissue structure of interest refers to the category of the main tissue structure contained in the region of interest in the current image. It can be the same as the category of the object to be imaged, such as heart, kidney, or obstetric cerebellum; it can also be a finer tissue structure within the object to be imaged, such as a tumor, fluid cyst, calcification point, polyp, muscle fiber, or fat.
  • there may be one or more categories of tissue structure of interest; for example, there may be only a tumor, or a tumor and a calcification point may coexist.
  • the characteristics of the tissue structure of interest refer to the physical and mathematical statistical characteristics of that tissue structure.
  • the physical property may be the softness and hardness of the tissue structure, such as the softness and hardness of the tissue structure measured by elastography
  • the mathematical statistical characteristic may be one or more of the shape, length, width, area, number, and average brightness of the tissue structure.
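As an illustration, a minimal sketch (not from the patent) of computing several of these mathematical statistical characteristics for tissue structures that have already been segmented into a binary mask; the function name and field names are assumptions:

```python
import cv2
import numpy as np

def structure_statistics(image, mask):
    """image: 8-bit grayscale frame; mask: binary segmentation of the
    tissue structures of interest. Returns one record per structure."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    stats = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)              # width / length
        region = np.zeros_like(mask)
        cv2.drawContours(region, [c], -1, 255, thickness=-1)
        stats.append({
            "width": w,
            "length": h,
            "area": cv2.contourArea(c),               # area
            "mean_brightness": cv2.mean(image, mask=region)[0],
        })
    return stats                                       # len(stats) = "number"
```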
  • the present invention can be applied to various medical imaging systems, such as ultrasound imaging systems, X-ray imaging systems, nuclear magnetic resonance imaging (MRI) systems, positron emission computed tomography (PET) systems, or single photon emission computed tomography (SPECT) systems; that is, the medical imaging device of the present invention may be an ultrasound imaging device, an X-ray imaging device, a nuclear magnetic resonance device, a positron emission computed tomography imaging device, a single photon emission computed tomography imaging device, and the like.
  • the scanning device 10 may scan the object to be imaged to obtain image data of the object to be imaged.
  • the scanning device 10 includes a probe, a transmission/reception control circuit, and an echo processing module.
  • in other imaging systems, the scanning device 10 is the corresponding device for scanning the object to be imaged.
  • the processor 20 can control the scanning device 10 or the imaging system to implement the imaging method of the embodiment of the present invention described in detail below.
  • the image data may also include data received or obtained after scanning by the scanning device 10 that is unprocessed or has undergone certain processing but has not yet formed an image.
  • for ultrasound, the image data here includes ultrasound echo data obtained from the ultrasound echoes received by the probe, radio-frequency data after certain processing, or image data after an ultrasound image has been formed.
  • the present invention uses an ultrasound imaging system as an example to describe the embodiments of the present invention, that is, the present invention uses an ultrasound imaging device as an example.
  • the ultrasound imaging apparatus includes an ultrasound probe 110, a transmission/reception control circuit 120, an echo processing module 130, a processor 20, and a man-machine interaction device 30.
  • the man-machine interaction device 30 includes a display 310.
  • the transmission/reception control circuit 120 transmits the delayed-focused ultrasound pulses with a certain amplitude and polarity to the ultrasound probe 110.
  • the ultrasound probe 110 is excited by the ultrasound pulse, transmits ultrasound waves to the object to be imaged, receives ultrasound echoes with tissue information reflected from the object to be imaged after a certain delay, and converts the ultrasound echoes back into electrical signals.
  • the echo processing module 130 receives the electrical signal generated by the conversion of the ultrasound probe 110, obtains an ultrasound echo signal, performs processing such as filtering, amplification, and beam synthesis on the ultrasound echo signal, and then sends it to the processor 20 for related processing to obtain the imaging image of the object to be imaged.
  • the echo processing module 130 includes, for example, a beam synthesis module.
  • the ultrasound image obtained by the processor 20 is sent to the display 310 for display.
  • the imaging image obtained based on the ultrasound imaging device mainly refers to the ultrasound image.
  • the processor 20 may also implement the imaging method provided by the embodiment of the present invention.
  • the following describes the ultrasonic imaging system as an example in detail with reference to the drawings.
  • FIG. 3 shows a flowchart of an imaging method provided by an embodiment of the present invention, including the following steps:
  • Step 1 The processor 20 acquires the initial image of the object to be imaged.
  • the initial image may be obtained by an imaging system (e.g., an ultrasound imaging system) scanning the object to be imaged, for example as a full-area ultrasound image.
  • the "full area ultrasound image” mentioned here may mean that the ultrasound image includes all areas of the object to be imaged.
  • the "object to be imaged” referred to herein may be one or more organs or areas of a human body or animal currently or to be subjected to ultrasound scanning.
  • the initial image can also be externally input.
  • Step 2 The processor 20 acquires at least one region of interest of the object to be imaged based on the initial image.
  • the area of interest may be any area of interest to the user (for example, a doctor or an operator of other ultrasound imaging equipment, etc.) in the object to be imaged, such as an area suspected of having a small structural lesion, etc.
  • the structural information in the region of interest can be used as a basis for clinical diagnosis.
  • the manner of acquiring the region of interest includes, but is not limited to, three ways: a manual way in which the operator specifies the region, a semi-automatic way, and an automatic way. In the automatic and semi-automatic ways, the region of interest in the initial image is determined intelligently by recognizing the content of the initial image. The three ways are described one by one below.
  • Manual mode: the human-machine interaction interface of the ultrasound imaging device displays the initial image of the object to be imaged as described above.
  • the human-machine interaction device 30 includes an input device, such as a trackball; by operating the trackball, the operator manipulates a sampling frame displayed on the initial image of the object to be imaged, changing the position of the sampling frame's center point and/or the size of the sampling frame, and the area within the sampling frame is the region of interest.
  • Semi-automatic mode This mode is a combination of manual operation by the operator and image recognition technology.
  • the process may be: the processor 20 obtains the image type of the initial image of the object to be imaged as specified by the operator, and the initial image of the object to be imaged is matched with the corresponding first sample template image based on the image type to obtain the region of interest.
  • the image type indicates which type of image the initial image of the current object belongs to, such as liver image, kidney image, heart image, obstetric cerebellar image, etc.
  • from the image type, the operator can determine what the target of interest in the initial image of the object to be imaged is; that target of interest is the above-mentioned region of interest.
  • the examination mode selected by the operator can thus be used to indicate the image type of the initial image of the object to be imaged.
  • the initial image of the object to be imaged can be matched with the corresponding first sample template image based on the image type to obtain one or more regions of interest.
  • the corresponding first sample template image may be a sample image of the same image type as the initial image of the object to be imaged, and the sample image may be obtained offline or created by collecting multiple samples of the same image type through an ultrasound imaging device.
  • the template image of each sample is used as a matching reference to match the initial image of the object to be imaged to obtain one or more regions of interest.
  • matching the initial image of the object to be imaged with the corresponding first sample template image to obtain one or more regions of interest may proceed as follows: while traversing the initial image of the object to be imaged, an area block of the same size as the sample template image, centered on the current traversal position, is selected, and a similarity calculation between the selected area block and the first sample template image is performed; after the traversal ends, the center point of the area block with the best similarity is taken as the best matching position, and the region of interest is then delineated with the best matching position as its center.
  • the similarity calculation can adopt the SAD (Sum of Absolute Differences) method, the correlation coefficient method, or other suitable methods, as sketched below.
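A minimal sketch of this traversal-and-SAD matching, assuming single-channel images as NumPy arrays (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def sad_match(initial: np.ndarray, template: np.ndarray) -> tuple:
    """Slide a template-sized block over the initial image and return the
    top-left corner of the block with the smallest SAD (most similar)."""
    H, W = initial.shape
    h, w = template.shape
    tmpl = template.astype(np.float32)
    best_pos, best_sad = (0, 0), np.inf
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            block = initial[y:y + h, x:x + w].astype(np.float32)
            sad = np.abs(block - tmpl).sum()  # smaller SAD = more similar
            if sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos

# The region of interest is then delineated around the best match:
# y, x = sad_match(initial_image, first_sample_template)
```

In practice the same search can be done with cv2.matchTemplate, whose TM_SQDIFF and TM_CCOEFF modes correspond to the SAD-like and correlation-coefficient measures named above.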
  • Automatic mode: this mode determines the region of interest through image recognition technology.
  • the manner of determining the region of interest through the image recognition method may include but is not limited to the following two ways:
  • One way is to perform feature extraction on the initial image of the object to be imaged to obtain the features of the initial image, and to match those features against the features of second sample images to obtain the image type of the initial image of the object to be imaged.
  • the initial image of the object to be imaged is then matched with the corresponding first sample template image based on the image type to obtain the region of interest; this process can refer to the specific implementation in the semi-automatic mode above and is not elaborated again here.
  • the process of obtaining the image type based on feature matching can be regarded as a process of automatically determining the image type.
  • the process of automatically determining the image type can further refine the image type to which the initial image of the object to be imaged belongs, for example determining whether the image belongs to an obstetric or cardiac examination.
  • at least one second sample image can be obtained offline or collected by an ultrasound imaging device for each refined image type; since the image type of each second sample image is known, the refined image type of the initial image of the object to be imaged can be determined by matching against the features of the second sample images.
  • the matching process can be as follows:
  • Step 21 Feature extraction; the feature here may refer to a general term for the various attributes that distinguish the initial image of the object to be imaged from other images.
  • for any second sample image collected, feature extraction is performed on it so that the features of the second sample image can serve as reference features for subsequent matching of the initial image of the object to be imaged.
  • feature extraction may be performed on the initial image of the object to be imaged in the same feature extraction manner as the second sample image to obtain the characteristics of the initial image of the object to be imaged.
  • the feature extraction can use image processing methods, such as the Sobel operator, Canny operator, Roberts operator, and SIFT operator; or machine learning methods can be used to extract image features automatically, such as PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), and deep learning methods.
  • machine learning methods can be: CNN (Convolutional Neural Network), ResNet (Residual Network), VGG (Visual Geometry Group) and so on.
  • Step 22 Feature matching; after the features of the initial image of the object to be imaged are obtained, similarity calculations can be performed one by one against the features of the second sample images in the training sample library, and the image type of the second sample image with the most similar features is selected as the image type of the initial image of the object to be imaged. The feature similarity can be measured with the SAD algorithm, i.e., the sum of the absolute differences between the two sets of features is computed, and the smaller the SAD value, the more similar they are; or the correlation coefficient of the two sets of features can be computed, and the larger the correlation coefficient, the more similar they are; other suitable methods can also be used.
  • when deep learning is used, the feature matching process is: input the image into the network trained by the deep learning method (such as a CNN) in Step 21 to directly determine the image category.
  • Step 23 Automatically define one or more regions of interest. After the image type is automatically determined, the method of obtaining one or more regions of interest based on the image type is the same as the above-mentioned semi-automatic method, which will not be repeated here.
  • the image recognition method introduced above for determining the region of interest is applicable to various image types.
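Where a CNN replaces hand-crafted features (as Step 22 allows), the image type can be read directly from a classifier. A hedged sketch using torchvision's ResNet-18, one of the architectures named above; the class list and the checkpoint path are assumptions, not the patent's:

```python
import torch
import torchvision

IMAGE_TYPES = ["liver", "kidney", "heart", "obstetric_cerebellum"]  # assumed

model = torchvision.models.resnet18(num_classes=len(IMAGE_TYPES))
# model.load_state_dict(torch.load("image_type_cnn.pt"))  # trained offline
model.eval()

def classify_image_type(gray: torch.Tensor) -> str:
    """gray: (H, W) float tensor of the initial image, values in [0, 1]."""
    x = gray.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)  # to (1, 3, H, W)
    with torch.no_grad():
        logits = model(x)
    return IMAGE_TYPES[int(logits.argmax(dim=1))]

# print(classify_image_type(torch.rand(256, 256)))
```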
  • the motion area in the initial image of the object to be imaged may be the area of interest. Therefore, in the case where the image type of the initial image of the object to be imaged indicates that the object to be imaged is an object that periodically moves in the time dimension, the process of determining the region of interest through the image recognition technique may be as follows (another way):
  • Step 21' Obtain the motion features of the initial image of the object to be imaged; the motion features can be obtained by various methods, such as the frame difference method: the image information of the previous frame or several previous frames is directly subtracted from the image information of the current frame to extract the motion features of the current frame.
  • the motion features can also be obtained with OF (Optical Flow) or GMM (Gaussian Mixture Model) methods.
  • points in the motion area differ greatly between the current frame and the previous frame (or frames), so the absolute value obtained by the frame difference method is large there; for all other points, the difference between the current frame and the previous frame (or frames) is small, and the corresponding absolute value under the frame difference method is small, for example close to 0.
  • Step 22' Segment the initial image of the object to be imaged based on the motion feature to obtain the motion area in the initial image of the object to be imaged; after obtaining the motion feature, threshold segmentation combined with morphological processing can be used to segment the motion area.
  • Step 23' One or more regions of interest are determined based on the motion region; after segmenting the motion region, the motion region can be used to locate the region of interest.
  • the region of interest in the embodiments of the present invention may be rectangular (for example, in the case of an ultrasound imaging system using a linear array probe) or fan-shaped (for example, when the imaging system uses a convex array or phased array probe).
  • the localization method can fit one or more regular-shaped regions of interest based on the obtained motion regions, so that each motion region is contained separately.
  • the fitting method can be calculating the circumscribed rectangle or sector of the motion area, least-squares rectangle fitting, or other suitable fitting methods; a sketch of Steps 21' to 23' follows.
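A minimal sketch of Steps 21' to 23' under stated assumptions (8-bit grayscale frames, a fixed difference threshold, and a minimum region area, all illustrative):

```python
import cv2
import numpy as np

def motion_rois(curr, prev, thresh=25, min_area=100):
    """curr, prev: consecutive 8-bit grayscale frames."""
    diff = cv2.absdiff(curr, prev)                     # Step 21': motion feature
    _, motion = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)                 # Step 22': segmentation
    motion = cv2.morphologyEx(motion, cv2.MORPH_OPEN, kernel)
    motion = cv2.morphologyEx(motion, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Step 23': one circumscribed rectangle per motion region
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]
```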
  • the above method of locating the region of interest is also suitable for the semi-automatic method.
  • a semi-automatic method is to narrow the positioning range based on the operator's input, and then use the automatic positioning method to locate the final region of interest within the reduced range.
  • the purpose of narrowing the positioning range is to improve positioning efficiency and accuracy; the range can be narrowed by having the operator draw at least one set of points on the motion area to indicate the range of the region of interest, or automatically according to the operator's input information.
  • Another semi-automatic way is for the operator to draw a group of points or groups of points on the motion area to locate one or more initial areas of interest.
  • with the above automatic or semi-automatic positioning, the position and size of the frame of interest are changed in real time according to the image content.
  • the above method of locating the region of interest can locate it on each initial image of the object to be imaged in real time so as to update the region of interest in real time; it can also run at intervals, or be triggered by the operator pressing a button or by other means. Even for systems that need to monitor the region of interest in real time, the locating of the region of interest can be real-time while the image type acquisition is performed at intervals or after being triggered; the image type acquisition process can be specified by the operator or based on feature matching, as described above.
  • Step 3 The processor obtains the category and/or characteristics of the tissue structure of interest in the region of interest based on the region of interest.
  • the image type from Step 2 can be directly used as the category of the tissue structure of interest; other methods can also be used. For example, in a specific embodiment, the manner of obtaining the category and/or characteristics of the tissue structure of interest includes but is not limited to three modes: manual specification by the operator, a semi-automatic mode, and an automatic mode.
  • the processor 20 obtains, through the human-machine interaction device 30, the preset categories and/or characteristics that the operator selects for the tissue structure of interest using the human-machine interaction interface. That is, the human-machine interaction interface displays preset tissue structure categories and corresponding characteristics for each region of interest, from which the operator can select. Characteristics can, in addition to being selected, also be determined from information entered by the operator. In this way, the preset categories and/or characteristics selected by the operator are determined as the categories and/or characteristics of the tissue structure of interest.
  • Semi-automatic mode This mode is a combination of manual operation by the operator and image recognition technology.
  • the process may be: the processor 20 obtains the points set by the operator using the human-machine interaction interface (manual), determines the scope of image recognition according to those points, and then obtains the category and/or characteristics of the tissue structure of interest by the automatic methods described below, for example determining the category and/or characteristics of the tissue structure of interest based on the region of interest through an image recognition method (automatic).
  • if in Step 2 the region of interest was obtained based on one or more sets of points selected by the operator, the category and corresponding characteristics of the tissue structure of interest in the region of interest are identified automatically;
  • alternatively, one or more groups of points input by the operator in each region of interest are received anew, and the category and/or characteristics of the tissue structure in the region of interest are identified automatically based on the points input by the operator.
  • in either case, the operator's input narrows the range of category and/or characteristic recognition and detection, improving the processing speed.
  • in the above semi-automatic method, the operator may also input one or more sets of points to determine one or more initial tissue structure categories and/or characteristics for each region of interest.
  • the automatic methods below are then used to update the identified tissue structure categories and/or characteristics in the region of interest in real time.
  • Automatic mode: this mode determines the category and/or characteristics of the tissue structure of interest based on the region of interest through image recognition.
  • in both the semi-automatic mode and the automatic mode, there are two ways to determine the category and/or characteristics of the tissue structure of interest based on the region of interest through image recognition; the two ways are introduced one by one below.
  • The first way: extract features from the region of interest of the initial image to obtain the features of the region of interest, and match them against the features of the corresponding first sample images to obtain the category and/or characteristics of the tissue structure of interest in the region of interest.
  • the corresponding first sample image may be a sample image of a tissue structure having the same category and/or characteristics as the region of interest of the initial image; the sample images may be obtained offline, or created by acquiring multiple samples of tissue structures of the same category and/or characteristics with an ultrasound imaging device, and their features serve as the matching reference against the features of the region of interest of the initial image to obtain the category and/or characteristics of the tissue structure of interest.
  • multiple regions of interest may involve tissue structures of different categories and/or characteristics, and the first sample images feature-matched against each region of interest correspond to tissue structures with different categories and/or characteristics.
  • the above process of obtaining the category and/or characteristics of the tissue structure based on feature matching can be regarded as an automatic acquisition process; relative to operator specification, the automatic process can further refine the categories and/or characteristics, determining exactly which category of tissue structure the tissue of interest belongs to and which characteristics it has.
  • at least one first sample image can be obtained offline or acquired by an ultrasound imaging device for each tissue structure category and/or characteristic; since the tissue structure category and/or characteristics of each first sample image are known, matching against the features of the first sample images can determine the tissue structure category and/or characteristics of the region of interest.
  • the matching process can be as follows:
  • Step 31 Feature extraction; in some embodiments of the present invention, a training sample library is established in advance: whenever a first sample image is obtained, feature extraction is performed on it so that its features can serve as reference features for subsequent matching of the region of interest of the initial image; the first sample image and its corresponding features, categories and/or characteristics are stored in the training sample library.
  • the feature in Step 3 may refer to a general term for the various attributes that distinguish the region of interest of the initial image from other images or from other regions of the initial image.
  • feature extraction may be performed on the region of interest of the initial image in the same manner as on the first sample images to obtain the features of the region of interest of the initial image.
  • the feature extraction can use image processing methods, such as the Sobel operator, Canny operator, Roberts operator, and SIFT operator; or machine learning methods can be used to extract the features of the region of interest automatically, such as PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), and deep learning methods.
  • machine learning methods can be: CNN (Convolutional Neural Network), ResNet (Residual Network), VGG (Visual Geometry Group) and so on.
  • Step 32 Feature matching; after the features of the region of interest of the initial image are obtained, similarity calculations can be performed one by one against the features of the first sample images in the training sample library, and the category and/or characteristics of the first sample image with the most similar features are selected as the category and/or characteristics of the tissue structure of interest in the region of interest. The feature similarity can be measured with the SAD algorithm, i.e., the sum of the absolute differences between the two sets of features; the smaller the SAD value, the more similar they are. Alternatively, the correlation coefficient of the two sets of features can be computed; the larger the correlation coefficient, the more similar they are. Other suitable methods can also be used.
  • when deep learning is used, the feature matching process is: input the image into the network trained by the deep learning method (such as a CNN) in Step 31 to directly obtain the category and/or characteristics of the region of interest.
  • in some embodiments, an association between the image type of the initial image and the first sample images can be established in advance, that is, each type of initial image corresponds to a subset of the first sample images.
  • in this case, the image type of the initial image is obtained first, the first sample images that need further feature matching against the region of interest of the initial image are determined from the training sample library according to the image type, similarity against the features of those first sample images is then calculated one by one, and the category and/or characteristics of the first sample image with the most similar features are selected as the category and/or characteristics of the tissue structure of interest in the region of interest.
  • to obtain the image type of the initial image, the image type obtained in Step 2 may be used directly, or the image type of the initial image may be obtained again in the manner of Step 2.
  • the method of determining the category and/or characteristics of the tissue structure of interest through image recognition introduced above is applicable to various tissue structure categories and/or characteristics.
  • when the category and/or characteristics of the tissue structure in the region of interest indicate that the region of interest moves periodically in the time dimension, the process of determining the category and/or characteristics of the tissue structure of interest through image recognition can be as follows (the second way):
  • Step 31' Obtain the motion features of the region of interest of the initial image; the motion features can be obtained by various methods, such as the frame difference method: the image information of the previous frame or several previous frames is directly subtracted from the image information of the current frame to extract the motion features of the current frame.
  • the motion features can also be obtained with OF (Optical Flow) or GMM (Gaussian Mixture Model) methods.
  • points in the motion area differ greatly between the current frame and the previous frame (or frames), so the absolute value obtained by the frame difference method is large there; for all other points, the difference between the current frame and the previous frame (or frames) is small, and the corresponding absolute value under the frame difference method is small, for example close to 0.
  • Step 32' Determine the category and/or characteristics of the tissue structure of interest in the region of interest based on the motion features.
  • the acquisition of the category and/or characteristics of the tissue structure of interest may be performed at intervals, in real time, or after a category-and/or-characteristics acquisition instruction is triggered.
  • steps 2 and 3 may also be performed in the following manner:
  • a deep learning method is used to jointly learn at least one region of interest and the category and/or characteristics of its tissue structure of interest. For example, a convolutional neural network performs deep learning on the second sample images in the training sample library to obtain their features, which correspond to the image type of the initial image; a convolutional neural network performs deep learning on the first sample template images in the training sample library to obtain their features, which correspond to the region of interest; and a convolutional neural network performs deep learning on the first sample images in the training sample library to obtain their features, which correspond to the category and/or characteristics of the tissue structure of interest.
  • The first is a sliding-window-based method: first, feature extraction is performed on the area within the sliding window of the input image.
  • the feature extraction method is the same as described in Step 21 and Step 31 and is not detailed again; the extracted features are then classified with discriminators such as an SVM (Support Vector Machine) and/or a random forest to determine whether the current sliding window is a region of interest and, if so, the corresponding category and characteristics of the tissue structure within it, as sketched below.
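A hedged sketch of this sliding-window scheme, with HOG features standing in for the operators of Steps 21/31 and an SVM as the discriminator; the window size and the dummy training data are placeholders for a real labelled sample set:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

WIN = 64                                   # sliding-window size (assumed)
FEAT_DIM = hog(np.zeros((WIN, WIN))).size  # HOG vector length for this window

# Placeholder training data; a real system trains on labelled sample windows.
clf = SVC()
clf.fit(np.random.rand(20, FEAT_DIM), np.array([0, 1] * 10))

def detect(image, step=16):
    """Return top-left corners of windows classified as regions of interest."""
    hits = []
    for y in range(0, image.shape[0] - WIN + 1, step):
        for x in range(0, image.shape[1] - WIN + 1, step):
            feat = hog(image[y:y + WIN, x:x + WIN]).reshape(1, -1)
            if clf.predict(feat)[0] == 1:
                hits.append((y, x))
    return hits
```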
  • the second method is detection and recognition with a deep-learning-based Bounding-Box (bounding box regression) method.
  • the common form is: for the input image, feature learning and parameter regression are performed by stacking convolutional base layers and fully connected layers.
  • for an input image such as the initial image, the network can directly regress the Bounding-Box of the corresponding region of interest and at the same time determine the category and characteristics of the tissue structure within the region of interest.
  • Common networks include R-CNN (Region-CNN), Fast R-CNN, Faster R-CNN, SSD, YOLO, etc.
  • the third method is an end-to-end semantic segmentation network method based on deep learning. This method is similar in structure to the second, Bounding-Box-based deep learning method.
  • the difference is that the fully connected layers are removed and upsampling or deconvolution layers are added so that the input and output have the same size, directly yielding the region of interest of the initial image and the corresponding categories and characteristics of the tissue structure within it. For example, feature learning and parameter regression are performed by stacking convolutional base layers, with upsampling or deconvolution layers making the input and output the same size; for an input image such as the initial image, the network can directly regress the corresponding region of interest and determine the category and characteristics of the tissue structure within it. Common networks include FCN, U-Net, Mask R-CNN, etc.
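A compact sketch of such a fully convolutional layout, where the upsampling stage restores the input size so that each output pixel is labelled directly; layer sizes and the class list are illustrative assumptions, not the patent's:

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, n_classes=3):  # e.g. background/tumor/calcification
        super().__init__()
        self.encoder = nn.Sequential(          # stacked convolutional layers
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(          # deconvolution restores size
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))   # same H x W as the input

# seg = TinyFCN()(torch.rand(1, 1, 128, 128))  # -> (1, 3, 128, 128)
```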
  • Step 4 The processor obtains the first imaging parameter based on the category of the tissue structure of interest, based on its characteristics, or based on a combination of the two; for example, by matching the category and/or characteristics of the tissue structure of interest against a preset category-parameter correspondence table and/or characteristic-parameter correspondence table to obtain the corresponding first imaging parameter.
  • alternatively, the optimal imaging parameter can be iterated to automatically and used as the first imaging parameter.
  • the automatic iteration can first construct an objective function and then use optimization methods such as gradient descent or Newton's method to iterate to the optimal imaging parameters, as sketched below.
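A minimal sketch, under assumed names, of the table lookup plus automatic iteration: the correspondence table, the image-quality objective, and its peak are hypothetical stand-ins, since the patent leaves their exact form open:

```python
# Assumed category-parameter correspondence table (illustrative values)
CATEGORY_TO_PARAMS = {"calcification": {"emission_freq_mhz": 9.0},
                      "tumor": {"emission_freq_mhz": 5.0}}

def quality(freq_mhz):
    """Hypothetical objective, e.g. a contrast-to-noise score of the region
    of interest rescanned at this emission frequency (toy peak at 7.5)."""
    return -(freq_mhz - 7.5) ** 2

def iterate_parameter(p0, lr=0.1, steps=50, eps=1e-3):
    """Finite-difference gradient ascent on the objective."""
    p = p0
    for _ in range(steps):
        grad = (quality(p + eps) - quality(p - eps)) / (2 * eps)
        p += lr * grad
    return p

# Start from the table lookup, then refine automatically:
start = CATEGORY_TO_PARAMS["tumor"]["emission_freq_mhz"]
print(iterate_parameter(start))  # converges toward 7.5 with this toy objective
```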
  • Step 5 The processor images the region of interest based on the first imaging parameter to obtain the first imaging image. Since multiple regions of interest may have been acquired, in Step 5 each corresponding region of interest on the object to be imaged may be imaged in one scan based on its first imaging parameter, obtaining a first imaging image of each region of interest. For example, as shown in FIG. 4, during one scan, different regions of interest on the object to be imaged are scanned with different first imaging parameters (such as different emission voltages or different line densities) to obtain the first imaging image of each region of interest.
  • alternatively, as shown in FIG. 5, the first region of interest and the second region of interest on the object to be imaged are scanned using the same scanning parameters, but different signal/image processing parameters (such as different contrasts) are applied to the first region of interest and the second region of interest, so that the first imaging image of each region of interest can still be obtained in one scan.
  • the first imaging image of each region of interest may also be obtained by imaging the corresponding region of interest on the object to be imaged through multiple scans, each based on its first imaging parameter.
  • the "single scan” and “multiple scans” here not only refer to the front-end emission scanning step of scanning the object to be imaged by transmitting ultrasound, but also include the back-end signal/image processing step of imaging based on the ultrasonic echo signal.
  • Step 6 The processor images all regions of the object to be imaged based on the second imaging parameter to obtain a second imaging image.
  • the "entire area" that is the scan target for imaging with the second imaging parameter is the whole area of the current object to be imaged, containing the aforementioned region of interest; that is, the imaged area includes the region of interest itself in addition to the area outside it. Accordingly, the obtained second imaging image is an image of the entire region of the current object to be imaged including the region of interest, not merely an image of the area outside the region of interest.
  • that the first imaging parameter and the second imaging parameter are at least partially different may mean: they are the same type of parameter with different values; or they are different types of parameters, for example the first imaging parameter includes parameter A and parameter B while the second includes parameter C and parameter D; or the first imaging parameter includes the second imaging parameter, for example the first includes parameter A and parameter B while the second includes parameter A; and so on.
  • the first imaging parameter and the second imaging parameter may include scanning parameters and signal/image processing parameters.
  • the first imaging parameter and the second imaging parameter may be at least one of emission frequency, emission voltage, line density, number of focal points, focal position, speckle noise suppression parameter, and image enhancement parameter.
  • a two-pass imaging mode over the entire region and the region of interest is adopted; therefore, during imaging of the region of interest, the first imaging parameter may be optimized according to the category and/or characteristics of the tissue structure of interest to optimize the region of interest.
  • optimizing the emission frequency within the region of interest frees the region of interest from the emission frequency constraints of whole-region imaging.
  • for example, the emission frequency of the region of interest during scanning imaging can be increased, thereby improving the resolution of the first imaging image; or the emission frequency of the region of interest can be reduced, thereby increasing the penetration of the first imaging image.
  • the emission voltage can also be optimized: for example, a lower emission voltage is used for full-area scanning imaging and a higher emission voltage is used for scanning imaging of the region of interest, as shown in FIG. 6, thereby improving the image quality within the region of interest while the transmission power still meets the ultrasound system's sound field limits.
  • one of the ultrasound sound field limit indicators is Ispta (spatial peak temporal average sound intensity), which must be less than or equal to 480 mW/cm²; a small sketch of choosing the region-of-interest voltage under this limit follows.
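A minimal sketch of picking the region-of-interest emission voltage under this limit; the quadratic voltage-to-Ispta model and the numbers are illustrative assumptions only, not the patent's or any regulatory model:

```python
ISPTA_LIMIT = 480.0  # mW/cm^2, the sound field limit named above

def estimated_ispta(voltage):
    return 0.05 * voltage ** 2  # assumed acoustic model, for illustration

def max_roi_voltage(v_full_area, v_step=1.0):
    """Raise the ROI voltage from the full-area value while the estimated
    Ispta stays within the sound field limit."""
    v = v_full_area
    while estimated_ispta(v + v_step) <= ISPTA_LIMIT:
        v += v_step
    return v

# print(max_roi_voltage(40.0))  # with this toy model: 97.0
```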
  • in the ultrasound system, the line density, the number of focal points, and the scanning frame rate constrain one another.
  • optimizing the line density and the number of focal points within the region of interest can improve the imaging image quality, as shown in FIG. 7 or FIG. 8, where the horizontal axis in FIGS. 6 to 8 represents the probe position in the ultrasound system.
  • the present invention can realize adaptive local enhancement imaging: speckle noise suppression parameters and image enhancement parameters within the region of interest can be optimized according to the category and characteristics of the tissue structure in the local region of interest. Since the local region of interest is usually smaller than the entire area, more complex algorithms and more effective parameters can be applied under limited computing power, improving the image quality within the region of interest.
  • the adaptive local enhancement imaging method is not limited to the parameter optimization described above; it also includes optimizing various other transmission, reception, and post-processing image parameters, such as transmission aperture, transmission waveform, spatial compounding, frequency compounding, line compounding, and frame correlation, by optimizing the first imaging parameter corresponding to the local region of interest to obtain a better first imaging image.
  • the aforementioned first imaging parameter and second imaging parameter may be set accordingly.
  • Step 7 The processor fuses the first imaging image and the second imaging image to obtain an imaging image of the object to be imaged.
  • the fusion process may be: acquire the first fusion parameter of the first imaging image and the second fusion parameter of the second imaging image, then fuse the first imaging image and the second imaging image based on the first fusion parameter and the second fusion parameter to obtain the imaging image of the object to be imaged, and display it on the display.
  • the region of interest has very good image effects due to the optimization of imaging parameters and display parameters.
  • the fusion can be based on the following formula:

    $I_o(x, y) = \alpha_i \, I_i^{(1)}(x, y) + \beta_i \, I_i^{(2)}(x, y)$, for $(x, y)$ in the $i$-th region of interest, $i = 1, \ldots, N$

  • where $(x, y)$ represents the position of each pixel in the $i$-th region of interest (ROI); $N$ is an integer greater than or equal to 1; $I_i^{(1)}$ is the $i$-th region of interest of the first imaging image and $I_i^{(2)}$ is the $i$-th region of interest of the second imaging image, indexed $1$ to $N$ to distinguish them; $\alpha_i$ (that is, $\alpha_1, \alpha_2 \ldots \alpha_N$) are the first fusion parameters of the regions of interest of the first imaging images, which may be the same or different for each first imaging image; $\beta_i$ (that is, $\beta_1, \beta_2 \ldots \beta_N$) are the second fusion parameters corresponding to the $i$-th region of interest of the second imaging image.
  • $I_o$ is the imaging image of the object to be imaged, that is, the fusion result.
  • the image corresponding to the fusion result depends on the values of the first fusion parameter and the second fusion parameter.
  • the first fusion parameter and the second fusion parameter can be set according to actual conditions. In some embodiments, one can take $\alpha_i + \beta_i = A$, where $A$ is generally 1 but may also be another value close to 1; for example, when $A > 1$, the overall brightness level of the fused output image is increased. In other embodiments, they can also be set in other ways.
  • the values of the first fusion parameter $\alpha$ and the second fusion parameter $\beta$ are not fixed; they can differ according to the pixel, the position in the image, and the generation time of the image. Any value, or the sum of all values, of $\alpha$ ($\alpha_1, \alpha_2 \ldots \alpha_N$) and $\beta$ ($\beta_1, \beta_2 \ldots \beta_N$) may also equal 1 or 0.
  • in some embodiments, the values of the first fusion parameter and the second fusion parameter may be not less than 0; in other cases, the value of a fusion parameter may be less than or equal to 0. During the fusion process, the first fusion parameter $\alpha$ may take different values at different positions of the first imaging image, and the second fusion parameter $\beta$ may likewise take different values at different positions of the second imaging image: for example, if the edge of the region of interest needs to fuse more image information from the second imaging image, the value of $\beta$ at the edge of the region of interest may be greater than its value at other positions, and correspondingly, where other positions fuse more image information from the first imaging image, the value of $\alpha$ at those positions is greater than its value at the edge. If the first imaging image and the second imaging image are real-time images that change over time, the values of $\alpha$ and $\beta$ may also vary with time; and so on.
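A minimal sketch of this spatially varying fusion with $A = 1$: inside each region of interest, $\beta$ rises toward the ROI edge (fusing more of the second, whole-area image there) while $\alpha$ takes the remainder. Rectangular ROIs and the feather width are illustrative choices:

```python
import cv2
import numpy as np

def fuse(first, second, rois, feather=8):
    """first, second: grayscale images of the same size; rois: list of
    (x, y, w, h) rectangles. alpha + beta = 1 everywhere (A = 1)."""
    out = second.astype(np.float32).copy()         # whole area from image 2
    f32_first = first.astype(np.float32)
    for (x, y, w, h) in rois:
        mask = np.zeros(first.shape, np.uint8)
        mask[y:y + h, x:x + w] = 255
        # distance to the ROI edge, ramped to [0, 1] over the feather band
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 3)
        alpha = np.clip(dist / feather, 0.0, 1.0)  # small near the edge
        beta = 1.0 - alpha                         # more of image 2 at edges
        out = np.where(mask > 0, alpha * f32_first + beta * out, out)
    return np.clip(out, 0, 255).astype(np.uint8)
```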
  • the imaging method provided by this embodiment of the present invention can image the region of interest of the object to be imaged with the first imaging parameter and the entire region of the object to be imaged with the second imaging parameter separately, obtaining the first imaging image of the region of interest and the second imaging image of the entire area. The first imaging parameter can thus be set specifically for the category and/or characteristics of the tissue structure of interest, to enhance and optimize the desired aspects of the image of the tissue structure of interest. In the process of fusing the first imaging image and the second imaging image, more image information of the second imaging image can be fused at the edge of the first imaging image, so that the region of interest transitions smoothly into the surrounding area, improving the transition effect and keeping the overall fused imaging image visually consistent.
  • the first imaging parameter and the second imaging parameter are different, so that during the fusion process, the first imaging image can use the image information of the region corresponding to the region of interest in the second imaging image to enhance the image quality of the region of interest.
  • the above second imaging image corresponds to the entire area of the object to be imaged, and its image shape is a conventional one, which makes parameter control of the second imaging parameter simpler than for an unconventional shape. Furthermore, because the second imaging image covers the entire area, the image outside the region of interest is displayed in real time along with the region of interest, realizing real-time display of the whole area.
  • the first imaging image may be obtained by imaging the region of interest based only on the first imaging parameter; or, on this basis, further image processing may be performed. For example, the display parameters are obtained based on the category of the tissue structure of interest, based on its characteristics, or based on a combination of the two; the region of interest is imaged based on the first imaging parameter, and the image thus obtained is then processed based on the display parameters to obtain the first imaging image.
  • alternatively, the first imaging image may be obtained by processing the image obtained by scanning the region of interest based only on the display parameters.
  • the region of interest includes the tissue structure of interest and its background; the display parameters are at least one of: clarity, contrast of the tissue structure of interest, color of the tissue structure of interest, contrast of the boundary of the tissue structure of interest, color of the boundary of the tissue structure of interest, contrast of the background of the tissue structure of interest, and color of the background of the tissue structure of interest.
  • obtaining the display parameters based on the category of the tissue structure of interest, based on its characteristics, or based on a combination of the two includes: matching the category and/or characteristics of the tissue structure of interest against the preset category-parameter correspondence table and/or characteristic-parameter correspondence table to obtain the corresponding display parameters; or automatically iterating to the optimal display parameters according to the category and/or characteristics of the tissue structure of interest in each region of interest.
  • The display parameters can be used to form the first imaging image (as described above). In some embodiments they can also be applied to the fused image: the first imaging image and the second imaging image are fused, and the region of interest in the fused image is then processed based on the display parameters to obtain the imaging image of the object to be imaged.
  • In addition to re-imaging the region of interest with first imaging parameters called differently according to the category and/or characteristics of the tissue structure, the region of interest can also be enhanced by changing the display mode of the tissue structure within it and re-imaging.
  • The embodiment shown in FIG. 9 is the same as the embodiment shown in FIG. 3, except that the method for obtaining the first imaging image (step 4', step 5') differs.
  • In step 4', the processor obtains display parameters based on the category and/or characteristics of the tissue structure of interest.
  • In step 5', the processor processes the image obtained by imaging the region of interest based on the display parameters to obtain the first imaging image.
  • According to the category and/or characteristics of the tissue structures identified in the region of interest, some or all important tissue structures may be highlighted or displayed in different colors: for example, identified calcification points may be highlighted while the non-calcification area is changed to blue. Corresponding contour edges may be drawn for important tissue structures, for example the boundary of a mass in the region of interest. The contrast of the borders of important tissues may be increased, for example by applying different contrast enhancements according to the size and characteristics of the different tumors identified. Different image enhancement algorithms may also be used for different regions of interest according to the identified category and/or characteristics: for region of interest 1, where the image contrast is low, histogram equalization is used to improve the clarity of the image; for region of interest 2, where more noise is detected, bilateral filtering is used for noise reduction (a per-region enhancement sketch follows these paragraphs).
  • Any tangible, non-transitory computer-readable storage medium can be used, including magnetic storage devices (hard disks, floppy disks, etc.), optical storage devices (CD-ROM, DVD, Blu-ray discs, etc.), flash memory, and the like.
  • These computer program instructions can be loaded onto a general-purpose computer, a special-purpose computer, or other programmable data processing equipment to form a machine, so that the instructions executed on the computer or other programmable data processing device produce a device that implements the specified functions.
  • These computer program instructions can also be stored in a computer-readable memory that can direct the computer or other programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory form an article of manufacture that includes a device implementing the specified functions.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operating steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the specified functions.
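
The edge-feathered fusion described above can be illustrated with a minimal sketch. This is not the patent's implementation; it assumes grayscale numpy images, a rectangular region of interest given as (x0, y0, x1, y1), and a hypothetical feathering width in pixels.

```python
# Minimal sketch of edge-feathered fusion (illustrative, not the
# patent's algorithm). Assumes grayscale numpy images and a
# rectangular ROI (x0, y0, x1, y1); `feather_px` is a hypothetical
# blending width.
import numpy as np

def fuse_images(roi_img, full_img, roi, feather_px=16):
    """Blend the ROI-optimized first image into the full-area second
    image, weighting the second image more near the ROI edges so the
    transition into the surrounding area stays smooth."""
    x0, y0, x1, y1 = roi
    fused = full_img.astype(np.float32).copy()

    # Per-pixel weight: 1.0 deep inside the ROI, ramping linearly to
    # 0.0 at the ROI border over `feather_px` pixels.
    h, w = roi_img.shape[:2]
    ys = np.minimum(np.arange(h), np.arange(h)[::-1])  # distance to top/bottom edge
    xs = np.minimum(np.arange(w), np.arange(w)[::-1])  # distance to left/right edge
    alpha = np.clip(np.minimum.outer(ys, xs) / float(feather_px), 0.0, 1.0)

    fused[y0:y1, x0:x1] = (alpha * roi_img.astype(np.float32)
                           + (1.0 - alpha) * fused[y0:y1, x0:x1])
    return fused.astype(full_img.dtype)
```

With feather_px=16, pixels more than 16 pixels inside the ROI come entirely from the first imaging image, while pixels at the ROI border come entirely from the second, which is one way of realizing the smooth transition described above.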
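
The preset category-parameter correspondence table can likewise be sketched. The categories, fields, and preset values below are illustrative assumptions, not values disclosed in the patent.

```python
# Sketch of a preset category-parameter correspondence table
# (all categories, fields, and values here are hypothetical).
import dataclasses
from dataclasses import dataclass

@dataclass
class DisplayParams:
    clarity: float             # e.g. sharpening strength
    structure_contrast: float  # contrast of the tissue structure of interest
    structure_color: str       # overlay color for the structure
    boundary_contrast: float   # contrast of the structure boundary
    background_color: str      # color applied to the background

DEFAULT_PARAMS = DisplayParams(1.0, 1.0, "none", 1.0, "none")

CATEGORY_TABLE = {
    "calcification": DisplayParams(1.5, 1.8, "yellow", 2.0, "blue"),
    "mass":          DisplayParams(1.2, 1.4, "none",   1.6, "none"),
}

def lookup_display_params(category, characteristics=None):
    """Match the identified category (and optionally characteristics,
    e.g. lesion size) against the preset table."""
    params = CATEGORY_TABLE.get(category, DEFAULT_PARAMS)
    if characteristics and characteristics.get("size_mm", 0) < 3:
        # Small structures get a stronger boundary-contrast preset;
        # dataclasses.replace avoids mutating the shared table entry.
        params = dataclasses.replace(
            params, boundary_contrast=params.boundary_contrast * 1.25)
    return params
```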
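
The per-region choice of enhancement algorithm in the example above (histogram equalization for a low-contrast region, bilateral filtering for a noisy one) might look like the following sketch; the decision thresholds are assumptions.

```python
# Sketch of selecting a different enhancement per region of interest.
# The contrast and noise thresholds are illustrative assumptions.
import cv2
import numpy as np

def enhance_roi(image, roi, contrast_thresh=30.0, noise_thresh=8.0):
    """Enhance one rectangular ROI of a grayscale uint8 image in place."""
    x0, y0, x1, y1 = roi
    patch = np.ascontiguousarray(image[y0:y1, x0:x1])

    if patch.std() < contrast_thresh:
        # Low contrast: spread the gray-level histogram to improve clarity.
        image[y0:y1, x0:x1] = cv2.equalizeHist(patch)
    else:
        # Crude noise estimate: residual after a light median blur.
        noise = np.abs(patch.astype(np.float32)
                       - cv2.medianBlur(patch, 3).astype(np.float32)).mean()
        if noise > noise_thresh:
            # Noisy: bilateral filtering denoises while preserving edges.
            image[y0:y1, x0:x1] = cv2.bilateralFilter(patch, 9, 75, 75)
    return image
```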
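
Finally, the steps can be strung together: display parameters are looked up, the region-of-interest image is processed, and the result is fused with the full-area image. This glue is purely illustrative and builds on the two sketches above; the contrast multiplication is a stand-in for whatever display processing an implementation actually applies.

```python
# Illustrative glue combining the sketches above (lookup_display_params
# and fuse_images); requires numpy as np from the fusion sketch.
def form_imaging_image(roi_scan, full_scan, roi, category, characteristics=None):
    """roi_scan: ROI imaged with the first imaging parameters;
    full_scan: entire area imaged with the second imaging parameters."""
    params = lookup_display_params(category, characteristics)

    # Step 5' stand-in: apply one display parameter to the ROI image.
    first_img = np.clip(roi_scan.astype(np.float32) * params.structure_contrast,
                        0, 255).astype(np.uint8)

    # Fuse the processed first imaging image into the second.
    return fuse_images(first_img, full_scan, roi)
```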

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

The present invention relates to an imaging method and an ultrasound imaging device using the imaging method. The method comprises: after acquiring a region of interest (2) from an initial image, further determining a category and/or a characteristic (3) of a tissue structure of interest in the region of interest; obtaining a first imaging parameter (4) or a display parameter (4') according to the category and/or the characteristic, so as to obtain a first imaging image (5, 5') by means of the first imaging parameter or the display parameter; imaging the entire area of an object to be imaged on the basis of a second imaging parameter at least partially different from the first imaging parameter, so as to obtain a second imaging image (6); and fusing the first imaging image and the second imaging image to obtain an imaging image of the object to be imaged (7). Because the first imaging parameter or display parameter is associated with the category and/or the characteristic of the tissue structure of interest, the imaging image obtained after fusion displays the tissue structure of interest better than the initial image does, and the image effect is good.
PCT/CN2018/123946 2018-12-26 2018-12-26 Imaging method and ultrasound imaging device WO2020132953A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880097321.4A CN112654298A (zh) 2018-12-26 2018-12-26 Imaging method and ultrasound imaging device
PCT/CN2018/123946 WO2020132953A1 (fr) 2018-12-26 2018-12-26 Imaging method and ultrasound imaging device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/123946 WO2020132953A1 (fr) 2018-12-26 2018-12-26 Imaging method and ultrasound imaging device

Publications (1)

Publication Number Publication Date
WO2020132953A1 true WO2020132953A1 (fr) 2020-07-02

Family

ID=71126797

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/123946 WO2020132953A1 (fr) 2018-12-26 2018-12-26 Imaging method and ultrasound imaging device

Country Status (2)

Country Link
CN (1) CN112654298A (fr)
WO (1) WO2020132953A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114027872A (zh) * 2021-09-24 2022-02-11 Wuhan United Imaging Healthcare Co., Ltd. Ultrasound imaging method and system, and computer-readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5315999A (en) * 1993-04-21 1994-05-31 Hewlett-Packard Company Ultrasound imaging system having user preset modes
CN101513353A (zh) * 2009-03-19 2009-08-26 Wuxi Chison Technology Co., Ltd. Method for selecting preset values in an ultrasonic diagnostic apparatus with a fingerprint recognizer
WO2018058632A1 (fr) * 2016-09-30 2018-04-05 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Imaging method and system
CN108209970A (zh) 2016-12-09 2018-06-29 General Electric Company Variable sound-speed beamforming based on automatic detection of tissue type in ultrasound imaging
CN108451543A (zh) 2017-02-17 2018-08-28 Hao Xiaohui Automated ultrasound imaging system and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103156636B (zh) * 2011-12-15 2016-05-25 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Ultrasound imaging device and method
US20150351726A1 (en) * 2014-06-05 2015-12-10 Siemens Medical Solutions Usa, Inc. User event-based optimization of B-mode ultrasound imaging
US10813595B2 (en) * 2016-12-09 2020-10-27 General Electric Company Fully automated image optimization based on automated organ recognition

Also Published As

Publication number Publication date
CN112654298A (zh) 2021-04-13

Similar Documents

Publication Publication Date Title
Meiburger et al. Automated localization and segmentation techniques for B-mode ultrasound images: A review
Menchón-Lara et al. Automatic detection of the intima-media thickness in ultrasound images of the common carotid artery using neural networks
EP1690230B1 (fr) Method for automatic segmentation in multidimensional intravascular ultrasound imaging
KR101121396B1 (ko) System and method for providing a 2D CT image corresponding to a 2D ultrasound image
CN109767400B (zh) Ultrasound image speckle noise removal method using guided trilateral filtering
US20110125016A1 (en) Fetal rendering in medical diagnostic ultrasound
JP2016195764A (ja) Medical image processing apparatus and program
KR20110013738A (ko) System and method for providing a 2D CT image corresponding to a 2D ultrasound image
JPWO2010116965A1 (ja) Medical image diagnostic apparatus, region-of-interest setting method, medical image processing apparatus, and region-of-interest setting program
US10405832B2 (en) Ultrasound diagnosis apparatus and method
JP2020503099A (ja) Prenatal ultrasound imaging
US20200330076A1 (en) An ultrasound imaging system and method
US11534133B2 (en) Ultrasonic detection method and ultrasonic imaging system for fetal heart
KR20120102447A (ko) Diagnostic apparatus and method
CN117017347B (zh) Image processing method and system for ultrasound device, and ultrasound device
WO2020132953A1 (fr) Imaging method and ultrasound imaging device
CN109310388B (zh) Imaging method and system
EP4006832A1 (fr) Predicting the probability that an individual has one or more lesions
CN114159099A (zh) Breast ultrasound imaging method and device
CN111383323B (zh) Ultrasound imaging method and system, and ultrasound image processing method and system
CN112294361A (zh) Ultrasound imaging device and method for generating sectional images of the pelvic floor
CN117557591A (zh) Contour editing method based on ultrasound images and ultrasound imaging system
Abid et al. Improving Segmentation of Breast Ultrasound Images: Semi Automatic Two Pointers Histogram Splitting Technique
CN114202514A (zh) Breast ultrasound image segmentation method and device
CN115778435A (zh) Ultrasound imaging method and ultrasound imaging system for the fetal face

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18944304

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16/11/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18944304

Country of ref document: EP

Kind code of ref document: A1