CN114240971B - Self-adaptive segmentation method, device and storage medium based on multi-mode ultrasonic data - Google Patents
- Publication number: CN114240971B (application CN202111579423A)
- Authority: CN (China)
- Prior art keywords: distance, sound velocity, attenuation, reflection, segmentation
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/66 — Analysis of geometric attributes of image moments or centre of gravity
- G06T2207/10132 — Ultrasound image
- G06T2207/30068 — Mammography; breast
- G06T2207/30096 — Tumor; lesion
Abstract
The application provides an adaptive segmentation method, device and storage medium based on multi-modal ultrasound data. The method acquires a plurality of reflected-wave and transmitted-wave signals, reconstructs an attenuation map and a sound velocity map from the transmitted-wave signals, delimits determination regions of the attenuation map and the sound velocity map with an empirical threshold, and from those determination regions derives several contour-delimited tumor candidate regions in the reflection map. A suspicious interval is then drawn around the contour center point of each tumor candidate region; combining the endpoints of the suspicious interval with the empirical-threshold segmentation yields a reflection distance, a sound velocity distance and an attenuation distance, which are combined into a tissue distance. Feeding the tissue distance into the OSTU algorithm addresses that algorithm's weakness with multi-peak histograms and noise.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a storage medium for adaptive segmentation based on multi-modal ultrasound data.
Background
In 2020, data from the International Agency for Research on Cancer (IARC) of the World Health Organization showed 2.26 million new cases of breast cancer, surpassing lung cancer to make it the most common cancer worldwide. In China, breast cancer is the leading threat to women's health. Compared with Europe and America, the early detection rate of breast cancer in China is very low: fewer than 20% of patients are found at an early stage, versus as high as 62% in European and American countries. Early screening, far beyond its nominal meaning, is the key factor influencing patients' survival expectation and treatment outcome.
At present, molybdenum-target (mammographic) examination is the imaging standard for the breast, but it exposes the patient to X-ray radiation; the breast must be compressed during the examination, giving the patient a poor experience; and for evaluating suspicious calcifications in dense breasts its specificity is low, with a missed-diagnosis rate as high as 76%. Compared with Western women, the breasts of Eastern women are relatively dense and breast cancer tends to occur at a younger age. There is therefore an urgent need for a novel noninvasive, radiation-free early-screening means for breast cancer suited to Asian women's breasts.
Conventional ultrasound imaging can only provide an image of a single section of breast tissue; clinically, the physician must mentally assemble a three-dimensional structure of the breast tissue from real-time two-dimensional images. This is highly subjective, lacks repeatability and traceability, and easily yields inaccurate estimates of a tumor's position, shape and size.
Although nuclear magnetic resonance imaging has the advantage of sensitivity, it is time-consuming, costly and burdensome for the patient, and is not suitable as an early-screening means.
Improved ultrasound examination devices such as GE's ABUS and Siemens' ABVS can reconstruct three-dimensional images, but because they acquire only reflected-wave signals, their characterization of breast tissue is one-sided.
In addition, acquired breast-tissue images are currently mostly segmented by an empirical threshold, so the segmentation results are inaccurate. Building on the prior art, an OSTU adaptive segmentation algorithm can, after a primary segmentation, adaptively derive a secondary segmentation threshold and segment again with it, obtaining the breast-tissue image more accurately. However, the OSTU algorithm handles multi-peak histograms and noise poorly.
In summary, an adaptive segmentation method based on multi-modal ultrasound data is needed, one that adaptively adjusts the threshold to achieve accurate segmentation.
Disclosure of Invention
The embodiments of the application provide an adaptive segmentation method, device and storage medium based on multi-modal ultrasound data, aimed at the current inaccuracy in segmenting the position, shape and size of tumors in breast-tissue images. By acquiring ultrasound data of several modalities, the histological characteristics of the breast are fused comprehensively and imaging detail is improved; a tissue distance derived from the multi-modal ultrasound data assists the OSTU algorithm in generating its threshold, improving the segmentation accuracy of the OSTU algorithm on multi-peak value distributions.
In a first aspect, an embodiment of the present application provides an adaptive segmentation method based on multi-modal ultrasound data, the method including: acquiring reflected-wave and transmitted-wave signals collected by an ultrasonic transducer for an imaging target; three-dimensionally reconstructing the reflected-wave and transmitted-wave signals to obtain a reflection map, a sound velocity map and an attenuation map; setting an empirical threshold to obtain the sound velocity determination region of the sound velocity map and the attenuation determination region of the attenuation map; segmenting the reflection map based on the overlapping area of the sound velocity determination region and the attenuation determination region to obtain tumor candidate regions; obtaining the reflection distance, sound velocity distance and attenuation distance of each pixel point in a tumor candidate region; combining the reflection distance, sound velocity distance and attenuation distance to calculate the tissue distance of the pixel point; and taking the feature map representing the tissue distance as input to an OSTU model, outputting a target segmentation threshold, and segmenting the tumor candidate region with that threshold to obtain the target image.
In some of these embodiments, "obtaining the reflection distance, the sound velocity distance and the attenuation distance of each pixel point in the tumor candidate region" includes: calculating the contour center point of the tumor candidate region; presetting an image range and, taking the contour center point as the center, extracting within that range a sound velocity suspicious interval, an attenuation suspicious interval and a reflection suspicious interval from the sound velocity map, the attenuation map and the reflection map respectively; obtaining the reflection value, sound velocity value and attenuation value of each pixel point in the tumor candidate region, then deriving the reflection distance from where the reflection value lies relative to its intervals, the sound velocity distance from where the sound velocity value lies, and the attenuation distance from where the attenuation value lies.
In some of these embodiments, obtaining the reflection distance of each pixel point in the tumor candidate region comprises evaluating a formula (given in the original as an image, not reproduced here) in which r is the reflection value, D(r)_m is the reflection distance, and the omitted symbols denote the reflection suspicious interval and the tumor candidate region.
In some of these embodiments, obtaining the sound velocity distance of each pixel point in the tumor candidate region comprises evaluating a formula (an image in the original, not reproduced here) in which s is the sound velocity value, D(s)_m is the sound velocity distance, [1.49, 1.55] is the empirical sound velocity interval, and the omitted symbol denotes the sound velocity suspicious interval.
In some of these embodiments, obtaining the attenuation distance for each pixel in the tumor candidate region comprises:
wherein a is an attenuation value, [0.12,0.20] is an empirical threshold of attenuation rate, To attenuate suspicious intervals.
In some of these embodiments, "calculating the tissue distance of the pixel by combining the reflection distance, the sound velocity distance and the attenuation distance" includes: performing a Euclidean distance calculation over the reflection distance, the sound velocity distance and the attenuation distance, i.e.
D(r, s, a)_m = sqrt( D(r)_m^2 + D(s)_m^2 + D(a)_m^2 ),
where D(r, s, a)_m is the tissue distance, D(r)_m the reflection distance, D(s)_m the sound velocity distance, and D(a)_m the attenuation distance.
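Reading each per-pixel distance above as the gap between a value and its interval (zero when the value lies inside), the Euclidean tissue distance can be sketched as follows. The function names and the reflection interval bounds are illustrative, not the patent's implementation; only the sound velocity interval [1.49, 1.55] and attenuation interval [0.12, 0.20] are quoted from the text:

```python
import math

def interval_distance(value, low, high):
    """Distance from a scalar value to the closed interval [low, high]:
    zero inside the interval, otherwise the gap to the nearest endpoint
    (one plausible reading of D(r)_m, D(s)_m and D(a)_m)."""
    if value < low:
        return low - value
    if value > high:
        return value - high
    return 0.0

def tissue_distance(r, s, a, r_iv, s_iv, a_iv):
    """Euclidean combination of the three per-modality distances,
    as stated for D(r, s, a)_m."""
    dr = interval_distance(r, *r_iv)
    ds = interval_distance(s, *s_iv)
    da = interval_distance(a, *a_iv)
    return math.sqrt(dr * dr + ds * ds + da * da)

# Illustrative pixel: sound velocity 1.47 km/s falls below the empirical
# interval [1.49, 1.55]; reflection and attenuation lie inside theirs.
d = tissue_distance(0.5, 1.47, 0.15,
                    (0.3, 0.7), (1.49, 1.55), (0.12, 0.20))
```

A pixel matching all three intervals thus gets tissue distance 0, and the distance grows as any modality drifts away from its suspicious interval.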
In some of these embodiments, "segmenting the reflection map based on the overlapping region of the sound velocity determination region and the attenuation determination region to obtain a tumor candidate region" includes: taking the overlapping region of the sound velocity determination region and the attenuation determination region as the reflection determination region of the reflection map, and removing noise regions within it to obtain the tumor candidate regions.
In some embodiments, the ultrasound transducer is a hemispherical ultrasound transducer including a plurality of signal transmitting devices that transmit wave signals to different orientations of the imaging target and a plurality of signal receiving devices that receive the reflected wave signals and the transmitted wave signals from the different orientations of the imaging target.
In a second aspect, an embodiment of the present application provides an adaptive segmentation device based on multi-modal ultrasound data, including: a signal acquisition module for acquiring the reflected-wave and transmitted-wave signals collected by the ultrasonic transducer for an imaging target; an image reconstruction module for three-dimensionally reconstructing the reflected-wave and transmitted-wave signals into a reflection map, a sound velocity map and an attenuation map; a preliminary segmentation module for setting an empirical threshold to obtain the sound velocity determination region of the sound velocity map and the attenuation determination region of the attenuation map, and segmenting the reflection map based on their overlapping area to obtain tumor candidate regions; a tissue distance calculation module for obtaining the reflection distance, sound velocity distance and attenuation distance of each pixel point in a tumor candidate region and combining them into the tissue distance of the pixel point; and a secondary segmentation module for taking the feature map representing the tissue distance as input to an OSTU model, outputting a target segmentation threshold, and segmenting the tumor candidate region with that threshold to obtain the target image.
In a third aspect, an embodiment of the application provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the method of adaptive segmentation based on multi-modal ultrasound data of any of the first aspects.
In a fourth aspect, embodiments of the present application provide a computer program product comprising software code portions for performing the method of adaptive segmentation based on multi-modal ultrasound data according to any one of the first aspects when the computer program product is run on a computer.
In a fifth aspect, an embodiment of the present application provides a readable storage medium having stored therein a computer program comprising program code for controlling a process to perform a process comprising the method of adaptive segmentation based on multi-modal ultrasound data according to any one of the first aspects.
The main contributions and innovation points of the embodiment of the application are as follows:
The scheme can image a target object: for example, ultrasonic transducers transmit and receive ultrasound acting on the breast-tissue site, an image of the site is reconstructed from the received signals, and tumors in the image are segmented comprehensively based on the sound velocity map, reflection map and attenuation map, achieving high segmentation accuracy.
In this scheme, the reflected-wave and transmitted-wave signals received by the ultrasonic transducer are first reconstructed in three dimensions to obtain ultrasound data of several modalities, namely a reflection map, a sound velocity map and an attenuation map. In medical imaging, a grayscale image (reflection map) of the lesion area alone cannot fully characterize a tumor and may lead to inaccurate segmentation; this scheme therefore additionally introduces the sound velocity map and the attenuation map to assist the preliminary segmentation.
Moreover, the existing OSTU (Otsu's method) handles multi-peak histograms and noise poorly, so the scheme first performs a primary mass screening with an empirical threshold, selects an image subset based on that threshold to obtain the tissue distance, and then feeds the feature map representing the tissue distance to the OSTU algorithm, effectively improving the accuracy and effectiveness of the segmentation algorithm.
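For reference, Otsu's method selects the threshold that maximizes the between-class variance of a histogram. A minimal pure-Python sketch follows; the binning scheme and function name are illustrative, not the patent's OSTU implementation:

```python
def otsu_threshold(values, bins=256):
    """Return the cut that maximizes between-class variance (Otsu's
    method) over a flat list of feature values, e.g. tissue distances."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0          # guard against a flat input
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    sum_all = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    best, best_t = -1.0, lo
    w0, sum0 = 0, 0.0
    for i in range(bins - 1):                # candidate cut after bin i
        w0 += hist[i]
        sum0 += (lo + (i + 0.5) * width) * hist[i]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        between = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if between > best:
            best, best_t = between, lo + (i + 1) * width
    return best_t

# Two well-separated clusters: the threshold lands between them.
vals = [0.1] * 100 + [0.9] * 100
t = otsu_threshold(vals)
```

On a bimodal histogram like this, any cut between the clusters maximizes the variance, so the returned threshold cleanly separates the two groups.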
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below, to make the other features, objects and advantages of the application easier to understand.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a flow chart of the main steps of an adaptive segmentation method based on multi-modal ultrasound data according to a first embodiment of the present application.
Fig. 2 is a schematic diagram of the sound velocity determination region of the sound velocity map and the attenuation determination region of the attenuation map in the first embodiment.
Fig. 3 is a block diagram of a multi-modality ultrasound data based adaptive segmentation apparatus according to a second embodiment of the present application.
Fig. 4 is a schematic hardware structure of an electronic device according to a third embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with aspects of one or more embodiments of the present description as detailed in the accompanying claims.
It should be noted that: in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, individual steps described in this specification, in other embodiments, may be described as being split into multiple steps; while various steps described in this specification may be combined into a single step in other embodiments.
Fig. 1 is a flow chart of the main steps of an adaptive segmentation method based on multi-modal ultrasound data according to a first embodiment of the present application.
To achieve this object, as shown in fig. 1, the adaptive segmentation method based on multi-modality ultrasound data mainly includes steps S101 to S107 as follows.
Step S101, obtaining reflected wave signals and transmitted wave signals which are acquired by an ultrasonic transducer and correspond to an imaging target.
It should be noted that the imaging target may be a breast-tissue site or another body part of the subject. The part to be examined is fixed on the ultrasonic transducer as the imaging target, with water as a coupling medium filling the space between the transducer and the imaging target. The ultrasonic transducer includes signal transmitting devices and signal receiving devices; the transmitting devices emit ultrasound toward a preset imaging area of the imaging target, and the receiving devices collect the reflected-wave and transmitted-wave signals.
The purpose of acquiring the transmitted-wave signal in this step is to reconstruct the sound velocity map and the attenuation map. In the prior art, only the reflected-wave signal is acquired, a reflection map is computed from it, and the position of the target image is confirmed from that map alone. Since this scheme requires multi-modal ultrasound data, transmitted-wave signals must also be collected at the acquisition level, and a reflection map, sound velocity map and attenuation map are then obtained by three-dimensional reconstruction from the reflected-wave and transmitted-wave signals.
In one possible embodiment, the ultrasonic transducer is hemispherical and includes a plurality of signal transmitting devices and signal receiving devices: the transmitting devices emit wave signals toward different orientations of the imaging target, and the receiving devices receive the reflected-wave and transmitted-wave signals returned from those orientations.
Specifically, a large number of ultrasonic transmitting and receiving devices are arranged in the hemispherical ultrasound computed tomography (USCT) transducer, so that after the USCT device emits ultrasound toward the imaging target, reflected-wave and transmitted-wave signals can be obtained from many different directions.
Illustratively, the basic principle of acquiring a breast-tissue site with USCT is that the transducer surrounds the patient's breast, which hangs in a naturally pendant state in water serving as the couplant, and collects its reflected-wave and transmitted-wave signals. Because the breast keeps this natural pendant state during acquisition, the imaging data are clearer and more accurate. Moreover, the data are produced in a single scan, which minimizes inaccuracies caused by patient motion during scanning.
For step S101, the scheme obtains a plurality of reflected-wave and transmitted-wave signals directly from the USCT device, and reconstructs the transmitted-wave signals into a sound velocity map and an attenuation map; these maps reflect intrinsic characteristics of breast tissue and, in subsequent steps, help the reflection map separate malignant tumors from normal tissue.
And step S102, reconstructing the reflected wave signal and the transmitted wave signal in a three-dimensional way to obtain a reflection diagram, a sound velocity diagram and an attenuation diagram.
Specifically, the sound velocity map and the attenuation map may be reconstructed with the algebraic reconstruction technique (ART), and the reflection map with an improved synthetic aperture focusing technique (SAFT).
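ART solves the ray-based linear system of a transmission tomography problem row by row with Kaczmarz-style projections. The toy sketch below illustrates that iteration under this reading; the function name, ray matrix and measurement values are illustrative and not the patent's reconstruction code:

```python
def art_reconstruct(rays, measurements, n_pixels, sweeps=50, relax=1.0):
    """Kaczmarz-style ART: each ray contributes one linear equation
    sum_j a[j] * x[j] = b over per-pixel slowness/attenuation values;
    each update projects the current estimate onto that ray's equation."""
    x = [0.0] * n_pixels
    for _ in range(sweeps):
        for a, b in zip(rays, measurements):
            dot = sum(ai * xi for ai, xi in zip(a, x))
            norm2 = sum(ai * ai for ai in a)
            if norm2 == 0:
                continue
            c = relax * (b - dot) / norm2     # projection coefficient
            x = [xi + c * ai for ai, xi in zip(a, x)]
    return x

# Toy 2-pixel example: two rays with known per-pixel path lengths.
rays = [[1.0, 0.0],   # ray 1 crosses only pixel 0
        [1.0, 1.0]]   # ray 2 crosses both pixels
b = [2.0, 5.0]        # e.g. per-ray travel-time measurements
x = art_reconstruct(rays, b, 2)
```

For this consistent system the iteration converges to the exact solution x = [2, 3], one value per pixel.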
Step S103, setting an empirical threshold to acquire a sound velocity determination section of the sound velocity map and an attenuation determination section of the attenuation map.
Specifically, in the primary segmentation the image may be pre-segmented with an empirical threshold, derived statistically from historical segmentation threshold points; tissue portions of the image above the empirical threshold are treated as malignant regions in the primary segmentation.
Illustratively, this scheme assumes a sound velocity threshold of 1.52±0.03 km/s and an attenuation threshold of 0.16±0.04 dB/cm. As seen in Fig. 2, the lowest sound velocity value of 1.49 km/s yields the region inside the small circle, and the highest value of 1.55 km/s the region inside the large circle; similarly, the lowest and highest attenuation values divide the attenuation map into a small-circle and a large-circle region. The small-circle region of the sound velocity map is its sound velocity determination region, and the small-circle region of the attenuation map is its attenuation determination region.
The determination regions obtained by this division have the following meaning: the lowest value of the empirical threshold identifies a region that can be asserted malignant in the preliminary segmentation, so the image inside the region represents malignant tumor, while the image outside it may be malignant tumor or merely normal tissue. The same interpretation applies to both the sound velocity determination region and the attenuation determination region.
It should be noted that this embodiment segments with the lowest value of the empirical threshold precisely to obtain a region that is certainly malignant. In other words, the empirical threshold is a range of values from which, depending on measurement error or the patient's situation, the actual target segmentation threshold for exact segmentation is selected. Segmenting with the lowest value therefore guarantees that the resulting image lies within the image the target segmentation threshold would produce, so the small-circle region obtained by this division is certain to be malignant tumor.
Obviously, this step yields only a rough segmentation: some regions that may be malignant are discarded as normal tissue, which is clinically an unacceptable final result. The scheme therefore uses the empirical threshold only as a basis for pre-segmentation and corrects the threshold in subsequent steps, so as to better locate the boundary between malignant tumor and normal tissue.
And step S104, dividing the reflection map based on the overlapping area of the sound velocity determination area and the attenuation determination area to obtain a tumor candidate area.
In this step, since the three images are reconstructed from transmitted-wave and reflected-wave signals acquired at the same time, no registration is needed and they are naturally consistent in physical space. Based on this physical-space consistency of the reflection map, sound velocity map and attenuation map, the overlapping area of the sound velocity determination region and the attenuation determination region is taken as the reflection determination region of the reflection map, and that region is segmented.
Referring again to Fig. 2, the reflection determination region is shown dark-shaded as the overlapping portion where the two small circles intersect; in this embodiment it may be used directly as the tumor candidate region and segmented.
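At the pixel level, the determination regions and their overlap reduce to boolean masks combined with a logical AND. A minimal sketch using the quoted lowest empirical values follows; the tiny 2x2 "maps", the use of >= for "above the threshold", and the function names are illustrative assumptions:

```python
SOUND_SPEED_MIN = 1.49   # km/s, lowest value of the empirical interval
ATTENUATION_MIN = 0.12   # dB/cm, lowest value of the empirical interval

def determination_mask(image, threshold):
    """Pixels at or above the lowest empirical value form the
    determination region (the 'small circle' of Fig. 2)."""
    return [[v >= threshold for v in row] for row in image]

def overlap(mask_a, mask_b):
    """Reflection determination region = pixelwise AND of the sound
    velocity and attenuation determination regions."""
    return [[a and b for a, b in zip(ra, rb)]
            for ra, rb in zip(mask_a, mask_b)]

speed = [[1.48, 1.50], [1.52, 1.47]]   # illustrative sound velocity map
atten = [[0.10, 0.15], [0.13, 0.16]]   # illustrative attenuation map
region = overlap(determination_mask(speed, SOUND_SPEED_MIN),
                 determination_mask(atten, ATTENUATION_MIN))
```

Only pixels that clear both thresholds survive into the reflection determination region, mirroring the intersection of the two small circles.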
The advantage of this step is as follows: different acoustic signals reflect different object characteristics inside breast tissue, whereas traditional image segmentation algorithms rely only on features of the reflection image and lack the sound velocity and attenuation information that represents the tissue interior. By integrating a multi-modal segmentation strategy, the scheme achieves higher segmentation accuracy and clinical reference value. Malignant tumors often have irregular edges, which complicates segmentation; segmentation based on empirical thresholds of the sound velocity and attenuation maps takes the intrinsic characteristics of breast tissue into account and yields smoother boundaries than reflection-only segmentation.
In another embodiment of the present application, the tumor candidate region is obtained by removing the noise region from the reflection determination section.
Specifically, the image of the reflection determination section is binarized, and the contours of n primary candidate areas are obtained through a contour extraction algorithm. The size of each primary candidate area is compared with a preset threshold: if it is larger than the preset threshold, the primary candidate area is a tumor candidate region; if it is smaller, it is a noise region. The preset threshold is set larger than the area of typical noise. Since each tumor candidate region is processed further in this scheme, removing the noise regions in this embodiment reduces the subsequent computation cost.
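As an illustration of the noise-removal step, the sketch below labels 4-connected components of the binarized reflection determination section and keeps only those larger than a preset area threshold. It is a plain-NumPy sketch under our own naming; the patent does not specify which contour algorithm is used:

```python
import numpy as np
from collections import deque

def remove_noise_regions(binary_mask, min_area):
    """Keep only connected components whose area exceeds min_area.

    Mirrors the described noise removal: primary candidate areas whose
    size is below the preset threshold are treated as noise and dropped.
    min_area is an assumed parameter, set larger than typical noise blobs.
    """
    mask = np.asarray(binary_mask, dtype=bool)
    visited = np.zeros_like(mask)
    cleaned = np.zeros_like(mask)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                # Breadth-first search over one 4-connected component.
                queue, comp = deque([(sy, sx)]), [(sy, sx)]
                visited[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                            comp.append((ny, nx))
                if len(comp) > min_area:
                    for y, x in comp:
                        cleaned[y, x] = True
    return cleaned
```

Each surviving component corresponds to one independently contoured tumor candidate region.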
For the above steps S103 to S104, the present embodiment determines the reflection determination section of the reflection map based on the sound velocity determination section and the attenuation determination section, and obtains the tumor candidate region segmented by the empirical threshold by removing the noise area in the reflection determination section. Compared with the prior art, the technical point of the scheme is as follows: the acquisition of the reflection determination section is not obtained by direct segmentation from the reflection experience threshold value, but is segmented by combining sound velocity and attenuation information which can represent internal characteristics of tissues, so that the segmentation result can also distinguish normal tissues from malignant tumors.
Step S105, obtaining a reflection distance, a sound velocity distance, and an attenuation distance of each pixel point in the tumor candidate region.
And step S106, combining the reflection distance, the sound velocity distance and the attenuation distance to calculate the tissue distance of the pixel point.
Specifically, step S105 includes:
calculating the contour center point of the tumor candidate region;
Presetting an image range, and respectively taking the outline center point as a center in the sound velocity diagram, the attenuation diagram and the reflection diagram to obtain a sound velocity suspicious interval, an attenuation suspicious interval and a reflection suspicious interval in the image range;
obtaining a reflection value, a sound velocity value and an attenuation value of each pixel point in the tumor candidate region, obtaining a reflection distance according to a section where the reflection value is located, obtaining a sound velocity distance according to a section where the sound velocity value is located, and obtaining an attenuation distance according to a section where the attenuation value is located.
The contour center point is a point representing the center of the contour coordinate points of the tumor candidate region. Specifically, after the contour coordinate points are obtained, their coordinates are summed and averaged, and the resulting coordinate value represents the position of the contour center point.
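The averaging described above can be sketched as follows (illustrative only; the contour points would come from the contour extraction step):

```python
import numpy as np

def contour_center(contour_points):
    """Contour center point of a candidate region: the mean of its
    contour coordinate points, as described in the text."""
    pts = np.asarray(contour_points, dtype=float)
    return pts.mean(axis=0)
```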
A sound velocity interval, an attenuation interval and a reflection interval within the empirical range are then selected as the suspicious intervals of each tumor candidate area, with the corresponding contour center point of that area as the center.
It should be noted that the empirical range, for example 8×8 cm, is a range obtained from historical experience that can both contain the tumor volume and effectively avoid the wasted computation introduced by non-tumor regions; it is used for a preliminary, but as small as possible, delimitation of the suspicious region of each tumor candidate area. Of course, the empirical range is not limited to a rectangle; it can also be a circle or another shape, which is not repeated here.
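Cropping the empirical-range window around a contour center can be sketched as below; `half_size` is an assumed pixel-space conversion of the, e.g., 8×8 cm empirical range:

```python
import numpy as np

def suspicious_window(image, center, half_size):
    """Crop the empirical-range window around a contour center point,
    clipped to the image bounds.

    half_size (in pixels) is an assumed conversion of the physical
    empirical range; center is (row, column).
    """
    cy, cx = int(round(center[0])), int(round(center[1]))
    y0, y1 = max(cy - half_size, 0), min(cy + half_size, image.shape[0])
    x0, x1 = max(cx - half_size, 0), min(cx + half_size, image.shape[1])
    return image[y0:y1, x0:x1]
```

Applying the same window to the sound velocity map, attenuation map and reflection map yields the three suspicious intervals of one candidate region.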
In the present application, the reflection value, the sound velocity value, and the attenuation value are obtained from the reflected wave signal and the transmitted wave signal.
The obtaining of the reflection distance of each pixel point in the tumor candidate area comprises the following steps:
wherein r is the reflection value, D(r)m is the reflection distance, [·] is the reflection suspicious interval, and [·] is the tumor candidate region.
The obtaining of the sound velocity distance of each pixel point in the tumor candidate region comprises:
where s is the sound velocity value, D(s)m is the sound velocity distance, [1.49,1.55] is the empirical threshold interval of sound velocity, and [·] is the sound velocity suspicious interval.
The obtaining of the attenuation distance of each pixel point in the tumor candidate area comprises the following steps:
wherein a is the attenuation value, [0.12,0.20] is the empirical threshold interval of the attenuation rate, and [·] is the attenuation suspicious interval.
The above formulas are described in detail below:
For any tumor candidate region m, let its center position be O_m(X_m, Y_m). The single-mode distance within a determination interval is defined as 0, and within a suspicious interval it lies in (0, 1); the tissue distance is the Euclidean distance combining the three single-mode distances. The reflection distance, the sound velocity distance and the attenuation distance of each candidate region can thus be expressed piecewise.
Therefore, the reflection distance, the sound velocity distance and the attenuation distance of each pixel point are determined from the reflection value, the sound velocity value and the attenuation value of that pixel, and the tissue distance is obtained by integrating the distances of the three modes.
Specifically, Euclidean distance calculation is performed on the reflection distance, the sound velocity distance and the attenuation distance to obtain the tissue distance; the tissue distance is expressed as:
D(r,s,a)m = sqrt( D(r)m^2 + D(s)m^2 + D(a)m^2 )
where D(r,s,a)m is the tissue distance, D(r)m is the reflection distance, D(s)m is the sound velocity distance, and D(a)m is the attenuation distance.
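The distance computation can be sketched as follows. The patent's exact piecewise formulas are rendered as images and are not reproduced in this text, so the linear ramp between the edge of the determination interval and the edge of the suspicious interval is our assumption; only the boundary behavior (0 inside the determination interval, values in (0, 1) inside the suspicious interval) is stated in the text:

```python
import math

def single_mode_distance(value, determined, suspicious):
    """Piecewise single-mode distance (assumed form).

    Defined as 0 inside the determination interval and as a value in
    (0, 1) inside the suspicious interval; this sketch assumes a linear
    ramp between the two interval edges and 1 outside the suspicious
    interval, since the exact piecewise expressions are not reproduced.
    """
    d_lo, d_hi = determined
    s_lo, s_hi = suspicious
    if d_lo <= value <= d_hi:
        return 0.0
    if s_lo <= value < d_lo:
        return (d_lo - value) / (d_lo - s_lo)
    if d_hi < value <= s_hi:
        return (value - d_hi) / (s_hi - d_hi)
    return 1.0

def tissue_distance(d_r, d_s, d_a):
    """Euclidean combination of the three single-mode distances:
    D(r, s, a)m = sqrt(D(r)m^2 + D(s)m^2 + D(a)m^2)."""
    return math.sqrt(d_r ** 2 + d_s ** 2 + d_a ** 2)
```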
With this scheme, the field of view for the tissue distance is set reasonably, so the OSTU algorithm does not fail in the presence of multiple peaks, and at the same time interference from noise is avoided to the greatest extent.
And step S107, taking the feature map representing the tissue distance as the input of an OSTU model, outputting a target segmentation threshold, and segmenting the tumor candidate region based on the target segmentation threshold to obtain a target image.
Specifically, the feature map characterizing the tissue distance refers to a feature-image subset centered on the contour center point and bounded by the empirical range. Within this subset, the tissue distance is taken as a new gray value: the smaller the tissue distance, the smaller the gray value; the larger the tissue distance, the larger the gray value. The feature-image subset effectively narrows the field of view (its area is larger than a typical tumor yet much smaller than the whole image), avoiding the multi-peak and inaccurate-segmentation problems that arise when OSTU is computed over the entire image.
Specifically, OSTU (Otsu's method) is a widely used adaptive threshold segmentation algorithm. It finds the maximum of the between-class variance of foreground and background from the intrinsic feature information of the image and determines the segmentation threshold from that maximum. In the medical imaging field, the gray-level map (reflection map) of a focal region cannot fully represent the characteristics of a tumor, so inaccurate segmentation may occur.
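A minimal NumPy sketch of Otsu's between-class-variance search is given below (function name and binning are ours; production code would typically call an existing implementation such as OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag):

```python
import numpy as np

def otsu_threshold(gray, nbins=256):
    """Otsu's adaptive threshold: pick the gray level that maximizes
    the between-class variance of foreground and background."""
    hist, bin_edges = np.histogram(gray, bins=nbins)
    hist = hist.astype(float)
    total = hist.sum()
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0
    w0 = np.cumsum(hist)                 # background (class 0) weight
    w1 = total - w0                      # foreground (class 1) weight
    mu0 = np.cumsum(hist * bin_centers)  # cumulative weighted sum
    mu_total = mu0[-1]
    valid = (w0 > 0) & (w1 > 0)          # splits with both classes non-empty
    mean0 = np.where(valid, mu0 / np.where(w0 == 0, 1, w0), 0.0)
    mean1 = np.where(valid, (mu_total - mu0) / np.where(w1 == 0, 1, w1), 0.0)
    between = w0 * w1 * (mean0 - mean1) ** 2
    between[~valid] = 0.0
    return bin_centers[np.argmax(between)]
```

Applied to the tissue-distance feature map, the returned threshold separates the small-distance (tumor-like) pixels from the background.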
The sound velocity map is highly correlated with tissue density, which in turn is correlated with mass properties. Therefore, the technical advantages of the sound velocity image on tumor segmentation are maintained, and the effects of the reflection image and the attenuation image on image segmentation are combined, so that the accuracy of the obtained segmentation result is higher.
In summary, the embodiment of the application provides an adaptive segmentation method based on multi-modal ultrasound data, which aims to accurately segment an image of the breast tissue of a patient, so that the segmented image not only reveals the area of a tumor but also provides guidance for preoperative evaluation and surgical planning, with more accurate tumor morphology and delineation of the boundary between tumor and normal tissue.
In this scheme, a plurality of reflected wave signals and transmitted wave signals are collected through USCT; an attenuation map and a sound velocity map can be obtained from the transmitted wave signals, the determination intervals of the attenuation map and the sound velocity map are delimited through empirical thresholds, and a plurality of tumor candidate regions with independent contours are determined in the reflection map based on these determination intervals.
A suspicious interval is then delimited around the contour center point of each tumor candidate region; the reflection distance, the sound velocity distance and the attenuation distance are obtained by combining the endpoints of the suspicious intervals with the empirical thresholds; the tissue distance is obtained by combining the reflection distance, the sound velocity distance and the attenuation distance; and by inputting the tissue distance into the OSTU model, the algorithm's weakness with multi-peak histograms and noise is overcome.
Fig. 3 is a block diagram of a configuration of an adaptive segmentation apparatus based on multi-modality ultrasound data according to a second embodiment of the present application.
As shown in fig. 3, an embodiment of the present application proposes an adaptive segmentation apparatus based on multi-modal ultrasound data, including:
The signal acquisition module 301 is configured to acquire a reflected wave signal and a transmitted wave signal acquired by the ultrasonic transducer and corresponding to an imaging target.
The image reconstruction module 302 is configured to reconstruct the reflected wave signal and the transmitted wave signal in three dimensions to obtain a reflection map, a sound velocity map and an attenuation map.
A preliminary segmentation module 303, configured to set an empirical threshold to obtain a sound velocity determination interval of the sound velocity map and an attenuation determination interval of the attenuation map; dividing the reflection map based on the overlapping area of the sound velocity determination area and the attenuation determination area to obtain a tumor candidate area;
A tissue distance calculation module 304, configured to obtain a reflection distance, a sound velocity distance, and an attenuation distance of each pixel point in the tumor candidate region; and combining the reflection distance, the sound velocity distance and the attenuation distance to calculate the tissue distance of the pixel point.
The secondary segmentation module 305 is configured to take the feature map characterizing the tissue distance as an input of an OSTU model, output a target segmentation threshold, and segment the tumor candidate region based on the target segmentation threshold to obtain a target image.
Fig. 4 is a schematic hardware structure of an electronic device according to a third embodiment of the present application.
As shown in fig. 4, the electronic device according to one embodiment of the present application includes a memory 404 and a processor 402, where the memory 404 stores a computer program, and the processor 402 is configured to run the computer program to perform the steps in any of the method embodiments described above.
In particular, the processor 402 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present application.
The memory 404 may include, among other things, mass storage for data or instructions. By way of example, and not limitation, the memory 404 may comprise a hard disk drive (HDD), a floppy disk drive, a solid state drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 404 may include removable or non-removable (or fixed) media, where appropriate. The memory 404 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 404 is a non-volatile memory. In particular embodiments, the memory 404 includes read-only memory (ROM) and random access memory (RAM). Where appropriate, the ROM may be a mask-programmed ROM, a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), an electrically alterable ROM (EAROM), or flash memory (FLASH), or a combination of two or more of these. The RAM may be a static random access memory (SRAM) or a dynamic random access memory (DRAM), where the DRAM may be a fast page mode DRAM (FPM DRAM), an extended data out DRAM (EDO DRAM), a synchronous DRAM (SDRAM), or the like, where appropriate.
Memory 404 may be used to store or cache various data files that need to be processed and/or used for communication, as well as possible computer program instructions for execution by processor 402.
The processor 402 implements any of the adaptive segmentation methods based on multi-modal ultrasound data in the above embodiments by reading and executing computer program instructions stored in the memory 404.
Optionally, the electronic apparatus may further include a transmission device 406 and an input/output device 408, where the transmission device 406 is connected to the processor 402 and the input/output device 408 is connected to the processor 402.
The transmission device 406 may be used to receive or transmit data via a network. Specific examples of the network described above may include a wired or wireless network provided by a communication provider of the electronic device. In one example, the transmission device includes a network adapter (Network Interface Controller, simply referred to as a NIC) that can connect to other network devices through the base station to communicate with the internet. In one example, the transmission device 406 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
The input-output device 408 is used to input or output information. In this embodiment, the input information may be a three-dimensional image or the like, and the output information may be a division result or the like.
Alternatively, in the present embodiment, the above-mentioned processor 402 may be configured to execute the following steps by a computer program:
S101, obtaining reflected wave signals and transmitted wave signals which are acquired by an ultrasonic transducer and correspond to an imaging target.
S102, reconstructing the reflected wave signal and the transmitted wave signal in a three-dimensional mode to obtain a reflection diagram, a sound velocity diagram and an attenuation diagram.
S103, setting an empirical threshold to acquire a sound velocity determination section of the sound velocity map and an attenuation determination section of the attenuation map.
And S104, dividing the reflection map based on the overlapping area of the sound velocity determination area and the attenuation determination area to obtain a tumor candidate area.
S105, obtaining the reflection distance, the sound velocity distance and the attenuation distance of each pixel point in the tumor candidate area.
And S106, combining the reflection distance, the sound velocity distance and the attenuation distance to calculate the tissue distance of the pixel point.
And S107, taking the feature map representing the tissue distance as input of an OSTU model, outputting a target segmentation threshold, and segmenting the tumor candidate region based on the target segmentation threshold to obtain a target image.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not repeated herein.
In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects of the invention may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Embodiments of the invention may be implemented by computer software executable by a data processor of a mobile device, such as in a processor entity, or by hardware, or by a combination of software and hardware. Computer software or programs (also referred to as program products) including software routines, applets, and/or macros can be stored in any apparatus-readable data storage medium and they include program instructions for performing particular tasks. The computer program product may include one or more computer-executable components configured to perform embodiments when the program is run. The one or more computer-executable components may be at least one software code or a portion thereof. In addition, in this regard, it should be noted that any blocks of the logic flows as illustrated may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on a physical medium such as a memory chip or memory block implemented within a processor, a magnetic medium such as a hard disk or floppy disk, and an optical medium such as, for example, a DVD and its data variants, a CD, etc. The physical medium is a non-transitory medium.
It should be understood by those skilled in the art that the technical features of the above embodiments may be combined in any manner, and for brevity, all of the possible combinations of the technical features of the above embodiments are not described, however, they should be considered as being within the scope of the description provided herein, as long as there is no contradiction between the combinations of the technical features.
The foregoing examples illustrate only a few embodiments of the application, which are described in greater detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of the application should be assessed as that of the appended claims.
Claims (8)
1. The self-adaptive segmentation method based on the multi-mode ultrasonic data is characterized by comprising the following steps of:
Acquiring reflected wave signals and transmitted wave signals which are acquired by an ultrasonic transducer and correspond to an imaging target;
Three-dimensionally reconstructing the reflected wave signal and the transmitted wave signal to obtain a reflection diagram, a sound velocity diagram and an attenuation diagram;
Setting an empirical threshold to obtain a sound velocity determination section of the sound velocity map and an attenuation determination section of the attenuation map, wherein the sound velocity determination section refers to: the malignant tumor region determinable, in the preliminary segmentation, by the minimum value of the empirical threshold; and the attenuation determination section refers to: the malignant tumor region determinable, in the preliminary segmentation, by the minimum value of the empirical threshold;
Taking the overlapping area of the sound velocity determination section and the attenuation determination section as a reflection determination section of the reflection map, and removing a noise area in the reflection determination section to obtain a tumor candidate area; acquiring a reflection distance, a sound velocity distance and an attenuation distance of each pixel point in the tumor candidate region;
Performing Euclidean distance calculation according to the reflection distance, the sound velocity distance and the attenuation distance to obtain a tissue distance; the tissue distance is expressed as:
D(r,s,a)m = sqrt( D(r)m^2 + D(s)m^2 + D(a)m^2 )
wherein D(r,s,a)m is the tissue distance, D(r)m is the reflection distance, D(s)m is the sound velocity distance, and D(a)m is the attenuation distance; and taking the feature map representing the tissue distance as input of an OSTU model, outputting a target segmentation threshold, and segmenting the tumor candidate region based on the target segmentation threshold to obtain a target image.
2. The adaptive segmentation method based on multi-modal ultrasound data according to claim 1, wherein "acquiring the reflection distance, the sound velocity distance, and the attenuation distance of each pixel point in the tumor candidate region" includes:
calculating the contour center point of the tumor candidate region;
Presetting an image range, and respectively taking the outline center point as a center in the sound velocity diagram, the attenuation diagram and the reflection diagram to obtain a sound velocity suspicious interval, an attenuation suspicious interval and a reflection suspicious interval in the image range;
obtaining a reflection value, a sound velocity value and an attenuation value of each pixel point in the tumor candidate region, obtaining a reflection distance according to a section where the reflection value is located, obtaining a sound velocity distance according to a section where the sound velocity value is located, and obtaining an attenuation distance according to a section where the attenuation value is located.
3. The method of claim 2, wherein obtaining the reflection distance for each pixel in the tumor candidate region comprises:
wherein r is the reflection value, D(r)m is the reflection distance, [·] is the reflection suspicious interval, and [·] is the tumor candidate region.
4. The method of claim 1, wherein obtaining a sound velocity distance for each pixel point in the tumor candidate region comprises:
where s is the sound velocity value, D(s)m is the sound velocity distance, [1.49,1.55] is the empirical threshold interval of sound velocity, and [·] is the sound velocity suspicious interval.
5. The method of claim 2, wherein obtaining the attenuation distance for each pixel point in the tumor candidate region comprises:
wherein a is the attenuation value, [0.12,0.20] is the empirical threshold interval of the attenuation rate, and [·] is the attenuation suspicious interval.
6. The adaptive segmentation method based on multi-mode ultrasound data according to claim 1, wherein the ultrasound transducer is a hemispherical ultrasound transducer, the hemispherical ultrasound transducer includes a plurality of signal transmitting devices and a plurality of signal receiving devices, the plurality of signal transmitting devices transmit transmission wave signals to different orientations of the imaging target, and the plurality of signal receiving devices receive the reflection wave signals and the transmission wave signals transmitted from the different orientations of the imaging target.
7. An adaptive segmentation apparatus based on multi-modal ultrasound data, comprising:
the signal acquisition module is used for acquiring reflected wave signals and transmitted wave signals which are acquired by the ultrasonic transducer and correspond to the imaging target;
The image reconstruction module is used for reconstructing the reflected wave signal and the transmitted wave signal in a three-dimensional way to obtain a reflection diagram, a sound velocity diagram and an attenuation diagram;
The primary segmentation module is used for setting an empirical threshold to obtain a sound velocity determination interval of the sound velocity map and an attenuation determination interval of the attenuation map, wherein the sound velocity determination interval refers to: the malignant tumor region determinable, in the preliminary segmentation, by the minimum value of the empirical threshold; and the attenuation determination interval refers to: the malignant tumor region determinable, in the preliminary segmentation, by the minimum value of the empirical threshold;
taking the overlapping area of the sound velocity determination section and the attenuation determination section as a reflection determination section of the reflection map, and removing a noise area in the reflection determination section to obtain a tumor candidate area;
the tissue distance calculation module is used for obtaining the reflection distance, the sound velocity distance and the attenuation distance of each pixel point in the tumor candidate area;
Performing Euclidean distance calculation according to the reflection distance, the sound velocity distance and the attenuation distance to obtain a tissue distance; the tissue distance is expressed as:
D(r,s,a)m = sqrt( D(r)m^2 + D(s)m^2 + D(a)m^2 )
wherein D(r,s,a)m is the tissue distance, D(r)m is the reflection distance, D(s)m is the sound velocity distance, and D(a)m is the attenuation distance;
And the secondary segmentation module is used for taking the characteristic diagram representing the tissue distance as the input of an OSTU model, outputting a target segmentation threshold value, and segmenting the tumor candidate region based on the target segmentation threshold value to obtain a target image.
8. A readable storage medium, characterized in that the readable storage medium stores a computer program comprising program code for controlling a process to perform the adaptive segmentation method based on multi-modal ultrasound data according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111579423.XA CN114240971B (en) | 2021-12-22 | 2021-12-22 | Self-adaptive segmentation method, device and storage medium based on multi-mode ultrasonic data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114240971A CN114240971A (en) | 2022-03-25 |
CN114240971B true CN114240971B (en) | 2024-09-06 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110430819A (en) * | 2016-12-02 | 2019-11-08 | 戴尔菲纳斯医疗科技公司 | Waveform enhancing reflection and margo characterization for ultrasonic tomography |
CN113598825A (en) * | 2021-09-16 | 2021-11-05 | 浙江衡玖医疗器械有限责任公司 | Breast positioning imaging method for ultrasonic imaging system and application thereof |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10143443B2 (en) * | 2014-05-05 | 2018-12-04 | Delphinus Medical Technologies, Inc. | Method for representing tissue stiffness |
US10285667B2 (en) * | 2014-08-05 | 2019-05-14 | Delphinus Medical Technologies, Inc. | Method for generating an enhanced image of a volume of tissue |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||