CN114140332A - Deep learning-based breast ultrasound enhancement method

Deep learning-based breast ultrasound enhancement method

Info

Publication number
CN114140332A
Authority
CN
China
Prior art keywords
layer
image
gland
instrument
mammary gland
Prior art date
Legal status
Pending
Application number
CN202110967233.9A
Other languages
Chinese (zh)
Inventor
牛旗 (Niu Qi)
Current Assignee
Fuge Technology Tianjin Co ltd
Original Assignee
Fuge Technology Tianjin Co ltd
Priority date
Filing date
Publication date
Application filed by Fuge Technology Tianjin Co ltd filed Critical Fuge Technology Tianjin Co ltd
Priority to CN202110967233.9A priority Critical patent/CN114140332A/en
Publication of CN114140332A publication Critical patent/CN114140332A/en
Pending legal-status Critical Current

Classifications

    • G06T5/73
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/11 Region-based segmentation
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10132 Ultrasound image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20132 Image cropping
    • G06T2207/30068 Mammography; Breast

Abstract

The invention discloses a deep learning-based breast ultrasound enhancement method, belonging to the field of image enhancement. The method works by eliminating the influence of the fat layer and the muscle layer in breast ultrasound video and improving the quality of breast ultrasound video acquired with a portable instrument. First, the low-quality breast ultrasound video acquired by the portable instrument is split frame by frame into breast ultrasound images, and the gland region is segmented with UNet, which reduces the false-positive rate caused by interference from the fat and muscle layers; the gland region of the blurred breast ultrasound image is then enhanced with a cycle-consistent generative adversarial network (CycleGAN). This improves the image quality of the gland region in portable-device breast ultrasound video and, in turn, the image quality and detection accuracy during target detection. The invention is applicable to medical and related fields, is used to enhance the images to be analyzed, and facilitates the adoption of portable ultrasound devices.

Description

Deep learning-based breast ultrasound enhancement method
Technical Field
The invention relates to a deep learning-based breast ultrasound enhancement method and belongs to the technical field of image enhancement.
Background
In the traditional workflow, a patient whose initial examination shows a breast abnormality must go to a hospital, where a physician acquires a breast ultrasound video with a high-end ultrasound instrument and judges the nature of any breast mass from that video. To improve convenience for both doctors and patients, portable ultrasound devices have been widely promoted in some hospitals: patients can acquire their own breast ultrasound videos without leaving home, upload them to an artificial-intelligence medical platform, and have the portable-instrument videos analyzed automatically by a target detection method. A portable-instrument breast ultrasound video contains three tissues (the fat layer, the gland layer, and the muscle layer), and tumors occur only in the gland layer, so interference from the fat and muscle layers easily leads to a high false-positive rate; in addition, the image quality of the gland region in portable-instrument breast ultrasound video is generally lower than that of high-end-instrument breast ultrasound video. These problems restrict the adoption of portable ultrasound devices.
Disclosure of Invention
The invention aims to provide a deep learning-based breast ultrasound enhancement method that addresses the false positives and low accuracy encountered when lesion regions in portable-instrument breast ultrasound video are detected automatically. The method is applicable to medical and related fields, is used to enhance the images to be analyzed, and facilitates the adoption of portable ultrasound devices.
The purpose of the invention is realized by the following technical means:
The disclosed deep learning-based breast ultrasound enhancement method works by eliminating the influence of the fat layer and the muscle layer in breast ultrasound video and improving the quality of portable-instrument breast ultrasound video. First, the low-quality breast ultrasound video acquired by the portable instrument is split frame by frame into breast ultrasound images, and the gland region is segmented with UNet. Through data augmentation, UNet makes effective use of a limited number of labeled samples, can be trained end to end with very little data, and runs quickly. The gland region of the blurred breast ultrasound image is then enhanced with a cycle-consistent generative adversarial network (CycleGAN). The key property of CycleGAN is that it does not require paired training data: given only images from two different domains, it learns the mapping between them, so the image quality of the gland region in portable-device breast ultrasound video can be raised toward the imaging level of high-end devices, further improving the detection accuracy of portable ultrasound equipment. A minimal end-to-end sketch of this pipeline is given below.
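The following is a minimal Python sketch of the pipeline described above, assuming a trained segmentation network `unet` and a trained portable-to-high-end generator `enhancer` are already available; all function and variable names are illustrative and not taken from the patent. Each video frame is segmented, the gland region is cropped, and the crop is passed through the generator (resizing or padding of frames to the input size expected by the networks is omitted for brevity).

```python
import cv2
import numpy as np
import torch

def enhance_video(video_path, unet, enhancer, gland_value=2, device="cpu"):
    """Segment each frame, crop the gland region (class value 2), and enhance the crop."""
    enhanced_crops = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # HWC uint8 -> NCHW float in [0, 1]
        x = torch.from_numpy(frame).permute(2, 0, 1).float().unsqueeze(0).to(device) / 255.0
        with torch.no_grad():
            class_map = unet(x).argmax(dim=1).squeeze(0).cpu().numpy()
        ys, xs = np.where(class_map == gland_value)
        if ys.size == 0:
            continue  # no gland tissue found in this frame
        crop = x[:, :, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        with torch.no_grad():
            enhanced_crops.append(enhancer(crop).squeeze(0).cpu())
    cap.release()
    return enhanced_crops
```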
The deep learning-based breast ultrasound enhancement method comprises the following steps:
Step one: segment the gland region in the portable-instrument breast ultrasound video.
Step 1.1: a data set is constructed. Firstly, intercepting a mammary gland ultrasonic video collected from a portable instrument frame by frame to obtain a mammary gland ultrasonic image of the portable instrument; secondly, according to the labeling information of the breast ultrasound image of the portable instrument provided by the doctor (the labeling information gives coordinate data of three tissues of fat, gland and muscle on the whole breast ultrasound image), a segmentation map of the breast ultrasound image of the portable instrument is obtained by using MATLAB, and on the segmentation map, the pixel value of a background area is 0, the pixel value of a fat area is 1, the pixel value of a gland area is 2, and the pixel value of a muscle area is 3.
Step 1.2: construct the UNet used for segmentation. First, build the encoding path: the convolution kernel size is 3 × 3, the first convolutional stage extracts 16 features, and the input image has 3 channels; after every two consecutive convolutions, the features pass through a rectified linear unit (ReLU) and a max-pooling layer with stride 2 for downsampling, the number of feature channels doubles at each downsampling step, and this convolution-convolution-pooling structure is repeated 5 times. Then build the decoding path: upsampling uses a 2 × 2 convolution kernel and is followed by two 3 × 3 convolutional layers, each followed by a ReLU. Finally, a 1 × 1 convolutional layer maps the feature vectors to the corresponding classes. A PyTorch sketch of this architecture follows.
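The sketch below is one way to realize the UNet just described in PyTorch: 3 × 3 convolutions, 16 features in the first stage, 3 input channels, ReLU activations, stride-2 max pooling with channel doubling over 5 encoder stages, 2 × 2 transposed-convolution upsampling followed by two 3 × 3 convolutions with ReLU at each decoder stage, and a final 1 × 1 convolution onto the four classes (background, fat, gland, muscle). The skip connections and the exact layer ordering are standard UNet choices assumed where the patent text is silent.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # two 3x3 convolutions, each followed by a rectified linear unit
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_channels=3, base_features=16, num_classes=4, depth=5):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.encoders = nn.ModuleList()
        ch, feats = in_channels, base_features
        for _ in range(depth):                       # conv-conv-pool repeated 5 times
            self.encoders.append(double_conv(ch, feats))
            ch, feats = feats, feats * 2             # feature channels double each stage
        self.bottleneck = double_conv(ch, feats)
        self.upconvs, self.decoders = nn.ModuleList(), nn.ModuleList()
        for _ in range(depth):
            self.upconvs.append(nn.ConvTranspose2d(feats, ch, kernel_size=2, stride=2))
            self.decoders.append(double_conv(feats, ch))   # feats = ch (skip) + ch (upsampled)
            feats, ch = ch, ch // 2
        self.head = nn.Conv2d(feats, num_classes, kernel_size=1)  # map features to classes

    def forward(self, x):
        # input height and width should be divisible by 2**depth (32 here)
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.upconvs, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([skip, x], dim=1))
        return self.head(x)
```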
Step 1.3: and segmenting the portable instrument mammary gland ultrasonic image by using the trained UNet, and cutting a rectangular boundary frame of the gland region according to the segmentation result to obtain a portable instrument gland layer image.
Step two: and (4) segmenting a gland region in the mammary gland ultrasonic video of the high-end instrument.
Step 2.1: a data set is constructed. Firstly, intercepting a breast ultrasound video acquired from a high-end instrument frame by frame to obtain a breast ultrasound image of the high-end instrument; secondly, according to the labeling information of the breast ultrasound image of the high-end instrument provided by the doctor (the labeling information gives coordinate data of three tissues of fat, gland and muscle on the whole breast ultrasound image), a segmentation map of the breast ultrasound image of the high-end instrument is obtained by using MATLAB, wherein on the segmentation map, the pixel value of a background region is 0, the pixel value of a fat region is 1, the pixel value of a gland region is 2, and the pixel value of a muscle region is 3.
Step 2.2: construct the UNet used for segmentation, with the same architecture as in step 1.2. First, build the encoding path: the convolution kernel size is 3 × 3, the first convolutional stage extracts 16 features, and the input image has 3 channels; after every two consecutive convolutions, the features pass through a rectified linear unit and a max-pooling layer with stride 2 for downsampling, the number of feature channels doubles at each downsampling step, and this convolution-convolution-pooling structure is repeated 5 times. Then build the decoding path: upsampling uses a 2 × 2 convolution kernel and is followed by two 3 × 3 convolutional layers, each followed by a rectified linear unit. Finally, a 1 × 1 convolutional layer maps the feature vectors to the corresponding classes.
Step 2.3: and segmenting the breast ultrasonic image of the high-end instrument by using the trained UNet, and cutting a rectangular boundary frame of a gland region according to a segmentation result to obtain a gland layer image of the high-end instrument.
Step three: enhancing the fuzzy gland layer.
Step 3.1: a data set is constructed. The data of the step is divided into two types: one is the portable instrument glandular layer image obtained in the step one; the other is the high-end instrument glandular layer image obtained in step two. The two types of glandular layer images are adjusted to be uniform in size. Because the gland layer definition of the high-end instrument is higher than that of the portable instrument, the gland layer image of the portable instrument is enhanced by using the gland layer image of the high-end instrument.
Step 3.2: a cycle generation countermeasure network (CycleGan) was constructed for augmentation. Firstly, constructing a generator G to realize the migration from distribution X to distribution Y; secondly, constructing a generator F to realize the migration from the distribution Y to the distribution X; then, a construction discriminator Dx judges whether the data is true X or the data which is generated by the generator F according to Y and has the same distribution with X; then, a construction discriminator Dy judges whether the data is true Y or the data which is generated by the generator G according to the X and is distributed with the Y; then, constructing a loss function of the network, wherein the loss function comprises a countermeasure loss function, a cycle consistency loss function and an Identity mapping loss; and finally, training the circularly generated countermeasure network, updating parameters and storing.
Step 3.3: and (5) enhancing the blurred portable instrument glandular layer by using the trained generator to obtain a final enhancement result.
Advantageous effects
1. The method trains UNet to segment the gland region in breast ultrasound video, reducing the false-positive rate caused by interference from the fat and muscle layers.
2. The method trains a CycleGAN model so that a blurred portable-instrument gland-layer image, once input, is directly converted into an enhanced image, improving image quality and the accuracy of the detection results during target detection.
Drawings
FIG. 1 is a schematic diagram of the UNet structure of the present invention;
FIG. 2 is a schematic diagram of a loop generation countermeasure network structure according to the present invention;
FIG. 3 is a schematic flow chart of the deep learning-based breast ultrasound enhancement method according to an embodiment of the invention;
FIG. 4 is a portable breast ultrasound image;
FIG. 5 is a portable instrument glandular layer image;
FIG. 6 is a breast ultrasound image of a high-end instrument;
FIG. 7 is an image of the glandular layer of a high-end instrument;
FIG. 8 is an enhanced portable instrument glandular layer image;
FIG. 9 is an enhanced portable instrument glandular layer image with detected lesion areas.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and examples, together with the technical problems solved and the advantages obtained. It should be noted that the described embodiments are intended only to aid understanding of the invention and are not limiting in any way.
A portable instrument makes it convenient for people to acquire their own breast ultrasound images by hand, but because handheld scanning is unstable, the quality of portable-instrument breast ultrasound video is often inferior to that of high-end-instrument breast ultrasound video, which can lead to a high miss rate.
In this embodiment, the practical value of the method in reducing the false-positive rate and the miss rate of target detection on lesion regions in portable-instrument breast ultrasound video is illustrated by automatically detecting tumors in 142 portable-instrument breast ultrasound videos provided by a hospital.
Fig. 3 is a flowchart of a breast ultrasound enhancement method in an embodiment of the present invention, which specifically includes the following steps:
Step 1: select the 142 portable-instrument breast ultrasound videos provided by the hospital and segment the gland regions in the videos.
Step 1.1: adopting 142 portable instrument mammary gland ultrasonic video screenshots provided by a hospital, generating a corresponding segmentation graph according to a label provided by a doctor by using the portable instrument mammary gland ultrasonic video screenshots and the corresponding segmentation graph as a training set of UNet.
Step 1.2: constructing a coding structure, using a convolution kernel size of 3 multiplied by 3, wherein the number of the features extracted by a first layer of convolution network is 16, the number of input image channels is 3, after two continuous convolutions, the coding structure passes through a rectification linear unit and a maximum pooling layer for down sampling, the step length of the maximum pooling layer is 2, the number of the feature channels is doubled in each down sampling step, and the network structure of the convolution layer, the convolution layer and the pooling layer is repeated for 5 times; then, a decoding structure is constructed, an up-sampling method is adopted, the size of a convolution kernel is 2 x 2, and then the two 3x3 convolution layers are passed, and each layer is followed by a rectification linear unit; finally, the feature vectors are mapped to the corresponding classes using the 1 × 1 convolutional layer.
Step 1.3: inputting the breast ultrasound image of the portable instrument to be segmented into the trained UNet, outputting the segmentation result, and cutting to obtain the glandular layer image of the portable instrument, wherein fig. 5 is the glandular layer image of the portable instrument.
Step 2: select the 142 high-end-instrument breast ultrasound videos provided by the hospital and segment the gland regions in the videos.
Step 2.1: adopting 142 high-end instrument mammary gland ultrasonic video screenshots provided by a hospital, generating a corresponding segmentation graph according to doctor labeling by using a portable instrument mammary gland ultrasonic image shown in figure 6, and taking the mammary gland ultrasonic video screenshots and the corresponding segmentation graph as a training set of UNet.
Step 2.2: constructing a coding structure, using a convolution kernel size of 3 multiplied by 3, wherein the number of the features extracted by a first layer of convolution network is 16, the number of input image channels is 3, after two continuous convolutions, the coding structure passes through a rectification linear unit and a maximum pooling layer for down sampling, the step length of the maximum pooling layer is 2, the number of the feature channels is doubled in each down sampling step, and the network structure of the convolution layer, the convolution layer and the pooling layer is repeated for 5 times; then, a decoding structure is constructed, an up-sampling method is adopted, the size of a convolution kernel is 2 x 2, and then the two 3x3 convolution layers are passed, and each layer is followed by a rectification linear unit; finally, the feature vectors are mapped to the corresponding classes using the 1 × 1 convolutional layer.
Step 2.3: inputting the breast ultrasound image of the high-end instrument to be segmented into the trained UNet, outputting the segmentation result, and cutting to obtain the glandular layer image of the high-end instrument, wherein fig. 7 is the glandular layer image of the high-end instrument.
And step 3: enhancing the blurred breast ultrasound image.
Step 3.1: and (3) generating a training set of the countermeasure network by taking the 6906 portable instrument glandular layer images obtained in the step 1 and the 6906 high-end instrument glandular layer images obtained in the step 2 as a cycle.
Step 3.2: constructing a cyclic generation countermeasure network (CycleGan) for augmentation by first constructing a generator G to effect a migration from a distribution X, which is a high-end instrument breast ultrasound image, to a distribution Y, which is a portable instrument breast ultrasound image; secondly, constructing a generator F to realize the migration from the distribution Y to the distribution X; then, a construction discriminator Dx judges whether the data is true X or the data which is generated by the generator F according to Y and has the same distribution with X; then, a construction discriminator Dy judges whether the data is true Y or the data which is generated by the generator G according to the X and is distributed with the Y; then, constructing a loss function of the network, wherein the loss function comprises a countermeasure loss function, a cycle consistency loss function and an Identity mapping loss; and finally, training the circularly generated countermeasure network, updating parameters and storing.
Step 3.3: inputting the portable instrument gland layer image to be enhanced into a trained cycle generation countermeasure network (CycleGan), and outputting an enhancement result close to the quality of the high-end instrument gland layer, wherein the enhancement result excludes the interference of the fat layer and the muscle layer in the original breast ultrasound image, and the quality of the portable instrument gland layer is improved to the quality of the high-end instrument gland layer, and fig. 8 is the enhanced portable instrument gland layer image. It should be noted that the lesion automatic detection accuracy of the breast ultrasound image of the high-end instrument is generally better than that of the breast ultrasound image of the portable instrument.
The test results show that the method can effectively detect lesion regions in breast ultrasound video; FIG. 9 shows an enhanced portable-instrument gland-layer image in which a lesion region has been detected. On the 142 portable-instrument breast ultrasound videos provided by physicians, the conventional lesion detection method yields a per-video miss rate of 25% and a false-positive rate of 7.02%, whereas the proposed method yields a miss rate of 14.2% and a false-positive rate of 1.75%. The method thus overcomes the high false-positive rate and high miss rate of the conventional lesion detection method.
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (4)

1. A deep learning-based breast ultrasound enhancement method, characterized in that the method works by eliminating the influence of the fat layer and the muscle layer in breast ultrasound video and improving the quality of portable-instrument breast ultrasound video, and specifically comprises the following steps:
step one: segmenting the gland region in the portable-instrument breast ultrasound video;
step two: segmenting the gland region in the high-end-instrument breast ultrasound video;
step three: enhancing the blurred gland layer.
2. The deep learning-based breast ultrasound enhancement method of claim 1, characterized in that step one is implemented as follows:
step 1.1: constructing a data set; first, the breast ultrasound video collected from the portable instrument is split frame by frame to obtain portable-instrument breast ultrasound images; second, according to the annotation information for the portable-instrument breast ultrasound images provided by a physician, a segmentation map of each portable-instrument breast ultrasound image is obtained with MATLAB, on which the pixel value of the background region is 0, of the fat region 1, of the gland region 2, and of the muscle region 3;
step 1.2: constructing the UNet used for segmentation; first, the encoding path is built: the convolution kernel size is 3 × 3, the first convolutional stage extracts 16 features, and the input image has 3 channels; after every two consecutive convolutions, the features pass through a rectified linear unit and a max-pooling layer with stride 2 for downsampling, the number of feature channels doubles at each downsampling step, and this convolution-convolution-pooling structure is repeated 5 times; then the decoding path is built: upsampling uses a 2 × 2 convolution kernel and is followed by two 3 × 3 convolutional layers, each followed by a rectified linear unit; finally, a 1 × 1 convolutional layer maps the feature vectors to the corresponding classes;
step 1.3: segmenting the portable-instrument breast ultrasound images with the trained UNet and, according to the segmentation result, cropping the rectangular bounding box of the gland region to obtain portable-instrument gland-layer images.
3. The deep learning-based breast ultrasound enhancement method of claim 1, characterized in that step two is implemented as follows:
step 2.1: constructing a data set; first, the breast ultrasound video collected from the high-end instrument is split frame by frame to obtain high-end-instrument breast ultrasound images; second, according to the annotation information for the high-end-instrument breast ultrasound images provided by a physician, a segmentation map of each high-end-instrument breast ultrasound image is obtained with MATLAB, on which the pixel value of the background region is 0, of the fat region 1, of the gland region 2, and of the muscle region 3;
step 2.2: constructing the UNet used for segmentation; first, the encoding path is built: the convolution kernel size is 3 × 3, the first convolutional stage extracts 16 features, and the input image has 3 channels; after every two consecutive convolutions, the features pass through a rectified linear unit and a max-pooling layer with stride 2 for downsampling, the number of feature channels doubles at each downsampling step, and this convolution-convolution-pooling structure is repeated 5 times; then the decoding path is built: upsampling uses a 2 × 2 convolution kernel and is followed by two 3 × 3 convolutional layers, each followed by a rectified linear unit; finally, a 1 × 1 convolutional layer maps the feature vectors to the corresponding classes;
step 2.3: segmenting the high-end-instrument breast ultrasound images with the trained UNet and, according to the segmentation result, cropping the rectangular bounding box of the gland region to obtain high-end-instrument gland-layer images.
4. The deep learning-based breast ultrasound enhancement method of claim 1, characterized in that step three is implemented as follows:
step 3.1: constructing a data set; the data for this step fall into two groups: the portable-instrument gland-layer images obtained in step one and the high-end-instrument gland-layer images obtained in step two; both groups of gland-layer images are resized to a uniform size; because the gland-layer clarity of the high-end instrument is higher than that of the portable instrument, the high-end-instrument gland-layer images are used to enhance the portable-instrument gland-layer images;
step 3.2: constructing a cycle-consistent generative adversarial network (CycleGAN) for enhancement; first, a generator G is built to map distribution X to distribution Y; second, a generator F is built to map distribution Y to distribution X; then a discriminator Dx is built to judge whether a sample is real X or data generated by generator F from Y to match the distribution of X; then a discriminator Dy is built to judge whether a sample is real Y or data generated by generator G from X to match the distribution of Y; then the loss function of the network is constructed, comprising an adversarial loss, a cycle-consistency loss, and an identity-mapping loss; finally, the CycleGAN is trained and its parameters are updated and saved;
step 3.3: enhancing the blurred portable-instrument gland layer with the trained generator to obtain the final enhancement result.
CN202110967233.9A 2021-08-23 2021-08-23 Deep learning-based breast ultrasound enhancement method Pending CN114140332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110967233.9A CN114140332A (en) 2021-08-23 2021-08-23 Deep learning-based breast ultrasound enhancement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110967233.9A CN114140332A (en) 2021-08-23 2021-08-23 Deep learning-based breast ultrasound enhancement method

Publications (1)

Publication Number Publication Date
CN114140332A true CN114140332A (en) 2022-03-04

Family

ID=80393613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110967233.9A Pending CN114140332A (en) 2021-08-23 2021-08-23 Deep learning-based breast ultrasound enhancement method

Country Status (1)

Country Link
CN (1) CN114140332A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination