CN115147303A - Two-dimensional ultrasonic medical image restoration method based on mask guidance - Google Patents
- Publication number: CN115147303A
- Application number: CN202210767690.8A
- Authority: CN (China)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/00—Image enhancement or restoration; G06T5/77—Retouching; Inpainting; Scratch removal
- G06N3/02—Neural networks; G06N3/08—Learning methods; G06N3/084—Backpropagation, e.g. using gradient descent
- G06T2207/10132—Ultrasound image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30096—Tumor; Lesion
Abstract
The invention provides a mask-guided two-dimensional ultrasound medical image restoration method, illustrated with two-dimensional ovarian ultrasound images. The method comprises the following steps: collecting two-dimensional ovarian ultrasound images; generating binary masks through labeling and establishing a data set; dividing the data set into a training set and a test set; processing the ultrasound images in the training set into damaged images by generating random binary masks; loading the original images and binary masks of the training set into an image restoration model for training, computing the model loss, back-propagating it through the network, adjusting the model parameters, and repeating the iterative optimization training to obtain the optimal image restoration model; and repairing damaged ultrasound images with the optimal model. Through these steps, restoration of two-dimensional ultrasound medical images is realized, and the accuracy of lesion identification and segmentation in two-dimensional ovarian ultrasound images is improved.
Description
Technical Field
The invention relates to the technical field of image restoration, in particular to a two-dimensional ultrasonic medical image restoration method based on mask guidance.
Background
Image restoration is an important image processing technique whose basic principle is to use information from the intact parts of an image to repair missing or unwanted parts. It is widely used in image preprocessing. Conventional restoration methods rely mainly on the texture features of the image. With the rapid development of deep learning, restoration methods based on generative adversarial networks have been widely applied to face and natural images, but rarely to two-dimensional ultrasound medical images. Taking two-dimensional ovarian tumor ultrasound image restoration as an example, the invention provides a mask-guided two-dimensional ultrasound medical image restoration method.
The ovary belongs to the female reproductive system and is a frequent site of pathological changes such as cysts and tumors. Ovarian tumors have become one of the major diseases threatening women's health. Early screening, early diagnosis, and early treatment of ovarian lesions are therefore of clear importance. As an important basis for screening and diagnosing ovarian lesions, ultrasound imaging is convenient, efficient, economical, and safe. In recent years, artificial intelligence has produced abundant results in medical imaging. However, constrained by the quantity and quality of ovarian ultrasound databases, there is still significant room for improvement in ovarian tumor identification and segmentation. During clinical diagnosis, physicians introduce a large number of noise marks into the ultrasound images, such as a hand symbol marking the tumor position, cross symbols marking the tumor size, numeric symbols, and dashed-line symbols. These marks are mainly distributed inside and along the boundary of the ovarian tumor and directly degrade the accuracy of ovarian tumor identification and segmentation. Existing restoration methods based on generative adversarial networks perform poorly at removing such artificially introduced noise. A mature and reliable ovarian tumor ultrasound image restoration method is therefore of great significance and practical value, and can play an important role in improving the precision of ovarian tumor identification and segmentation.
Disclosure of Invention
Object of the Invention
The invention aims to provide a mask-guided two-dimensional ultrasound medical image restoration method, solving the technical problem of removing artificially introduced noise from current two-dimensional ultrasound medical images and filling a gap in the field of two-dimensional ovarian ultrasound image restoration.
Technical scheme
The invention relates to a two-dimensional ultrasonic medical image restoration method, which comprises the following steps:
step one, collecting a two-dimensional ovarian ultrasonic image;
step two, generating one-to-one corresponding binary mask images through labeling (each mask has the same size as its image; 1 denotes a damaged region and 0 a sound region), and establishing a two-dimensional ovarian ultrasound image restoration data set;
step three, dividing the ovarian ultrasound images and their corresponding binary mask images in the data set into a training set and a test set;
step four, processing the ultrasound images in the training set into damaged images by generating random binary mask images;
step five, loading the original images of the training set, their binary mask images, the randomly generated binary mask images, and the damaged images together into an image restoration model, and restoring to obtain repaired images;
step six, calculating the loss between the repaired image and the original image, reversely transmitting the loss to an image repairing model network, adjusting model parameters, and repeatedly carrying out iterative optimization training;
step seven, loading the original images of the test set, their binary mask images, the randomly generated binary masks, and the damaged images together into the image restoration model to verify the model and obtain the optimal image restoration model;
step eight, inputting a damaged ultrasound image and its corresponding binary mask image into the optimal image restoration model for repair, obtaining a restored two-dimensional ovarian ultrasound image.
Advantages of the invention
1. The acquired original images are given preliminary labeling and refined processing to generate one-to-one corresponding binary mask images, establishing the first two-dimensional ovarian ultrasound image restoration data set.
2. The image restoration model requires no real background image: it can repair a damaged image and improve ultrasound image quality even when no real background image is available.
3. The model can repair images damaged at the tumor boundary with high quality and improve the accuracy of tumor identification and segmentation.
Drawings
FIG. 1 is an overall flow chart of the present invention.
FIG. 2 is a flow chart of the present invention for creating a two-dimensional ovarian ultrasound image repair data set.
FIG. 3 is a flow chart of image restoration according to the present invention.
FIG. 4 is a schematic diagram of an image restoration model of the present invention.
Detailed Description
The present invention relates to a mask-guided two-dimensional ultrasound medical image inpainting method, and exemplary embodiments of the present invention will now be described in detail, which are illustrated in the accompanying drawings.
The overall process of the invention is shown in FIG. 1, and the steps are as follows:
step one, collecting two-dimensional ovarian ultrasound image data.
During patient examination, a doctor operates the ultrasound equipment to collect two-dimensional ovarian tumor ultrasound image data; after clinical diagnosis, manual auxiliary identification symbols are added to the images. The doctor then roughly annotates the lesion region on the ultrasound image using the Labelme open-source labeling software, putting the data into a usable state. When the data are extracted and stored, all patient private information is erased and each record is named by an ID number.
The data naming rule is as follows: two-dimensional ultrasound images are named with the prefix "US" plus four digits and the suffix "_gt", such as "US0001_gt".
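As a minimal illustration of this naming rule, a hypothetical helper (the function names are assumptions, not part of the patent) could generate the paired image and mask names:

```python
def image_name(patient_id: int) -> str:
    """Name for a two-dimensional ultrasound image: 'US' + 4 digits + '_gt'."""
    return f"US{patient_id:04d}_gt"

def mask_name(patient_id: int) -> str:
    """Name for the corresponding binary mask: 'US' + 4 digits + '_mask'."""
    return f"US{patient_id:04d}_mask"
```

Zero-padding keeps the four-digit width, so record 1 becomes "US0001_gt".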
Step two, generating one-to-one corresponding binary mask images through labeling and establishing the two-dimensional ovarian ultrasound image data set, as shown in FIG. 2.
Because the doctor only roughly annotates the lesion region, supplementary labeling points must be added manually and the rough edge contour smoothed. Clinical diagnosis introduces into the ultrasound image many artificial marks for assisting image reading, such as a hand symbol marking the tumor position, cross and dashed-line symbols marking the tumor size, and numeric symbols showing device information. These symbols seriously degrade the accuracy of neural networks for ovarian tumor identification and segmentation. For these symbols, the 1,739 two-dimensional ultrasound images were labeled in a refined manner with the Labelme open-source software: hand symbols are labeled "finger", dashed-line symbols "dotted line", cross symbols "cross", English characters "character", and device-information symbols "background". The labeled images are then processed with IPython to obtain binary mask images of the labeled regions, in which labeled regions take the value 1 and unlabeled regions take the value 0. The naming rule for a binary mask image uses the prefix "US" plus four digits and the suffix "_mask": the two-dimensional ultrasound image named "US0001_gt" corresponds to the binary mask image named "US0001_mask". The data set consists of the 1,739 labeled two-dimensional ultrasound images and their one-to-one corresponding binary mask images.
Step three, dividing the ovarian ultrasound images and their corresponding binary mask images in the data set into a training set and a test set, as shown in FIG. 3.
The 1,739 two-dimensional ultrasound images and their corresponding binary masks are randomly divided into a training set of 1,565 images with corresponding binary mask images and a test set of 174 images with corresponding binary mask images, i.e. training set : test set = 9 : 1.
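The 9:1 random split described above can be sketched as follows (a minimal NumPy sketch; the function name and fixed seed are illustrative assumptions):

```python
import numpy as np

def split_dataset(names, test_ratio=0.1, seed=0):
    """Randomly split paired image/mask names into training and test sets."""
    rng = np.random.default_rng(seed)
    names = list(names)
    idx = rng.permutation(len(names))          # random order over all samples
    n_test = round(len(names) * test_ratio)    # 1739 * 0.1 -> 174 test images
    test = [names[i] for i in idx[:n_test]]
    train = [names[i] for i in idx[n_test:]]   # remaining 1565 training images
    return train, test
```

Because each image and its binary mask share an ID, splitting by name keeps every image/mask pair in the same subset.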
Step four, processing the ultrasound images in the training set into damaged images by generating random binary mask images.
In the data-loading function, a binary mask image of random size, random shape, and random position is generated to match the size of each image. Naming rule: for the two-dimensional ultrasound image named "US0001_gt", the program generates a binary mask image named "US0001_mask". The damaged image is then generated as Img_In1 = Img_gt × (1 − Mask_g).
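The mask generation and the corruption formula Img_In1 = Img_gt × (1 − Mask_g) can be sketched as below; for simplicity the random mask here is a rectangle, whereas the patent allows arbitrary random shapes:

```python
import numpy as np

def random_rect_mask(h, w, rng):
    """Binary mask (1 = damaged, 0 = sound) with a rectangle of random
    size and position; a stand-in for the patent's random-shape masks."""
    mask = np.zeros((h, w), dtype=np.float32)
    mh = rng.integers(h // 8, h // 2)          # random height
    mw = rng.integers(w // 8, w // 2)          # random width
    top = rng.integers(0, h - mh)              # random position
    left = rng.integers(0, w - mw)
    mask[top:top + mh, left:left + mw] = 1.0
    return mask

def corrupt(img_gt, mask_g):
    """Img_In1 = Img_gt * (1 - Mask_g): zero out the masked region."""
    return img_gt * (1.0 - mask_g[..., None])  # broadcast over RGB channels
```

Multiplying by (1 − Mask_g) keeps sound pixels untouched and sets damaged pixels to zero.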
Step five, loading the original images of the training set, their binary mask images, the randomly generated binary masks, and the damaged images together into the image restoration model; the repaired images are obtained after restoration. The image restoration model is shown in FIG. 4.
The image restoration model is a two-stage generative model with an encoder-decoder structure. First stage: during encoding, the encoder captures high-level features of the intact parts of the damaged image; during decoding, the decoder uses these features to repair the damaged regions, yielding the first-stage result. The original image, its corresponding binary mask Mask, and the generated binary mask Mask_g are resized to 256 × 256, converted to tensors, concatenated into a 256 × 256 × 5 tensor, and input to encoder 1. Encoder 1 consists of six soft-gated convolution layers with 32, 64, 64, 128, 128, and 128 output channels in sequence. The invention then uses four dilated soft-gated convolution layers with dilation rate 2 and ELU activation. This design can select features according to the background and the mask, and in certain channels also attends to semantic segmentation. Even in deep layers, gated convolution focuses on learning the masked regions and draws information in separate channels to better generate the restoration result. After the dilated convolution layers, the features enter decoder 1, which consists of seven soft-gated convolution layers with 128, 128, 64, 64, 32, 16, and 3 output channels in sequence and a Tanh activation, producing the first-stage repaired RGB three-channel image Img_r1. Second stage: the first-stage result is combined with the original image to obtain the second-stage input: Img_In2 = Img_gt × (1 − Mask_g) + Img_r1 × Mask_g.
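The soft-gated convolution used throughout both stages combines a feature branch with a sigmoid gate. This minimal NumPy sketch shows only the element-wise gating rule; the two branch inputs stand in for learned convolution outputs, which a real layer would compute:

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU activation used for the feature branch."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_gated(feature, gating):
    """Soft-gated output: activated features modulated by a gate in [0, 1].
    A gate near 0 suppresses a location (e.g. deep inside the damaged
    region); a gate near 1 passes it through."""
    return elu(feature) * sigmoid(gating)
```

The gate lets the network weight each spatial location and channel according to the mask and background, which is the property the description above attributes to gated convolution.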
The second-stage encoder consists of two parallel encoders. Encoder 2 is identical to encoder 1; encoder 3 introduces a contextual attention mechanism, whose attention layer learns to borrow or copy feature information from known background patches to generate the missing patches. Matching scores between foreground and background patches are computed with convolution (the background patches act as convolution filters), softmax is applied to obtain an attention score for each pixel, and the foreground patches are reconstructed by deconvolving the attention scores with the background patches. This layer consists of three standard convolution layers with softmax scale = 10. Finally, the features extracted by encoder 2 and encoder 3 are fused and input to decoder 2, which further repairs the damaged parts using the captured high-level features to obtain an image Img_r2 with better restoration quality. In the model, the generator uses soft-gated and dilated soft-gated convolution kernels, while the discriminator uses standard convolution kernels. The soft-gated kernels are 5 × 5 and 3 × 3 with strides 1 and 2, respectively; the dilated soft-gated kernel is 3 × 3 with stride 1 and dilation rate 2; the standard kernel is 4 × 4 with stride 2.
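The contextual-attention matching can be illustrated on flattened patches: scores by normalized cross-correlation, a softmax with scale 10 as stated above, and an attention-weighted reconstruction. This NumPy sketch replaces the convolution/deconvolution implementation with an explicit loop for clarity:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_reconstruct(fg_patch, bg_patches, scale=10.0):
    """Score a foreground patch against each background patch by normalized
    cross-correlation, apply a scaled softmax (scale = 10 per the patent),
    and rebuild the patch as the attention-weighted sum of backgrounds."""
    def unit(v):
        return v / (np.linalg.norm(v) + 1e-8)
    scores = np.array([unit(fg_patch.ravel()) @ unit(b.ravel())
                       for b in bg_patches])
    attn = softmax(scale * scores)
    recon = sum(a * b for a, b in zip(attn, bg_patches))
    return recon, attn
```

Background patches that resemble the (partially repaired) foreground receive nearly all of the attention mass, so the missing patch is filled from the most similar known content.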
Step six, calculating the loss between the repaired image and the original image, back-propagating it to the image restoration model network, adjusting the model parameters, and repeating the iterative optimization training, as shown in FIG. 3.
The repaired image and the original image are input to the discriminator of the image restoration model to evaluate the repair quality. The total loss combines a pixel-level loss, a style loss, a perceptual loss, a total variation loss, and an adversarial loss.
First, in the generation stage, a new pixel-level loss function is used, divided into three parts: the pixel loss of the Mask region, the pixel loss of the Mask_g region excluding its overlap with Mask, and the pixel loss of the region outside both Mask and Mask_g.
In addition, an overall style loss, a perceptual loss, and a total variation loss are used. When computing these losses, the same strategy as for the pixel-level loss is adopted: the weight of the originally damaged region is reduced, lowering the degree to which the generative model learns from that region.
The discriminator is a Soft Mask-Guided PatchGAN discriminator, an improvement on the fully convolutional spectral-normalization Markovian discriminator. Guided by the binary mask image Mask, it strengthens the discriminator's attention to the repair quality of the (Mask_g − Mask_g ∩ Mask) region while decreasing its attention to the repair quality of the Mask region. When computing Loss_GAN, the same strategy is adopted: the weight of the (Mask_g − Mask_g ∩ Mask) region is increased and the weight of the Mask region is decreased. The generator is trained with an Adam optimizer at a learning rate of 0.0001; the loss function is back-propagated, the gradient of the loss with respect to each parameter is computed by the chain rule, the parameters are updated according to the gradients, and the training model is iteratively optimized.
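The region-weighting strategy above (down-weight the original Mask region, emphasize the Mask_g region outside the overlap) can be sketched as a weighted L1 loss. The specific weight values are illustrative assumptions, since the patent does not publish them:

```python
import numpy as np

def weighted_l1_loss(pred, target, mask, mask_g,
                     w_mask=0.5, w_new=2.0, w_valid=1.0):
    """Pixel loss split into three regions: the originally damaged Mask
    region (down-weighted), the newly masked Mask_g region excluding the
    overlap with Mask (emphasized), and the remaining valid region."""
    overlap = mask * mask_g
    new_region = mask_g - overlap                    # Mask_g minus overlap
    valid = 1.0 - np.clip(mask + mask_g, 0.0, 1.0)   # outside both masks
    weights = w_mask * mask + w_new * new_region + w_valid * valid
    return float(np.mean(weights * np.abs(pred - target)))
```

Reducing w_mask keeps the model from learning the noisy content of the originally damaged region, while raising w_new focuses training on the synthetically removed region whose ground truth is known.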
Step seven, loading the original images of the test set, their corresponding binary mask images, the randomly generated binary mask images, and the damaged images into the image restoration model, verifying the restoration capability of the whole model, and obtaining the optimal image restoration model, as shown in FIG. 3.
The mini-batch size, i.e. the number of training samples read from the data set at a time, is set to 64. A total of 500,000 iterations are performed, with one test every 50,000 iterations. The specific algorithm is shown in Table 1:
TABLE 1. Mask-guided image inpainting algorithm
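The training schedule above (500,000 iterations with a test every 50,000) can be expressed as a small helper; the function name is an assumption:

```python
def test_schedule(total_iters=500_000, test_every=50_000):
    """Iteration numbers at which the model is evaluated: every 50,000
    of the 500,000 training iterations stated in the patent."""
    return [i for i in range(1, total_iters + 1) if i % test_every == 0]
```

With the stated settings this yields ten evaluation points, the last coinciding with the final iteration.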
Step eight, inputting the damaged ultrasound image and its corresponding binary mask image into the trained optimal image restoration model for repair, obtaining the restored two-dimensional ovarian ultrasound image, as shown in FIG. 3.
The damaged ultrasound image Img_gt and its corresponding binary mask image Mask are input into the trained optimal image restoration model for repair, yielding the restored two-dimensional ovarian ultrasound image.
The foregoing shows and describes the general principles and broad features of the present invention and advantages thereof. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are intended as illustrations of the principles of the invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, and the invention is to be accorded the full scope of the claims appended hereto. The scope of the invention is defined by the appended claims and equivalents thereof.
Claims (9)
1. A two-dimensional ultrasonic medical image restoration method based on mask guidance is characterized in that:
step one, collecting two-dimensional ovarian ultrasound images;
step two, generating one-to-one corresponding binary mask images through labeling, and establishing a two-dimensional ovarian ultrasound image restoration data set;
step three, dividing the two-dimensional ovarian ultrasound images and their corresponding binary mask images in the restoration data set into a training set and a test set;
step four, processing the ultrasound images in the training set into damaged images by generating random binary mask images;
step five, loading the original images of the training set, their binary mask images, the randomly generated binary mask images, and the damaged images into an image restoration model, and restoring to obtain repaired images;
step six, calculating the loss between the repaired images and the original images, back-propagating it to the image restoration model network, adjusting the model parameters, and repeating the iterative optimization training;
step seven, loading the original images of the test set, their corresponding binary mask images, the randomly generated binary masks, and the damaged images into the image restoration model, and verifying the restoration capability of the whole model to obtain the optimal image restoration model;
step eight, inputting a damaged ultrasound image and its corresponding binary mask image into the trained optimal image restoration model for repair, obtaining a restored two-dimensional ovarian ultrasound image.
Through the above steps, a two-dimensional ovarian ultrasound image restoration database is established, a two-stage image restoration model is built and trained, and the repair of noise regions affecting identification and segmentation results in ovarian ultrasound images is completed.
2. The mask guidance-based two-dimensional ultrasound medical image inpainting method according to claim 1, characterized in that:
the "set up of a two-dimensional ovarian ultrasound image database" described in "step one" is performed as follows: in the process of patient examination, the hospital ultrasonic equipment is used for collecting ultrasonic two-dimensional image data of a patient, and the data is in an acquirable state after being diagnosed by a doctor.
3. The mask guidance-based two-dimensional ultrasound medical image inpainting method according to claim 1, characterized in that:
in the step two, acquiring two-dimensional ovarian ultrasound images, generating one-to-one corresponding binary mask images through labeling, and establishing a two-dimensional ovarian ultrasound image data set, wherein the method comprises the following steps: after the doctor processes the ovarian ultrasound image through clinical diagnosis, a large number of artificial marks for assisting the image recognition, such as a hand mark for marking the tumor position, a cross mark and a dotted line mark for marking the tumor size, and a digital mark for displaying device information, are introduced into the ultrasound image.
4. The mask guidance-based two-dimensional ultrasound medical image inpainting method according to claim 1, characterized in that:
the "dividing the two-dimensional ovarian ultrasound image and the corresponding binary mask image in the data set into a training set and a test set" described in "step three" is performed as follows: randomly dividing the two-dimensional ultrasonic image and the binary mask corresponding to the two-dimensional ultrasonic image into a training set and a test set, wherein the training set comprises the following steps: test set = n:1,n takes a natural number between 5 and 10.
5. The mask guidance-based two-dimensional ultrasound medical image inpainting method according to claim 1, wherein:
in step four, the ultrasound image in the training set is processed into a broken image by generating a random binary mask image, as follows: in the data loading function module, a binary mask with random size, random shape and random position is generated according to the size of each imageImage Mask g Generating a damaged image Img In1 =Img gt ×(1-Mask g )。
6. The mask guidance-based two-dimensional ultrasound medical image inpainting method according to claim 1, characterized in that:
in the step five, the original image in the training set, the binary mask image in the training set, the randomly generated binary mask image, and the damaged image are loaded into the image restoration model together, and the restored image is obtained after image restoration, which includes the following steps: a two-stage generative model is used, both stages using the structure of the encoder and decoder. In the first stage, the encoder 1 captures the high-level features of the intact part in the damaged image in the encoding process, and the decoder 1 repairs the damaged area in the image by using the high-level features captured by the encoder 1 in the decoding process to obtain the repairing result in the first stage. In the second stage, the restoration result of the first stage and the original image are calculated to obtain an input image of the second stage: img In2 =img gt ×(1-Mask g )+Img r1 ×Mask g . The encoder part of the second stage consists of two parallel encoders, encoder 2 and encoder 3, a context attention mechanism is introduced in encoder 3, and the context attention layer learns to borrow or copy feature information from the known background patch to generate the missing patch. A convolution algorithm is used to calculate the matching score of the foreground block to the background block (as a convolution filter). Then softmax is applied for comparison to get the attention score for each pixel. And finally, reconstructing a foreground block by using the background block through deconvolution of the attention score. This layer consists of three standard convolutional layers, of which softmax scale =10. Finally, the two encoders are subjected to feature fusion. The decoder 2 further repairs the damaged part by using the captured advanced features to obtain an image Img with better repairing effect r2 。
7. The mask guidance-based two-dimensional ultrasound medical image inpainting method according to claim 1, characterized in that:
in step six, the loss between the restored image and the original image is calculated and is reversely transmitted to an image restoration model network, the model parameters are adjusted, and the optimization training is repeated and iterated, which comprises the following steps: and inputting the repaired image and the original image into a discrimination model of an image repairing model to evaluate the repairing effect. Firstly, in the generation stage, a pixel-level loss function is used, and when pixels and loss are calculated, the pixels and the loss are divided into three parts:pixel loss indicating Mask portion,Represents Mask g (removing the region overlapping with the Mask) portion of the pixel loss,Indicates removal of Mask and Mask g The pixels of the area portion are lost. In addition to this, use is made of the overall style lossLoss of perceptionTotal loss of variationWhen the losses are calculated, the same strategy as that for calculating the pixel-level loss is adopted, namely the weight of a Mask region is reduced, and the learning degree of a generated model to the region is reduced.
The total loss is defined as:
the Discriminator adopts a Soft Mask-Guided PatchGAN Discriminator, which is improved on the basis of a full convolution spectrum normalization Markov Discriminator, and is Guided by a Mask binary Mask image to strengthen the Discriminator pair (Mask) g -Mask g Mask) region repair image quality concerns while reducing the quality concerns of the Mask region repair image. In computing Loss GAN When the same strategy is adopted, increase (Mask) g -Mask g Mask) region weight is decreased. And (3) adopting an Adam optimizer to train a generator, reversely propagating the calculated loss function, calculating the gradient of the loss function to each parameter through a derivative chain rule, updating the parameter according to the gradient, and repeatedly iterating and optimizing the training model.
8. The mask guidance-based two-dimensional ultrasound medical image inpainting method according to claim 1, characterized in that:
loading the original images in the test set, the binary mask images in the training set, the randomly generated binary masks, and the damaged images into the image restoration model, verifying the image restoration capability of the whole model, and obtaining the optimal image restoration model in step seven comprises the following steps: the mini-batch size, i.e., the number of training samples read from the dataset at a time, is set to 64, and the data are loaded in multiple batches. A total of 500,000 iterations are performed, with one test every 50,000 iterations.
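The training schedule of claim 8 can be sketched as a plain loop. `load_batch`, `train_step`, and `run_test` are hypothetical stand-ins for the patent's data loader, optimization step, and test-set evaluation; only the batch size, iteration count, and test interval come from the claim.

```python
# Claim-8 schedule: mini-batches of 64, 500,000 iterations in total,
# with an evaluation pass on the test set every 50,000 iterations.
MINI_BATCH = 64
TOTAL_ITERS = 500_000
TEST_EVERY = 50_000

def train(load_batch, train_step, run_test):
    test_points = []
    for it in range(1, TOTAL_ITERS + 1):
        batch = load_batch(MINI_BATCH)   # 64 samples per load, loaded repeatedly
        train_step(batch)
        if it % TEST_EVERY == 0:         # 10 evaluations over the whole run
            test_points.append(it)
            run_test()
    return test_points
```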
9. The mask guidance-based two-dimensional ultrasound medical image inpainting method according to claim 1, characterized in that:
inputting the damaged ultrasound image and its corresponding binary mask image into the trained optimal image restoration model for restoration to obtain a restored two-dimensional ovarian ultrasound image in step eight comprises the following steps: the damaged ultrasound image Img gt and its corresponding binary mask image Mask are input into the trained optimal image restoration model for restoration, and the restored two-dimensional ovarian ultrasound image is obtained.
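The inference step of claim 9 can be sketched as below. The function name `restore`, the compositing convention (mask value 1 marks damaged pixels to be replaced), and the callable `model` interface are illustrative assumptions; the claim only specifies feeding Img gt and its Mask to the trained model and obtaining the restored image.

```python
import numpy as np

def restore(model, img_gt, mask):
    """Feed the damaged image Img_gt and its binary Mask to the trained
    model, then composite so that only masked (damaged) pixels are
    replaced by the model's prediction.

    model : any callable mapping (image, mask) -> full predicted image
    mask  : binary array, 1 where the image is damaged, 0 elsewhere
    """
    pred = model(img_gt, mask)
    # keep original pixels outside the mask, predicted pixels inside it
    return img_gt * (1 - mask) + pred * mask
```

A dummy model that predicts a constant 0.5 everywhere would, for example, leave unmasked pixels of the input untouched.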
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210767690.8A CN115147303A (en) | 2022-06-30 | 2022-06-30 | Two-dimensional ultrasonic medical image restoration method based on mask guidance |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115147303A true CN115147303A (en) | 2022-10-04 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115147303A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116486196A (en) * | 2023-03-17 | 2023-07-25 | 哈尔滨工业大学(深圳) | Focus segmentation model training method, focus segmentation method and apparatus |
CN116486196B (en) * | 2023-03-17 | 2024-01-23 | 哈尔滨工业大学(深圳) | Focus segmentation model training method, focus segmentation method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||