CN109859184B - Real-time detection and decision fusion method for continuously scanning breast ultrasound image - Google Patents
Real-time detection and decision fusion method for continuously scanning breast ultrasound image
- Publication number
- CN109859184B (application CN201910088044.7A)
- Authority
- CN
- China
- Legal status
- Active
Landscapes
- Ultrasonic Diagnosis Equipment (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a real-time detection and decision fusion method for continuously scanned breast ultrasound images, belonging to the technical field of image segmentation and fusion. The method specifically comprises the following steps: first, a fusion network for automatic segmentation and classification of lesion areas is built and trained to obtain a corresponding prediction model; the prediction model is then embedded in the cloud to detect the lesion area of each frame of the continuously scanned images in real time; next, accurate detection and classification results for the case corresponding to the continuously scanned images are obtained through a decision algorithm; finally, the detection results obtained by scanning the breast region are spatially synthesized by an image fusion and stitching method to generate an overall three-dimensional prediction image of the breast region, which is fed back to the doctor online. The method realizes visual output of the overall three-dimensional prediction image of the breast region, improves the accuracy and speed of lesion-area segmentation and classification, and is convenient to embed in hardware equipment or a cloud.
Description
Technical Field
The invention relates to a real-time detection and decision fusion method for continuously scanning mammary gland ultrasonic images, belonging to the technical field of image segmentation and fusion.
Background
In recent decades, breast cancer has become a major public health problem, and it is now widely recognized that breast cancer is a key disease for screening. Clinical imaging methods for diagnosing breast lesions mainly include X-ray mammography, ultrasound examination and MRI (magnetic resonance imaging). Among them, ultrasound examination causes no damage to the human body and has therefore become the main means of large-scale breast disease screening. Breast lesions are divided into benign and malignant: benign lesions include mastitis, mammary hyperplasia, breast fibroma, galactocele and other diseases; malignant lesions include breast cancer, ductal carcinoma and the like. Diagnosis based on ultrasound images is still mostly performed by professional gynecologists through visual observation, a process that is time-consuming and labor-intensive; an algorithm that automatically segments and classifies lesion areas therefore has great application prospects.
Deep learning is a machine learning method based on representation learning of data; the concept was proposed by Hinton et al in 2006. The unsupervised greedy layer-by-layer training algorithm based on the DBN (deep belief network) proposed by Hinton et al brought hope for solving optimization problems related to deep structures; the CNN (convolutional neural network) subsequently proposed by LeCun et al uses spatial relative relationships to reduce the number of parameters and improve deep network training performance. R-CNN (region-based convolutional neural network), built on CNN, is widely applied to object detection, segmentation and classification in images. With the rapid development of deep learning, the R-CNN series has grown to include different convolutional network structures such as R-CNN, SPP-Net (convolutional neural network based on spatial pyramid pooling), Fast R-CNN (convolutional neural network based on fast region feature extraction), Faster R-CNN (convolutional neural network based on accelerated region feature extraction), Mask R-CNN (convolutional neural network based on semantic segmentation of region features) and the like.
Moi Hoon Yap et al applied LeNet to classify benign and malignant breast lesions, but the algorithm has low accuracy in automatic detection of lesion regions, cannot segment the lesion region, depends on the feature extraction result of the ultrasound image, and requires certain medical background knowledge. In addition, SeungYeon Shin, Zhongkui et al applied Fast-RCNN to the automatic detection and classification of breast lesion regions; based on the Fast-RCNN network, image detection and classification tasks can be performed simultaneously without medical background knowledge, but the accuracy of benign and malignant classification still needs improvement and the lesion region cannot be segmented. Therefore, a method that accurately detects, segments and classifies (benign versus malignant) the lesion region in breast ultrasound images without requiring professional medical background knowledge would greatly improve the accuracy of computer-aided diagnosis and its clinical application.
Disclosure of Invention
Aiming at the technical defects that existing breast ultrasound scanners can neither detect lesion areas in real time nor assist diagnostic decisions, the invention provides a real-time detection and decision fusion method for continuously scanned breast ultrasound images: the lesion area in the breast ultrasound image is automatically detected, segmented and classified; accurate detection and classification results are obtained through a decision algorithm; and finally an overall three-dimensional prediction image of the breast region is generated by an image fusion and stitching method.
The real-time detection and decision fusion method for continuously scanning the breast ultrasound image comprises the following steps:
Step one: making a training data set for automatic detection of lesion areas, specifically: collecting a plurality of breast ultrasound images, and automatically detecting the labeled areas in the breast ultrasound images to generate corresponding labeling-area template images; based on the labeling-area template images, automatically matching and filling the labeled areas with a matching method to generate an automatically de-labeled breast ultrasound image data set; and finally, preprocessing the automatically de-labeled breast ultrasound image data set to generate a training set for automatic detection of lesion areas. Step one specifically comprises the following substeps:
step 1.1, collecting a plurality of breast ultrasonic images;
wherein, the collected multiple breast ultrasonic images comprise images with labels; in addition, the plurality of breast ultrasound images acquired include breast ultrasound images acquired from different instruments;
step 1.2, generating various labeling templates for the image with the label collected in the step 1.1; each labeling template is a template image corresponding to a certain label;
step 1.3, for each labeling template generated in step 1.2, generating an image sub-region of the same size as the labeling template centered in turn on each pixel of the breast ultrasound image, and calculating the correlation degree r between the image sub-region and the labeling template by the following formula:
r = Σ_m Σ_n (A_mn − Ā)·(B_mn − B̄) / sqrt( [ Σ_m Σ_n (A_mn − Ā)² ]·[ Σ_m Σ_n (B_mn − B̄)² ] ),
wherein A_mn is the image sub-region matrix and B_mn is the labeling template matrix; the image sub-region matrix consists of the pixel values in the image sub-region, and the labeling template matrix consists of the pixel values in the labeling template; m is the width of the image sub-region and n is its height; Ā is the mean value of A_mn and B̄ is the mean value of B_mn, calculated as:
Ā = (1/(m·n))·Σ_m Σ_n A_mn,   B̄ = (1/(m·n))·Σ_m Σ_n B_mn;
step 1.4, setting a corresponding threshold value for each labeling template, and judging that a label exists in an image subregion when the correlation degree r of the image subregion and the labeling template is greater than the corresponding threshold value;
step 1.5, traversing the breast ultrasound image by a sliding window method, and detecting whether a label exists in an image sub-region taking each pixel as a center in the image; if the label exists, recording the label type corresponding to the image sub-area and the corresponding correlation degree;
step 1.6, if a certain image subregion in the breast ultrasound image corresponds to a plurality of labels, comparing the correlation degrees of different labels, and setting the label type corresponding to the maximum correlation degree as the corresponding label template type in the image subregion; generating an annotated region template image with the size consistent with that of the breast ultrasound image through the steps 1.1 to 1.6;
in the labeling-area template image, the pixel values inside the labeling areas are zero, and the pixel values outside the labeling areas are the same as the pixel values at the corresponding positions of the breast ultrasound image;
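Steps 1.2 to 1.6 amount to sliding-window template matching: each labeling template is correlated against every template-sized sub-region of the image, and sub-regions whose correlation degree r exceeds that template's threshold are marked as labeled. A minimal sketch of this stage is given below using OpenCV, whose TM_CCOEFF_NORMED score plays the role of the correlation degree r; the function names, the per-template thresholds and the way overlapping detections are resolved are illustrative assumptions, not the patent's literal implementation.

```python
import cv2
import numpy as np

def build_label_region_mask(image, templates, thresholds):
    """Slide every labeling template over the grayscale ultrasound image and
    mark the template-sized sub-regions whose correlation degree r exceeds
    that template's threshold (steps 1.3-1.6).  When several templates fire
    at the same position, the one with the largest r wins."""
    best_r = np.zeros(image.shape[:2], dtype=np.float32)  # best r seen per position
    mask = np.zeros(image.shape[:2], dtype=np.uint8)      # 1 = inside a labeling area
    for name, tmpl in templates.items():
        h, w = tmpl.shape[:2]
        # r[y, x] is the correlation of the template with the sub-region
        # whose top-left corner is (x, y)
        r = cv2.matchTemplate(image, tmpl, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.nonzero(r > thresholds[name])
        for y, x in zip(ys, xs):
            if r[y, x] > best_r[y, x]:
                best_r[y, x] = r[y, x]
                mask[y:y + h, x:x + w] = 1
    return mask

def make_labeling_area_template_image(image, mask):
    """Labeling-area template image: zero inside labeling areas, original
    pixel values elsewhere (end of step 1.6)."""
    out = image.copy()
    out[mask == 1] = 0
    return out
```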
step 1.7, based on the labeling-area template image generated in step 1.6, automatically matching and filling the labeling areas in the labeling-area template image by a matching method to generate the automatically de-labeled breast ultrasound image data set; calculating the number of labeling areas in the labeling-area template image, denoting this number as k, and setting the initial labeling cycle count value i = 1;
step 1.8, calculating the total number of pixel points in the i-th labeling area of the labeling-area template image, denoted S_i; for the i-th labeling area, numbering each pixel point in sequence, proceeding clockwise and inwards from the outermost pixels of the labeling area; setting the initial numbering cycle count value j = 1;
step 1.9, let point p denote the j-th pixel point in the i-th labeling area; the new gray value assigned to point p is I(p), calculated by the following formula:
I(p) = [ Σ_{q∈B_ε(p)} ω(p,q)·( I(q) + ∇I(q)·(p − q) ) ] / [ Σ_{q∈B_ε(p)} ω(p,q) ],
wherein B_ε(p) is the neighborhood of pixel point p, ε is the radius of the neighborhood, point q is a pixel in the neighborhood of point p, p − q is the vector from point q to point p, ∇I(q) is the gradient value at point q, and ω(p,q) is the weight function between point q and point p, used to limit the contribution of each pixel in the neighborhood; it is calculated as:
ω(p,q)=dir(p,q)·dst(p,q)·lev(p,q),
wherein the geometric distance factor dst(p,q) = 1/‖p − q‖², the orientation factor dir(p,q) = ( (p − q)·N(p) ) / ‖p − q‖, and the level set distance factor lev(p,q) = 1/( 1 + |T(p) − T(q)| ); T(p) is the distance from point p to the edge of the labeling area, T(q) is the distance from point q to the edge of the labeling area, ‖p − q‖² is the square of the straight-line distance from point p to point q, |T(p) − T(q)| is the absolute value of T(p) − T(q), and N(p) is the normal vector direction of the labeling-area edge in the neighborhood of p;
step 1.10, judging whether the numbering cycle count value j has reached S_i; if yes, jumping to step 1.11; if not, setting j = j + 1 and jumping to step 1.9;
step 1.11, judging whether a labeling cycle count value i reaches k; if so, obtaining the breast ultrasound image data set after automatic label removal; if not, making i equal to i +1, and jumping to the step 1.8;
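The weighted filling defined in steps 1.8 and 1.9, with its geometric distance, orientation and level-set distance factors, corresponds to fast-marching image inpainting, which OpenCV exposes through the INPAINT_TELEA flag; under that assumption the whole i/j loop can be delegated to a single library call. A minimal sketch, where the neighborhood radius is an assumed value:

```python
import cv2

def remove_labels(template_image, label_mask, neighborhood_radius=3):
    """Fill every labeling area of the labeling-area template image from its
    neighborhood (steps 1.7-1.11), producing one image of the automatically
    de-labeled data set.

    template_image:      uint8 grayscale labeling-area template image
    label_mask:          uint8 mask, 1 inside labeling areas (from template matching)
    neighborhood_radius: radius of the neighborhood B_eps(p) used in step 1.9
    """
    # INPAINT_TELEA performs the same neighborhood-weighted interpolation
    # (distance, orientation and level-set factors) described in step 1.9
    return cv2.inpaint(template_image, label_mask, neighborhood_radius,
                       cv2.INPAINT_TELEA)
```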
step 1.12, preprocessing all images in the breast ultrasound image data set obtained in step 1.11 after automatic de-labeling; automatically intercepting and retaining breast ultrasonic images containing breast information areas aiming at the breast ultrasonic images acquired from different instruments, and storing the images into a breast ultrasonic image data set after automatic interception;
step 1.13, adjusting all images in the breast ultrasound image data set obtained by the step 1.12 after the automatic interception to the same size to obtain a breast ultrasound image data set after the size adjustment;
step 1.14, the breast ultrasound image data set obtained in step 1.13 after the size adjustment contains RGB red, green and blue three-channel color images, the RGB red, green and blue three-channel color images are converted into gray level images, and the converted gray level images form a training set for automatic detection of lesion areas; the calculation formula is as follows:
Gray_mn = α·R_mn + β·G_mn + γ·B_mn,
wherein R_mn is the red channel information of the color image, G_mn is the green channel information, B_mn is the blue channel information, Gray_mn is the gray image, m is the width of the gray image, n is the height of the gray image, α is a constant in the range 0.25 < α < 0.3, β is a constant in the range 0.55 < β < 0.6, and γ is a constant satisfying γ = 1 − α − β;
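A minimal sketch of the preprocessing in steps 1.12 to 1.14: crop the breast information area, resize to a common size and convert RGB to gray with the weighted sum above. The crop rectangle, the 512×512 target size and the particular α, β values are assumptions chosen inside the claimed ranges.

```python
import cv2
import numpy as np

def preprocess(image, breast_roi, size=(512, 512), alpha=0.29, beta=0.58):
    """Crop the breast information area, resize, and convert to gray with
    Gray = alpha*R + beta*G + gamma*B, gamma = 1 - alpha - beta.

    image:      RGB ultrasound image (H x W x 3 uint8)
    breast_roi: (x, y, w, h) of the breast information area for this instrument
    """
    x, y, w, h = breast_roi
    cropped = cv2.resize(image[y:y + h, x:x + w], size)
    gamma = 1.0 - alpha - beta
    r, g, b = cropped[..., 0], cropped[..., 1], cropped[..., 2]  # assumes RGB channel order
    gray = alpha * r + beta * g + gamma * b
    return gray.astype(np.uint8)
```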
step two: building, initializing and training a fusion network for automatic segmentation and classification of lesion areas to obtain a trained prediction model for automatic segmentation and classification of lesion areas, and specifically comprising the following substeps:
step 2.1, building and initializing a fusion network for automatically detecting and classifying lesion areas;
wherein the fusion network for automatic detection and classification of lesion areas is referred to simply as the fusion network; the fusion network comprises a feature extraction stage and an instance segmentation and classification stage, the feature extraction stage comprising convolutional layers and a region proposal network, and the instance segmentation and classification stage comprising convolutional layers and fully connected layers;
2.2, randomly dividing the training set for automatically detecting the lesion area obtained in the step one into a training set and a verification set according to a proportion;
wherein, the training set for automatically detecting the lesion area is randomly divided into the training set and the verification set according to the proportion ranging from 2:1 to 9: 1;
2.3, inputting the training set into the fusion network, training the fusion network based on the back propagation algorithm and updating the network model; after each round of network training, inputting the verification set into the fusion network to verify its segmentation and classification accuracy; after k rounds of network training, a trained prediction model for the lesion-area automatic segmentation and classification fusion network is obtained, hereinafter referred to simply as the prediction model;
wherein k is an integer and has a value range of 0< k < ∞.
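The fusion network described above, convolutional feature extraction plus a region proposal network followed by instance segmentation and classification heads, has the structure of Mask R-CNN (the network named in the embodiment below). A minimal training sketch using torchvision's Mask R-CNN as one concrete instance of such a fusion network; the number of classes, the optimizer settings and the data-loader format are assumptions:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_fusion_network(num_classes=3):
    """num_classes = background + benign + malignant (an assumed class set)."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

def train(model, train_loader, val_loader, epochs=60, lr=5e-3):
    """Back-propagation training (step 2.3); each target holds boxes, labels, masks."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
    for epoch in range(epochs):
        model.train()
        for images, targets in train_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss = sum(model(images, targets).values())  # RPN + ROI-head losses
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # after each round, evaluate segmentation/classification accuracy on val_loader
    return model
```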
Step three: embedding the prediction model into a cloud end, namely building a fusion network at the cloud end, and carrying out real-time detection on lesion areas of all frames of breast ultrasound images of continuously scanned breast ultrasound images, wherein the method specifically comprises the following substeps:
step 3.1, building a fusion network at the cloud, and loading the prediction model obtained in the step two;
3.2, uploading the continuously scanned breast ultrasound images to a cloud end;
the continuous scanning mammary gland ultrasonic image is a continuous scanning image of the whole mammary gland area to be predicted, which is acquired by an ultrasonic instrument;
3.3, reading each frame of breast ultrasound image in the continuously scanned breast ultrasound images, and carrying out real-time detection on a lesion area of each frame of image by using a prediction model to obtain a lesion area segmentation and classification result of each frame of breast ultrasound image;
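A minimal sketch of the per-frame real-time detection in step 3.3, assuming the prediction model follows the torchvision detection interface used in the sketch above; the confidence threshold is an assumption:

```python
import torch

@torch.no_grad()
def detect_frames(model, frames, score_threshold=0.5, device="cuda"):
    """Run the loaded prediction model on every frame of the continuously
    scanned breast ultrasound image and collect per-frame segmentation and
    classification results (step 3.3)."""
    model.eval().to(device)
    results = []
    for frame in frames:                            # frame: 3xHxW float tensor in [0, 1]
        pred = model([frame.to(device)])[0]
        keep = pred["scores"] > score_threshold
        results.append({
            "boxes": pred["boxes"][keep].cpu(),     # lesion bounding boxes
            "labels": pred["labels"][keep].cpu(),   # benign / malignant class
            "masks": pred["masks"][keep].cpu(),     # lesion segmentation masks
            "scores": pred["scores"][keep].cpu(),
        })
    return results
```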
step four: defining and training a decision algorithm and obtaining corresponding algorithm parameters;
the decision algorithm constructs a plurality of decision tree models from a given training data set; each decision tree predicts an instance, and the final accurate prediction result is obtained by a voting method;
step five: predicting the accurate detection and classification result of the corresponding case of the continuously scanned breast ultrasound image through a decision algorithm, and specifically comprising the following substeps:
step 5.1, reading the lesion area segmentation and classification result of each frame of breast ultrasound image obtained in the step 3.3;
step 5.2, according to the spatial position relation of the scanning area of each frame of breast ultrasound image, taking the lesion area segmentation and classification result of each frame of breast ultrasound image read in the step 5.1 as the input of a decision algorithm to obtain the accurate detection and classification result of the corresponding case;
step six: through an image fusion and splicing method, the continuous scanning mammary gland ultrasonic image is subjected to space synthesis to generate an integral three-dimensional prediction image result of a mammary gland region, and the method specifically comprises the following substeps:
6.1, the whole mammary gland area is of a three-dimensional structure, and the continuous scanning mammary gland ultrasonic images are synthesized by multi-frame mammary gland ultrasonic images in time; thus for the entire breast area, the successive breast ultrasound images are ordered according to the scanning direction;
6.2, spatially synthesizing the continuously scanned breast ultrasound images through an image fusion and splicing method to obtain a three-dimensional image of the whole breast area;
the image fusion and splicing method is a method for splicing a plurality of images with overlapped parts, which are acquired by a multi-source channel, into a seamless high-resolution image;
6.3, adjusting the accurate detection result of the case obtained in the fifth step through projective transformation to obtain a three-dimensional coordinate corresponding to the lesion area;
6.4, visually outputting the whole three-dimensional prediction image result of the mammary gland region;
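A minimal sketch of step six under the simplifying assumption that consecutive frames are already aligned, so spatial synthesis reduces to stacking the ordered frames into a volume and lifting each frame's lesion detections to three-dimensional coordinates; a full implementation would additionally estimate the inter-frame projective transforms (for example from matched features) before compositing, as step 6.3 describes.

```python
import numpy as np

def synthesize_breast_volume(ordered_frames, frame_results):
    """Stack the frames, ordered along the scanning direction, into a volume
    and lift per-frame lesion detections into 3-D coordinates (steps 6.1-6.3).

    ordered_frames: list of H x W grayscale frames sorted by scan position
    frame_results:  per-frame detections, each with "boxes" and "labels"
    """
    volume = np.stack(ordered_frames, axis=0)       # (num_frames, H, W)
    lesions_3d = []
    for z, result in enumerate(frame_results):      # z = index along the scanning direction
        for box, label in zip(result["boxes"], result["labels"]):
            x1, y1, x2, y2 = [float(v) for v in box]
            lesions_3d.append({
                "centre_xyz": ((x1 + x2) / 2.0, (y1 + y2) / 2.0, float(z)),
                "class": int(label),
            })
    return volume, lesions_3d
```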
step seven: and feeding back the three-dimensional prediction image result to doctors and experts in related fields on line so as to provide timely consultation and treatment for patients.
Advantageous effects
Compared with the existing breast ultrasonic disease detection method, the real-time detection and decision fusion method for continuously scanning the breast ultrasonic image has the following beneficial effects:
1. the method adopts the fusion network to automatically detect, segment and classify lesion areas in breast ultrasound images, improves the accuracy and speed of lesion-area segmentation and classification, and makes the method more convenient to embed in hardware equipment or a cloud;
2. the invention provides accurate prediction of a single breast ultrasound image case based on a decision algorithm;
3. the invention provides a space fusion method for continuously scanning mammary gland ultrasonic images based on an image fusion and splicing method, which realizes the visual output of the whole three-dimensional prediction image result of a mammary gland region;
4. the method has certain application value and commercial value, and can also be applied to clinical scientific research and clinical diagnosis;
5. the method can be embedded into ultrasound instrument equipment or a cloud computing center to automatically segment, classify and predict lesion areas dynamically and in real time;
6. the method of the invention can reduce the workload of doctors and improve the speed, efficiency and precision of diagnosis.
Drawings
FIG. 1 is a schematic flow chart of a method for fusion of real-time detection and decision of a continuous-scan breast ultrasound image according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an image with labels according to an embodiment of the present invention.
Detailed Description
The following describes in detail embodiments of the method of the present invention with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
Example 1
Fig. 1 is a flowchart of a method for real-time detection and decision fusion of a continuous-scan breast ultrasound image according to an embodiment of the present invention, which specifically includes the following steps:
Step A: making a training data set for automatic detection of lesion areas, specifically: collecting a plurality of breast ultrasound images, and automatically detecting the labeled areas in the breast ultrasound images to generate corresponding labeling-area template images; based on the labeling-area template images, automatically matching and filling the labeled areas with a matching method to generate a de-labeled breast ultrasound image data set; and finally, preprocessing the de-labeled breast ultrasound image data set to generate a training set for automatic detection of lesion areas.
A.1, collecting a plurality of breast ultrasonic images; wherein, the acquired multiple breast ultrasound images comprise images with labels, and the images with labels are shown in fig. 2; in addition, the plurality of breast ultrasound images acquired include breast ultrasound images acquired from different instruments;
step A.2, generating various labeling templates for the image with the label collected in the step A.1; each labeling template is a template image corresponding to a certain label;
step A.3, for each marking template generated in step A.2, generating an image sub-region of the same size as the marking template centered in turn on each pixel of the breast ultrasound image, and calculating the correlation degree r between the image sub-region and the marking template by the following formula:
r = Σ_m Σ_n (A_mn − Ā)·(B_mn − B̄) / sqrt( [ Σ_m Σ_n (A_mn − Ā)² ]·[ Σ_m Σ_n (B_mn − B̄)² ] ),
wherein A_mn is the image sub-region matrix and B_mn is the marking template matrix; the image sub-region matrix consists of the pixel values in the image sub-region, and the marking template matrix consists of the pixel values in the marking template; m is the width of the image sub-region and n is its height; Ā is the mean value of A_mn and B̄ is the mean value of B_mn, calculated as:
Ā = (1/(m·n))·Σ_m Σ_n A_mn,   B̄ = (1/(m·n))·Σ_m Σ_n B_mn;
step A.4, setting a corresponding threshold value for each labeling template, and judging that the label exists in the image subregion when the correlation degree r of the image subregion and the labeling template is greater than the corresponding threshold value;
step A.5, traversing the breast ultrasound image by a sliding window method, and detecting whether marks exist in image sub-regions taking each pixel as a center in the image; if the label exists, recording the label type corresponding to the image sub-area and the corresponding correlation degree;
step A.6, if a certain image subregion in the breast ultrasound image corresponds to a plurality of labels, comparing the correlation degrees of different labels, and setting the label type corresponding to the maximum correlation degree as the corresponding label template type in the image subregion; generating an annotated region template image with the size consistent with that of the breast ultrasound image through the steps A.1 to A.6;
in the labeling-area template image, the pixel values inside the labeling areas are zero, and the pixel values outside the labeling areas are the same as the pixel values at the corresponding positions of the breast ultrasound image;
step A.7, based on the labeling-area template image generated in step A.6, automatically matching and filling the labeling areas in the labeling-area template image by a matching method to generate the automatically de-labeled breast ultrasound image data set; calculating the number of labeling areas in the labeling-area template image, denoting this number as k, and setting the initial labeling cycle count value i = 1;
step A.8, calculating the total number of pixel points in the i-th labeling area of the labeling-area template image, denoted S_i; for the i-th labeling area, numbering each pixel point in sequence, proceeding clockwise and inwards from the outermost pixels of the labeling area; setting the initial numbering cycle count value j = 1;
step A.9, let point p denote the j-th pixel point in the i-th labeling area; the new gray value assigned to point p is I(p), calculated by the following formula:
I(p) = [ Σ_{q∈B_ε(p)} ω(p,q)·( I(q) + ∇I(q)·(p − q) ) ] / [ Σ_{q∈B_ε(p)} ω(p,q) ],
wherein B_ε(p) is the neighborhood of pixel point p, ε is the radius of the neighborhood, point q is a pixel in the neighborhood of point p, p − q is the vector from point q to point p, ∇I(q) is the gradient value at point q, and ω(p,q) is the weight function between point q and point p, used to limit the contribution of each pixel in the neighborhood; it is calculated as:
ω(p,q)=dir(p,q)·dst(p,q)·lev(p,q),
wherein the geometric distance factor dst(p,q) = 1/‖p − q‖², the orientation factor dir(p,q) = ( (p − q)·N(p) ) / ‖p − q‖, and the level set distance factor lev(p,q) = 1/( 1 + |T(p) − T(q)| ); T(p) is the distance from point p to the edge of the labeling area, T(q) is the distance from point q to the edge of the labeling area, ‖p − q‖² is the square of the straight-line distance from point p to point q, |T(p) − T(q)| is the absolute value of T(p) − T(q), and N(p) is the normal vector direction of the labeling-area edge in the neighborhood of p;
step A.10, judging whether the numbering cycle count value j has reached S_i; if yes, jumping to step A.11; if not, setting j = j + 1 and jumping to step A.9;
a.11, judging whether a labeling cycle count value i reaches k; if so, obtaining the breast ultrasound image data set after automatic label removal; if not, making i equal to i +1, and jumping to the step A.8;
step A.12, preprocessing all images in the breast ultrasound image data set obtained in the step A.11 after automatic de-labeling; automatically intercepting and retaining breast ultrasonic images containing breast information areas aiming at the breast ultrasonic images acquired from different instruments, and storing the images into a breast ultrasonic image data set after automatic interception;
step A.13, adjusting all images in the breast ultrasonic image data set obtained by the step A.12 after automatic interception to the same size to obtain a breast ultrasonic image data set after size adjustment;
a.14, the breast ultrasound image data set obtained in the step A.13 after the size adjustment contains RGB red, green and blue three-channel color images, the RGB red, green and blue three-channel color images are converted into gray level images, and the converted gray level images form a training set for automatically detecting the lesion area; the calculation formula is as follows:
Gray_mn = α·R_mn + β·G_mn + γ·B_mn,
wherein R_mn is the red channel information of the color image, G_mn is the green channel information, B_mn is the blue channel information, Gray_mn is the gray image, m is the width of the gray image, n is the height of the gray image, α is a constant in the range 0.25 < α < 0.3, β is a constant in the range 0.55 < β < 0.6, and γ is a constant satisfying γ = 1 − α − β;
at this point, a training set for automatic detection of lesion regions is generated.
And B: and training a fusion network Mask RCNN for automatically segmenting and classifying the lesion area to obtain a well-trained prediction model which can be used for automatically segmenting and classifying the lesion area.
Step B.1, building and initializing the fusion network for automatic detection and classification of lesion areas, namely Mask RCNN, wherein a ResNet (residual network) structure is used in the feature extraction stage of Mask RCNN, and a ROIAlign layer, convolutional layers and fully connected layers are used in the instance segmentation and classification stage; in this example, pre-training parameters obtained on the COCO (Common Objects in Context) dataset are used as the initialization parameter model of the Mask RCNN network for automatic segmentation of lesion areas;
b.2, randomly dividing the training set of the automatic detection of the lesion area obtained in the step A into a training set and a verification set according to the proportion of 8: 1;
step B.3, inputting the training set into the Mask RCNN network, and obtaining the prediction model after 60 rounds of network training and parameter updating; the prediction model is then validated on the validation set.
And C: and (4) building a fusion network at the cloud end, and carrying out real-time detection on the lesion area of each frame of mammary gland ultrasonic image of the continuously scanned mammary gland ultrasonic images.
C.1, building a fusion network at the cloud end, and loading the prediction model obtained in the step B;
c.2, uploading the continuously scanned breast ultrasound images to a cloud end;
c.3, reading each frame of breast ultrasound image in the continuously scanned breast ultrasound images, and detecting the lesion area of each frame in real time with the prediction model to obtain the lesion-area segmentation and classification result of each frame; the Mask RCNN fusion network achieves a detection speed of more than 3 frames per second and a lesion-area segmentation and classification accuracy of more than 90 percent; compared with existing methods, the detection speed and precision are greatly improved, which assists doctors in rapid diagnosis and improves diagnostic efficiency.
Step D: and defining and training a decision algorithm and obtaining corresponding algorithm parameters.
D.1, defining the decision algorithm, namely the XGBoost algorithm;
d.2, preprocessing the training set of the automatic detection of the lesion area obtained in the step A, and calculating the truth values of the lesion area and the lesion type of each case in the training set of the automatic detection of the lesion area, wherein the obtained result is the training set of the decision algorithm;
and D.3, inputting the training set of the decision algorithm into the XGBoost algorithm and training it to obtain the XGBoost algorithm parameters.
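A minimal sketch of steps D.1 to D.3, assuming the per-case training data have already been aggregated into fixed-length feature vectors (for example per-frame lesion counts, mean class scores and lesion areas, ordered by the spatial position of the scanned frames); the feature layout and hyperparameters are assumptions.

```python
import numpy as np
import xgboost as xgb

def train_decision_model(case_features, case_labels):
    """Train the XGBoost decision algorithm that maps aggregated per-frame
    segmentation/classification results of a case to its final benign or
    malignant decision (steps D.1-D.3)."""
    model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    model.fit(np.asarray(case_features), np.asarray(case_labels))
    return model

# Step E usage: aggregate the per-frame results of a newly scanned case into
# the same feature layout and call model.predict() to obtain the case-level
# detection and classification result.
```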
Step E: and predicting the accurate detection and classification result of the corresponding case of the continuously scanned breast ultrasound image through a decision algorithm.
E.1, reading the segmentation and classification result of the lesion area of each frame of breast ultrasound image obtained in the step C.3;
and E.2, according to the spatial position relationship of the scanning area of each frame of breast ultrasound image, taking the lesion-area segmentation and classification result of each frame read in step E.1 as the input of XGBoost to obtain the accurate detection and classification result of the corresponding case.
Step F: and carrying out space synthesis on the continuously scanned breast ultrasonic images by an image fusion and splicing method to generate an integral three-dimensional prediction image result of the breast area.
F.1, the whole mammary gland area is of a three-dimensional structure, and the continuous scanning mammary gland ultrasonic images are synthesized in time by multi-frame mammary gland ultrasonic images; thus for the entire breast area, the successive breast ultrasound images are ordered according to the scanning direction;
f.2, spatially synthesizing the continuously scanned breast ultrasound images through an image fusion and splicing method to obtain a three-dimensional image of the whole breast area;
f.3, adjusting the accurate detection result of the case obtained in the step E through projective transformation to obtain a three-dimensional coordinate corresponding to the lesion area;
and step F.4, visually outputting the whole three-dimensional prediction image result of the mammary gland region.
Step G: and feeding back the three-dimensional prediction image result to doctors and experts in related fields on line so as to provide timely consultation and treatment for patients.
Therefore, the whole process of the real-time detection and decision fusion method for continuously scanning the breast ultrasound image is realized.
Finally, it should be noted that: although the embodiments of the present invention have been described in conjunction with the accompanying drawings, it will be apparent to those skilled in the art that various modifications may be made without departing from the principles of the invention and these are considered to fall within the scope of the invention. While the foregoing is directed to the preferred embodiment of the present invention, it is not intended that the invention be limited to the embodiment and the drawings disclosed herein. Equivalents and modifications may be made without departing from the spirit of the disclosure, which is to be considered as within the scope of the invention.
Claims (4)
1. A real-time detection and decision fusion method for continuously scanning breast ultrasound images is characterized in that: comprises the following steps:
the method comprises the following steps: the method for manufacturing the training data set for automatically detecting the lesion area specifically comprises the following substeps:
step 1.1, collecting a plurality of breast ultrasonic images;
wherein, the collected multiple breast ultrasonic images comprise images with labels; in addition, the plurality of breast ultrasound images acquired include breast ultrasound images acquired from different instruments;
step 1.2, generating various labeling templates for the image with the label collected in the step 1.1; each labeling template is a template image corresponding to a certain label;
step 1.3, for each labeling template generated in step 1.2, generating an image sub-region of the same size as the labeling template centered in turn on each pixel of the breast ultrasound image, and calculating the correlation degree r between the image sub-region and the labeling template by the following formula:
r = Σ_m Σ_n (A_mn − Ā)·(B_mn − B̄) / sqrt( [ Σ_m Σ_n (A_mn − Ā)² ]·[ Σ_m Σ_n (B_mn − B̄)² ] ),
wherein A_mn is the image sub-region matrix and B_mn is the labeling template matrix; the image sub-region matrix consists of the pixel values in the image sub-region, and the labeling template matrix consists of the pixel values in the labeling template; m is the width of the image sub-region and n is its height; Ā is the mean value of A_mn and B̄ is the mean value of B_mn, calculated as:
Ā = (1/(m·n))·Σ_m Σ_n A_mn,   B̄ = (1/(m·n))·Σ_m Σ_n B_mn;
step 1.4, setting a corresponding threshold value for each labeling template, and judging that a label exists in an image subregion when the correlation degree r of the image subregion and the labeling template is greater than the corresponding threshold value;
step 1.5, traversing the breast ultrasound image by a sliding window method, and detecting whether a label exists in an image sub-region taking each pixel as a center in the image; if the label exists, recording the label type corresponding to the image sub-area and the corresponding correlation degree;
step 1.6, if a certain image subregion in the breast ultrasound image corresponds to a plurality of labels, comparing the correlation degrees of different labels, and setting the label type corresponding to the maximum correlation degree as the corresponding label template type in the image subregion; namely, through the steps 1.1 to 1.6, a template image of the labeling area with the size consistent with that of the breast ultrasound image is generated;
in the labeling-area template image, the pixel values inside the labeling areas are zero, and the pixel values outside the labeling areas are the same as the pixel values at the corresponding positions of the breast ultrasound image;
step 1.7, based on the labeling-area template image generated in step 1.6, automatically matching and filling the labeling areas in the labeling-area template image by a matching method to generate the automatically de-labeled breast ultrasound image data set; calculating the number of labeling areas in the labeling-area template image, denoting this number as k, and setting the initial labeling cycle count value i = 1;
step 1.8, calculating the total number of pixel points in the i-th labeling area of the labeling-area template image, denoted S_i; for the i-th labeling area, numbering each pixel point in sequence, proceeding clockwise and inwards from the outermost pixels of the labeling area; setting the initial numbering cycle count value j = 1;
step 1.9, let point p denote the j-th pixel point in the i-th labeling area; the new gray value assigned to point p is I(p), calculated by the following formula:
I(p) = [ Σ_{q∈B_ε(p)} ω(p,q)·( I(q) + ∇I(q)·(p − q) ) ] / [ Σ_{q∈B_ε(p)} ω(p,q) ],
wherein B_ε(p) is the neighborhood of pixel point p, ε is the radius of the neighborhood, point q is a pixel in the neighborhood of point p, p − q is the vector from point q to point p, ∇I(q) is the gradient value at point q, and ω(p,q) is the weight function between point q and point p, used to limit the contribution of each pixel in the neighborhood; it is calculated as:
ω(p,q)=dir(p,q)·dst(p,q)·lev(p,q),
wherein the geometric distance factor dst(p,q) = 1/‖p − q‖², the orientation factor dir(p,q) = ( (p − q)·N(p) ) / ‖p − q‖, and the level set distance factor lev(p,q) = 1/( 1 + |T(p) − T(q)| ); T(p) is the distance from point p to the edge of the labeling area, T(q) is the distance from point q to the edge of the labeling area, ‖p − q‖² is the square of the straight-line distance from point p to point q, |T(p) − T(q)| is the absolute value of T(p) − T(q), and N(p) is the normal vector direction of the labeling-area edge in the neighborhood of p;
step 1.10, judging whether the numbering cycle count value j has reached S_i; if yes, jumping to step 1.11; if not, setting j = j + 1 and jumping to step 1.9;
step 1.11, judging whether a labeling cycle count value i reaches k; if so, obtaining the breast ultrasound image data set after automatic label removal; if not, making i equal to i +1, and jumping to the step 1.8;
step 1.12, preprocessing all images in the breast ultrasound image data set obtained in step 1.11 after automatic de-labeling; automatically intercepting and retaining breast ultrasonic images containing breast information areas aiming at the breast ultrasonic images acquired from different instruments, and storing the images into a breast ultrasonic image data set after automatic interception;
step 1.13, adjusting all images in the breast ultrasound image data set obtained by the step 1.12 after the automatic interception to the same size to obtain a breast ultrasound image data set after the size adjustment;
step 1.14, the breast ultrasound image data set obtained in step 1.13 after the size adjustment contains RGB red, green and blue three-channel color images, the RGB red, green and blue three-channel color images are converted into gray level images, and the converted gray level images form a training set for automatic detection of lesion areas; the calculation formula is as follows:
Gray_mn = α·R_mn + β·G_mn + γ·B_mn,
wherein R_mn is the red channel information of the color image, G_mn is the green channel information, B_mn is the blue channel information, Gray_mn is the gray image, m is the width of the gray image, n is the height of the gray image, α is a constant in the range 0.25 < α < 0.3, β is a constant in the range 0.55 < β < 0.6, and γ is a constant satisfying γ = 1 − α − β;
step two: building, initializing and training a fusion network for automatic segmentation and classification of lesion areas to obtain a trained prediction model for automatic segmentation and classification of lesion areas, and specifically comprising the following substeps:
step 2.1, building and initializing a fusion network for automatically detecting and classifying lesion areas;
wherein the fusion network for automatic detection and classification of lesion areas is referred to simply as the fusion network; the fusion network comprises a feature extraction stage and an instance segmentation and classification stage, the feature extraction stage comprising convolutional layers and a region proposal network, and the instance segmentation and classification stage comprising convolutional layers and fully connected layers;
2.2, randomly dividing the training set for automatically detecting the lesion area obtained in the step one into a training set and a verification set according to a proportion;
2.3, inputting the training set into the fusion network, training the fusion network based on the back propagation algorithm and updating the network model; after each round of network training, inputting the verification set into the fusion network to verify its segmentation and classification accuracy; after k rounds of network training, a trained prediction model for the lesion-area automatic segmentation and classification fusion network is obtained, hereinafter referred to simply as the prediction model;
step three: embedding the prediction model into a cloud end, namely building a fusion network at the cloud end, and carrying out real-time detection on lesion areas of all frames of breast ultrasound images of continuously scanned breast ultrasound images, wherein the method specifically comprises the following substeps:
step 3.1, building a fusion network at the cloud, and loading the prediction model obtained in the step two;
3.2, uploading the continuously scanned breast ultrasound images to a cloud end;
the continuous scanning mammary gland ultrasonic image is a continuous scanning image of the whole mammary gland area to be predicted, which is acquired by an ultrasonic instrument;
3.3, reading each frame of breast ultrasound image in the continuously scanned breast ultrasound images, and carrying out real-time detection on a lesion area of each frame of image by using a prediction model to obtain a lesion area segmentation and classification result of each frame of breast ultrasound image;
step four: defining and training a decision algorithm and obtaining corresponding algorithm parameters;
the decision algorithm constructs a plurality of decision tree models from a given training data set; each decision tree predicts an instance, and the final accurate prediction result is obtained by a voting method;
step five: predicting the accurate detection and classification result of the corresponding case of the continuously scanned breast ultrasound image through a decision algorithm, and specifically comprising the following substeps:
step 5.1, reading the lesion area segmentation and classification result of each frame of breast ultrasound image obtained in the step 3.3;
step 5.2, according to the spatial position relation of the scanning area of each frame of breast ultrasound image, taking the lesion area segmentation and classification result of each frame of breast ultrasound image read in the step 5.1 as the input of a decision algorithm to obtain the accurate detection and classification result of the corresponding case;
step six: through an image fusion and splicing method, the continuous scanning mammary gland ultrasonic image is subjected to space synthesis to generate an integral three-dimensional prediction image result of a mammary gland region, and the method specifically comprises the following substeps:
6.1, the whole mammary gland area is of a three-dimensional structure, and the continuous scanning mammary gland ultrasonic images are synthesized by multi-frame mammary gland ultrasonic images in time; thus for the entire breast area, the successive breast ultrasound images are ordered according to the scanning direction;
6.2, spatially synthesizing the continuously scanned breast ultrasound images through an image fusion and splicing method to obtain a three-dimensional image of the whole breast area;
the image fusion and splicing method is a method for splicing a plurality of images with overlapped parts, which are acquired by a multi-source channel, into a seamless high-resolution image;
6.3, adjusting the accurate detection result of the case obtained in the fifth step through projective transformation to obtain a three-dimensional coordinate corresponding to the lesion area;
6.4, visually outputting the whole three-dimensional prediction image result of the mammary gland region;
step seven: and feeding back the three-dimensional prediction image result to doctors and experts in related fields on line so as to provide timely consultation and treatment for patients.
2. The method of claim 1, wherein the method comprises the steps of: the first step is specifically as follows: collecting a plurality of breast ultrasonic images, and automatically detecting the marked areas in the breast ultrasonic images to generate corresponding marked area template images; based on the template image of the marked area, automatically matching and filling the marked area by using a matching method to generate an automatically de-marked breast ultrasonic image data set; and finally, carrying out image preprocessing on the breast ultrasonic image data set with the automatic label removal function to generate a training set for automatically detecting the lesion area.
3. The method of claim 1, wherein the method comprises the steps of: in step 2.2, the training set for automatically detecting the lesion area is randomly divided into the training set and the verification set according to the proportion range of 2:1 to 9: 1.
4. The method of claim 1, wherein the method comprises the steps of: in step 2.3, k is an integer and the value range is 0< k < ∞.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910088044.7A CN109859184B (en) | 2019-01-29 | 2019-01-29 | Real-time detection and decision fusion method for continuously scanning breast ultrasound image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910088044.7A CN109859184B (en) | 2019-01-29 | 2019-01-29 | Real-time detection and decision fusion method for continuously scanning breast ultrasound image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109859184A CN109859184A (en) | 2019-06-07 |
CN109859184B true CN109859184B (en) | 2020-11-17 |
Family
ID=66896814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910088044.7A Active CN109859184B (en) | 2019-01-29 | 2019-01-29 | Real-time detection and decision fusion method for continuously scanning breast ultrasound image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109859184B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110427954A (en) * | 2019-07-26 | 2019-11-08 | 中国科学院自动化研究所 | The image group feature extracting method of multizone based on tumor imaging |
CN110910404B (en) * | 2019-11-18 | 2020-08-04 | 西南交通大学 | Anti-noise data breast ultrasonic nodule segmentation method |
CN111210445A (en) * | 2020-01-07 | 2020-05-29 | 广东技术师范大学 | Prostate ultrasound image segmentation method and equipment based on Mask R-CNN |
CN111275617B (en) * | 2020-01-09 | 2023-04-07 | 云南大学 | Automatic splicing method and system for ABUS breast ultrasound panorama and storage medium |
CN111325725A (en) * | 2020-02-19 | 2020-06-23 | 京东方科技集团股份有限公司 | Retina image recognition method and device, electronic equipment and storage medium |
CN112200815B (en) * | 2020-10-12 | 2023-06-27 | 徐州医科大学附属医院 | Thyroid nodule ultrasound image segmentation method based on semantic segmentation network PSPNet |
CN112651400B (en) * | 2020-12-31 | 2022-11-15 | 重庆西山科技股份有限公司 | Stereoscopic endoscope auxiliary detection method, system, device and storage medium |
CN114444621A (en) * | 2022-04-11 | 2022-05-06 | 北京航空航天大学杭州创新研究院 | Chess situation conversion method and device based on template matching and storage medium |
CN116913479B (en) * | 2023-09-13 | 2023-12-29 | 西南石油大学 | Method and device for determining triple negative breast cancer patient implementing PMRT |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104546013A (en) * | 2013-10-24 | 2015-04-29 | Ge医疗系统环球技术有限公司 | Method and device for processing breast ultrasound image and ultrasonic machine |
CN105913086A (en) * | 2016-04-12 | 2016-08-31 | 福州大学 | Computer-aided mammary gland diagnosing method by means of characteristic weight adaptive selection |
CN106127263A (en) * | 2016-07-06 | 2016-11-16 | 中国人民解放军国防科学技术大学 | The human brain magnetic resonance image (MRI) classifying identification method extracted based on three-dimensional feature and system |
JP6924031B2 (en) * | 2016-12-28 | 2021-08-25 | 日本放送協会 | Object detectors and their programs |
CN108268510B (en) * | 2016-12-30 | 2022-01-28 | 华为技术有限公司 | Image annotation method and device |
CN108241561A (en) * | 2017-12-25 | 2018-07-03 | 深圳回收宝科技有限公司 | A kind of generation method, server and the storage medium of terminal detection model |
CN108288269A (en) * | 2018-01-24 | 2018-07-17 | 东南大学 | Bridge pad disease automatic identifying method based on unmanned plane and convolutional neural networks |
CN108665461B (en) * | 2018-05-09 | 2019-03-12 | 电子科技大学 | A kind of breast ultrasound image partition method corrected based on FCN and iteration sound shadow |
- 2019-01-29: application CN201910088044.7A filed; granted as patent CN109859184B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN109859184A (en) | 2019-06-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
EE01 | Entry into force of recordation of patent licensing contract | |
Application publication date: 20190607 Assignee: Fuge Technology (Tianjin) Co.,Ltd. Assignor: Niu Qi Contract record no.: X2021990000414 Denomination of invention: A real-time detection and decision fusion method for continuous scanning breast ultrasound images Granted publication date: 20201117 License type: Common License Record date: 20210713 |