CN113935961A - Robust breast molybdenum target MLO (mediolateral oblique) view image pectoral muscle segmentation method - Google Patents

Robust breast molybdenum target MLO (mediolateral oblique) view image pectoral muscle segmentation method

Info

Publication number
CN113935961A
Authority
CN
China
Prior art keywords
image
molybdenum target
mlo
pectoral muscle
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111153500.5A
Other languages
Chinese (zh)
Inventor
刘伟
刘承乾
仵晨阳
卫毅然
贺晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Posts and Telecommunications
Original Assignee
Xian University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Posts and Telecommunications
Priority to CN202111153500.5A
Publication of CN113935961A
Legal status: Pending


Classifications

    • G06T 7/0012 — Biomedical image inspection
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods
    • G06T 7/11 — Region-based segmentation
    • G06T 7/187 — Segmentation or edge detection involving region growing, region merging or connected component labelling
    • G06T 2207/10116 — X-ray image
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30068 — Mammography; Breast

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to breast molybdenum target (mammography) image pectoral muscle segmentation. It addresses the shortcomings of current pectoral muscle region segmentation in mediolateral oblique view molybdenum target images: neither traditional image segmentation methods nor deep learning methods can segment the pectoral muscle in computed radiography (CR) images, the algorithms are complex and hard to apply in practice, and the test data sets are small. A robust breast molybdenum target MLO (mediolateral oblique) view image pectoral muscle segmentation method is therefore provided: the pectoral muscle region contour is delineated in multiple molybdenum target MLO view images to obtain binary contour mask images; the MLO view images and their corresponding mask images are used to train a deep network model; the model segments the pectoral muscle region of a molybdenum target MLO view image to produce an initial segmentation result; and the initial result is refined by polynomial curve fitting and a filling algorithm to obtain the final pectoral muscle segmentation result.

Description

Robust breast molybdenum target MLO (mediolateral oblique) view image pectoral muscle segmentation method
Technical Field
The invention belongs to the field of breast molybdenum target image pectoral muscle segmentation, and particularly relates to a robust breast molybdenum target MLO (mediolateral oblique) view image pectoral muscle segmentation method.
Background
Mammography (the breast molybdenum target examination) is the most important means for clinical early detection of breast cancer and plays an important role in reducing breast cancer mortality. It uses two shooting views: the mediolateral oblique (MLO) view, in which the X-ray beam passes at roughly a 45-degree angle from the upper inner side to the lower outer side, and the craniocaudal (CC) view, in which the beam passes from top to bottom. Traditionally the images are read manually after acquisition, which is inefficient and prone to misjudgment; artificial intelligence (AI) techniques have therefore been introduced into breast molybdenum target image analysis, forming computer-aided diagnosis (CAD) systems that improve efficiency and reduce misjudgment.
At present, computer-aided breast diagnosis proceeds as follows: the molybdenum target image to be diagnosed passes through image preprocessing, lesion detection and segmentation, feature calculation, lesion classification and other stages, and the system outputs suspected lesion locations and benign/malignant judgments for manual reference and decision. Breast region segmentation is a key step in image preprocessing. In particular, in molybdenum target images acquired in the mediolateral oblique view, the pectoral muscle region has a texture pattern very similar to breast tissue; a breast region image that still contains the pectoral muscle interferes with lesion detection, segmentation and feature calculation, and raises the false-positive rate of computer-aided diagnosis. In addition, although the craniocaudal image contains no pectoral muscle region, dual-view quantitative analysis of breast tissue still requires the pectoral muscle region in the mediolateral oblique image to be segmented. Accurate segmentation of the pectoral muscle region in the mediolateral oblique image is therefore essential in computer-aided breast diagnosis.
Existing solutions to the difficulty of segmenting the pectoral muscle region in mediolateral oblique images fall into two groups: traditional image segmentation methods (thresholding, region growing, straight-line fitting, curve fitting, etc.) and deep learning methods that perform pectoral muscle segmentation with a trained deep model. Both still have notable shortcomings: (1) they can process only screen-film mammography (SFM) and full-field digital mammography (FFDM) images and cannot segment the pectoral muscle in computed radiography (CR) images; (2) the algorithms are complex and computationally expensive, hindering practical application; (3) the test data sets are small.
Disclosure of Invention
The invention provides a robust breast molybdenum target MLO view image pectoral muscle segmentation method to solve the following technical problems of current pectoral muscle region segmentation in mediolateral oblique molybdenum target images: neither the traditional image segmentation methods nor the deep learning methods can segment the pectoral muscle in computed radiography (CR) images, the algorithms are complex and hard to apply in practice, and the test data sets are small.
In order to achieve the purpose, the invention provides the following technical scheme:
a robust breast molybdenum target MLO visual angle image pectoral muscle segmentation method is characterized by comprising the following steps:
s1, off-line training
S1.1, sketching the pectoral muscle region contour in the multiple molybdenum target MLO visual angle images to obtain a pectoral muscle region contour binary mask image;
s1.2, performing depth network training on the molybdenum target MLO visual angle image and the corresponding pectoral muscle region contour binary mask image to obtain a depth network model;
s2, online segmentation
S2.1, carrying out pectoral muscle region segmentation on the molybdenum target MLO visual angle image through the depth network model to obtain an initial segmentation result;
and S2.2, optimizing the initial segmentation result through a polynomial curve fitting method and a filling algorithm to obtain a pectoral muscle segmentation result.
Further, in step S1.1, the pectoral muscle region contour in the molybdenum target MLO view image is delineated with LabelMe annotation software.
Further, step S1.2 specifically trains a deep network, any one of the U-Net, FCN, PSPNet and SegNet networks, on the molybdenum target MLO view images and the binary contour mask images.
Further, a step S0 precedes steps S1.1 and S1.2: determine whether the molybdenum target image is an MLO view image; if so, unify it into a left-oriented image and execute step S1; otherwise discard the image.
Whether the molybdenum target image is an MLO view image is determined as follows:
if the image type is FFDM, read the view information from the file name of the molybdenum target image for judgment; if the type is CR or SFM, obtain the view information by analyzing the image data.
Further, in step S0, obtaining the view information by analyzing the image data means: if the view information can be obtained by character recognition, determine it with the character recognition method; otherwise determine it with the image recognition method.
Further, in step S0, determination by the character recognition method specifically comprises:
Sa, character positioning:
binarize the molybdenum target image with the Otsu threshold method, analyze the 8-neighborhood connected domains of the binarized image, sort the non-breast regions by area in descending order, remove noise according to a preset threshold, and take the remaining regions as the character positioning target region;
using a connected-domain generation algorithm that combines image gray gradient with maximally stable extremal regions, preliminarily filter out non-text regions in the character positioning target region by analyzing connected-domain features; then compute mixed features based on the binary image and Gabor filtering, and classify text versus non-text regions a second time with a pre-trained shallow neural network to obtain the character positions;
Sb, character segmentation:
using the peripheral contour coordinates of the characters at the character positions, compute the character slant geometrically and correct it, binarize with the Otsu algorithm, and remove part of the background by projection to obtain a binary region containing only character content;
search for candidate segmentation points in the binary region with the left/right contour extremum method: compute the minimum difference between the leftmost and rightmost contour coordinates of the characters in the binary region, take it as the initial segmentation point, determine the optimal segmentation path of the character text with a drop-fall algorithm and a shortest-path method, and segment the text; recognize the characters with a convolutional neural network and combine them into a string; if the string contains "MLO" the image is an MLO view image, otherwise it is not.
Further, in step S0, determining the view information with the image recognition method specifically comprises:
Sc, obtain the breast mask image:
binarize the molybdenum target image by thresholding, analyze its 8-neighborhood connected domains, and keep the region with the largest area in the analysis result, which is the breast mask image;
Sd, unify the molybdenum target image into a left-oriented image:
project the breast mask image in the Y direction of a rectangular coordinate system; record the position of the maximum of the projection curve as point A, and draw through A a straight line perpendicular to the X axis, dividing the image into two parts; denote the sum of pixel gray values of the left half as LA and of the right half as RA; if LA > RA, the molybdenum target image is left-oriented, otherwise right-oriented;
Se, identify the view of the molybdenum target image:
with a linear-kernel support vector machine as classifier, recognize local binary pattern features of the molybdenum target image to obtain its view information.
Further, step S1.2 specifically trains a U-Net full convolutional network on the molybdenum target MLO view images and the binary contour mask images; in the U-Net full convolutional network, the convolution kernels are 3 × 3, the activation function is the rectified linear unit, the pooling operation is max pooling, and the loss function is the binary cross-entropy loss.
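The binary cross-entropy loss named in this claim can be written out directly. The sketch below is an illustrative numpy implementation (not part of the patent), computing the pixel-averaged loss between a predicted probability map and a binary mask:

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Pixel-averaged binary cross-entropy between a predicted probability
    map `pred` (values in (0, 1)) and a binary mask `target`
    (1 = pectoral muscle, 0 = background)."""
    pred = np.clip(pred, eps, 1.0 - eps)          # guard against log(0)
    losses = target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred)
    return float(-losses.mean())
```

A perfect prediction drives the loss toward 0, while confident wrong predictions are penalized heavily, which is why this loss pairs naturally with a sigmoid-style network output.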
Further, step S2.2 specifically comprises:
S2.2.1, binarize the initial segmentation result with the Otsu threshold method;
S2.2.2, scan the binarized image row by row, take the first pixel with gray value 0 in each row as that row's edge point, and connect the edge points of the rows to obtain an initial edge curve;
S2.2.3, convert the coordinates of each point of the initial edge curve to rectangular coordinates and fit them by least squares to obtain an optimized edge curve;
S2.2.4, fill along the optimized edge curve from top to bottom and from left to right to obtain the pectoral muscle segmentation result.
Compared with the prior art, the invention has the following beneficial effects:
1. The robust breast molybdenum target MLO (mediolateral oblique) view image pectoral muscle segmentation method can process FFDM, CR and SFM molybdenum target images, breaking the limitation of existing segmentation methods to FFDM and SFM images.
2. The invention provides an image-processing-based MLO view recognition algorithm for molybdenum target images that is applicable to SFM and CR molybdenum target images.
3. The invention segments the pectoral muscle region with a deep neural network, avoiding the design complexity of traditional image processing techniques while improving online processing speed.
4. The invention provides a post-processing method based on quadratic curve fitting to remedy the under-segmentation present in the initial result output by the deep neural network.
Drawings
FIG. 1 is a schematic view of SFM and CR breast images in accordance with an embodiment of the present invention;
fig. 2 is a schematic diagram of a left half area and a right half area obtained when a viewing angle is identified by an image identification method according to an embodiment of the present invention;
FIG. 3 is an image with under-segmentation using depth model segmentation according to an embodiment of the present invention;
FIG. 4 is the rough contour curve obtained before the pectoral muscle contour line is fitted in post-processing according to an embodiment of the present invention;
FIG. 5 is an image obtained by pectoral muscle contour fitting in post-processing according to an embodiment of the present invention;
FIG. 6 is a diagram of the final pectoral muscle segmentation result obtained after the under-segmentation filling process according to the embodiment of the present invention;
fig. 7 is an original image without segmentation.
Detailed Description
The technical solution of the present invention will be clearly and completely described below with reference to the embodiments of the present invention and the accompanying drawings, and it is obvious that the described embodiments do not limit the present invention.
The breast molybdenum target MLO view image pectoral muscle segmentation method of the invention can process molybdenum target MLO view image data of all types, including SFM, CR and FFDM, and mainly comprises two stages: offline training and online segmentation. In offline training, preprocessed and delineated molybdenum target MLO view images are fed to a deep network for training, yielding a deep learning model capable of segmenting the pectoral muscle region. In online segmentation, the molybdenum target image to be segmented is preprocessed, its pectoral muscle is segmented with the deep learning model obtained in the offline training stage, and the segmentation result is refined in a post-processing step to remedy under-segmentation and similar problems.
The segmentation method of the invention comprises the following steps:
1. off-line training phase
View judgment: judge the view of the input molybdenum target image; if it is MLO, proceed to the next step, otherwise exit without performing the subsequent steps (a CC view molybdenum target image has no pectoral muscle region). The molybdenum target image types that can be processed by the present invention include SFM, CR and FFDM, unlike conventional systems (which process only SFM and FFDM). The input of this part is a breast molybdenum target image for training, and the output is an MLO view image that retains only the breast region and has been mirrored to left orientation.
Manual labeling: the contour of the pectoral muscle region of the MLO view image is delineated using image annotation software (e.g., LabelMe); the delineation can be performed by one skilled in the art. The input of this part is the MLO view image, and the output is the binary mask image of the pectoral muscle region contour.
Training the deep model: a deep network such as U-Net is trained on the MLO view images and their binary pectoral muscle contour mask images. The input of this part is the MLO view images and their binary contour mask images, and the output is the trained deep network model.
2. On-line segmentation stage
View judgment: the same as the view judgment in the offline training stage.
Pectoral muscle segmentation: segment the pectoral muscle region of the input MLO view image with the deep network model obtained in the offline training stage. The input of this part is the MLO view image and the trained deep network model; the output is the initial segmentation result.
Post-processing: optimize the initial segmentation result. The input of this part is the initial segmentation result, and the output is the final segmentation result.
Unlike pectoral muscle segmentation methods based on traditional image processing, the present invention is based on a deep neural network. First, a training stage must be performed: a deep network model that automatically segments the pectoral muscle region is trained on a large number of labeled, delineated MLO view images. The invention can use any deep model applicable to image segmentation, such as U-Net, FCN, PSPNet and SegNet. The embodiment that follows is implemented with the U-Net network.
After the deep network model is trained, whether the molybdenum target image to be segmented is an MLO (mediolateral oblique) view image is judged first; during this judgment the image type is FFDM, CR or SFM. If it is FFDM, the view information (whether it is MLO) can be read directly from the file name; if it is CR or SFM, the image data are analyzed with image processing techniques to obtain the view information. Two schemes can be used: view character recognition, which applies optical character recognition (OCR), and image recognition, which uses a support vector machine classifier with local binary pattern texture features. After the view information is recognized, the subsequent steps are performed only if the view is MLO. While the view information is identified, the image is also mirrored left-right where needed for subsequent uniform processing.
The image whose view information has been processed is then segmented automatically with the obtained deep network model to produce an initial segmentation result. This initial result is often under-segmented and must be further optimized, so a fitted curve is obtained by polynomial fitting, and the final optimized segmentation result is produced with a filling algorithm.
The following is a detailed description of the above method:
(1) visual angle judgment
Judge the view of the input molybdenum target image; if it is MLO, proceed to the next step, otherwise exit (a CC view molybdenum target image has no pectoral muscle region).
The orientation information of an FFDM image can be obtained directly from its file name, whereas most SFM and CR images carry no view information in the file name, and the image data must be analyzed to obtain it. When judging the view, the file name is parsed first, and the view information is extracted if available; otherwise the following steps are used to obtain the view information.
As shown in fig. 1, the view text (RMLO or LMLO) in SFM and CR breast images is the key information for identifying the view, from which the view can be determined accurately. The view characters and their layout are relatively simple, so a character recognition (OCR) method can reach a high recognition rate. If the breast image contains no view characters, or contains non-view characters, the view can be identified with an image recognition method instead.
The first step of view judgment by character recognition is character positioning. Since the breast region contains no view text, text positioning only needs to search the non-breast regions. The breast image is first binarized with the Otsu threshold method, and an 8-neighborhood connected-domain analysis is performed on the binarized image; all regions except the breast region (the region with the largest area) are sorted by area in descending order, regions with small areas (noise, etc.) are removed with a preset threshold, and the remaining regions are taken as the character positioning target region. The character positioning algorithm uses a connected-domain generation algorithm that combines image gray gradient with maximally stable extremal regions, and preliminarily filters out non-text regions by analyzing connected-domain features; mixed features based on the binary image and Gabor filtering are then computed, and a pre-trained shallow neural network (BP network) classifies text versus non-text regions a second time to obtain the candidate text positions. For details of the character positioning algorithm, see the study on image Chinese character positioning and recognition under complex backgrounds in the Journal of Xi'an University of Posts and Telecommunications, 2018.
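The Otsu binarization and 8-neighborhood connected-domain analysis used here can be sketched in plain numpy (a real pipeline would use an image library; these stand-ins are illustrative only):

```python
import numpy as np
from collections import deque

def otsu_threshold(img):
    """Otsu's method for an 8-bit image: choose the threshold that
    maximizes the between-class variance of the gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def connected_regions(binary):
    """8-neighborhood connected-domain analysis: label foreground regions
    by breadth-first search and return their pixel lists, largest first."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    regions = []
    for sy, sx in zip(*np.nonzero(binary)):
        if labels[sy, sx]:
            continue
        labels[sy, sx] = len(regions) + 1
        queue, pixels = deque([(sy, sx)]), []
        while queue:
            y, x = queue.popleft()
            pixels.append((y, x))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and binary[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = labels[sy, sx]
                        queue.append((ny, nx))
        regions.append(pixels)
    regions.sort(key=len, reverse=True)   # largest region (the breast) first
    return regions
```

On a mammogram, `connected_regions(img > otsu_threshold(img))[0]` would then be the breast region, and the smaller remaining regions are the candidate text areas.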
After the characters are positioned they must be segmented for recognition. The positioning result includes the peripheral contour coordinates of the characters; from these coordinates the character slant is computed geometrically and corrected, the corrected characters are binarized with the Otsu algorithm, and part of the background is removed by projection, yielding a binary region containing only character content. Exploiting the characteristics of the view characters, candidate segmentation points are searched with the left/right contour extremum method, i.e., the minimum difference between the left and right contour coordinates of the binarized characters is computed. This value can be regarded as the position where adjacent letters stick together least, and it is taken as the initial segmentation point; starting from it, the optimal segmentation path is determined with a drop-fall algorithm and a shortest-path approach. The area ratio of each segmented character is computed; if it is below a preset threshold, some character segmentation has failed, and the procedure skips character recognition and enters the image-based view recognition part directly. A convolutional neural network (CNN) is applied for character recognition: after segmentation, the CNN recognizes each single character. There are 6 character classes to recognize: the view character classes {L, R, M, O, C} and the non-view character class X.
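As a rough illustration of finding segmentation points between letters, the stand-in below uses vertical-projection minima; this is a simplification of the left/right contour extremum search described above, not the patent's exact algorithm:

```python
import numpy as np

def candidate_split_columns(binary_text, max_splits=4):
    """Candidate character-segmentation columns of a binary text image
    (1 = ink): columns whose vertical projection is a local minimum,
    i.e. where adjacent letters touch least, ordered by ink count."""
    proj = binary_text.sum(axis=0)
    candidates = [x for x in range(1, len(proj) - 1)
                  if proj[x] < proj[x - 1] and proj[x] <= proj[x + 1]]
    candidates.sort(key=lambda x: proj[x])       # least ink first
    return candidates[:max_splits]
```

For clean, mostly upright view labels such as "LMLO", the projection valley between two letters is usually the correct cut, which is why the patent only falls back to the drop-fall and shortest-path search for harder cases.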
After the view characters are recognized they are combined into a string; if the string contains a correct view label (LMLO, RMLO, etc.), recognition succeeds, otherwise it fails. The invention uses a mature convolutional neural network model with 5 convolutional layers; the filter counts and kernel sizes of layers 1 to 5 are 96/11 × 11, 256/5 × 5, 384/3 × 3, 384/3 × 3 and 256/3 × 3 respectively, the pooling windows all have size 3, each layer is followed by a ReLU activation function, and the output layer uses a softmax function.
If any of the character positioning, segmentation or recognition steps fails, the breast image contains no view characters or contains non-view characters, and the view is identified with the image recognition method.
Given the characteristics of breast images, the left/right orientation of the breast is distinguished first: the breast image is binarized with the Otsu threshold method, an 8-neighborhood connected-domain analysis is performed on the binarized image, and only the region with the largest area, i.e., the breast mask image (breast region), is kept; the image is then projected in the Y direction, and the position of the maximum of the projection curve is A. As shown in FIG. 2, a straight line perpendicular to the X axis is drawn through A; it divides the whole image into two parts, the left half denoted LA and the right half RA. If LA > RA the breast image is left-oriented, otherwise right-oriented. Taking left orientation as the reference, right-oriented images are mirrored into left orientation, so that the four-class view problem (LMLO, LCC, RMLO, RCC) reduces to a two-class CC-versus-MLO problem (LMLO, LCC) during image recognition, simplifying the task; only the breast region image is processed during image recognition.
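A simplified sketch of the left/right decision and the mirroring to a uniform left orientation. Splitting at the image mid-column rather than at the projection maximum A is an assumption made for brevity; for a breast mask hugging one image edge both splits give the same answer:

```python
import numpy as np

def breast_orientation(mask):
    """Decide breast orientation from a binary breast mask (1 = breast).
    Simplification of the patent's rule: instead of splitting at the
    projection maximum A, split at the image mid-column and compare the
    pixel mass of the two halves (LA vs. RA)."""
    mid = mask.shape[1] // 2
    la, ra = int(mask[:, :mid].sum()), int(mask[:, mid:].sum())
    return "left" if la > ra else "right"

def unify_left(mask):
    """Mirror right-oriented masks so every image is left-oriented,
    reducing the four view classes to two (LMLO vs. LCC)."""
    return mask if breast_orientation(mask) == "left" else np.fliplr(mask)
```

Unifying orientation before classification means the SVM never has to learn mirror-image variants of the same view.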
Experimental verification shows that the image view-angle recognition part uses a linear-kernel Support Vector Machine (SVM) as the classifier, with Local Binary Pattern (LBP) image features. The view-angle type (CC or MLO) of the molybdenum target image to be processed is obtained directly from the pre-trained SVM model. Through the preceding steps, non-breast-area information of the molybdenum target image (such as text labels) has been removed, and RMLO (right medio-lateral oblique) view images have been mirrored into LMLO (left medio-lateral oblique) view images for uniform subsequent processing.
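The LBP feature mentioned above can be sketched in pure NumPy as a minimal 8-neighbour pattern with a 256-bin histogram. A production pipeline would more likely use `skimage.feature.local_binary_pattern` and an sklearn `LinearSVC`; those library choices, like this simplified neighbourhood, are assumptions for illustration.

```python
import numpy as np

def lbp_histogram(img):
    """Minimal 8-neighbour local binary pattern + normalized histogram.

    Each interior pixel is encoded by comparing its 8 neighbours with
    its own value (neighbour >= centre -> bit set); the normalized
    256-bin histogram of the codes is the texture feature that would
    be fed to the linear-kernel SVM.
    """
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                       # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()                  # normalized feature vector
```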
It should be noted that if the input is an FFDM-type molybdenum target image, the present invention likewise binarizes the mammogram by Otsu's threshold method and performs an 8-neighborhood connected-component analysis on the binarized image. Only the breast area (the region with the largest area) obtained from the connected-component analysis is kept, and RMLO-view images are converted into LMLO-view images according to the orientation information of the FFDM image for uniform subsequent processing.
(2) Pectoral muscle segmentation
The invention uses a deep-network-based pectoral muscle region segmentation method. It should be noted that one implementation of the present invention uses a deep network based on the U-Net architecture; other deep networks suitable for image segmentation, such as FCN, PSPNet and SegNet, can also be applied to the present invention.
The pectoral muscle segmentation is described here using the U-Net fully convolutional network. The architecture divides into two parts: the first performs down-sampling and is called the compression path; the second performs up-sampling and is called the expansion path. The input image size of the U-Net network is 572×572. The compression path has 4 layers, i.e., 4 down-sampling steps; each layer applies 2 convolution operations followed by one max-pooling operation, so that a 32×32 Feature Map is obtained at the fifth layer, which 2 further convolutions reduce to 28×28. The expansion path likewise has 4 layers, i.e., 4 up-sampling steps. Unlike down-sampling, each up-sampling step first doubles the Feature Map size by deconvolution while halving the number of channels, then concatenates the result with the corresponding compression-path Feature Map and applies 2 convolutions. After the 4 up-sampling steps, the final output image size is 388×388.
When concatenating, the compression-path Feature Map must first be cropped to the same size as the expansion-path Feature Map of the same layer. For example, at the 4th layer of the network, the compression-path Feature Map is 64×64×512; it is cropped to 56×56×512 before concatenation with the 56×56×512 Feature Map on the expansion path, finally yielding a 56×56×1024 Feature Map.
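The size arithmetic quoted above (572×572 input, 32×32 at the fifth layer, 388×388 output) can be verified with a few lines of Python. This is only bookkeeping for the original valid-convolution U-Net, not an implementation of the network itself.

```python
def unet_valid_conv_sizes(size=572, depth=4):
    """Trace spatial sizes through the valid-convolution U-Net:
    each 3x3 valid convolution shrinks a map by 2 pixels, 2x2
    max-pooling halves it, and deconvolution doubles it again."""
    for _ in range(depth):           # compression path
        size -= 4                    # two 3x3 valid convolutions
        size //= 2                   # 2x2 max pooling
    bottleneck_in = size             # 32 at the fifth layer
    size -= 4                        # two convolutions at the bottom -> 28
    for _ in range(depth):           # expansion path
        size *= 2                    # deconvolution doubles the size
        size -= 4                    # two convolutions after concatenation
    return bottleneck_in, size       # (32, 388) for a 572x572 input
```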
The parameters of the U-Net network used above are: convolution kernel size 3×3; activation function the Rectified Linear Unit (ReLU); pooling operation max pooling; a normalization operation is used; and the loss function is the Binary Cross-Entropy loss. The loss function H_p(Y, P) is given by:
H_p(Y, P) = -(1/N) · Σ_{i=1}^{N} [ y_i · log(p(y_i)) + (1 - y_i) · log(1 - p(y_i)) ]
y represents a set of true values of all pixel points, and P represents a set of calculated values or predicted values of all pixel points; y isiRepresents true value (value is 0 or 1), p (y)i) Representing the predicted value, and N is the number of samples.
The truth value (y) of all pixel points (the number is N) is counted through a binary cross entropy loss functioni) And a calculated or predicted value (p (y)i) Entropy value of).
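The formula above can be checked numerically with a small NumPy function; the clipping constant `eps` is a standard numerical-stability detail added here, not part of the source.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy H_p(Y, P) over N pixels:
    -(1/N) * sum(y_i*log(p(y_i)) + (1 - y_i)*log(1 - p(y_i)))."""
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
```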
Because the skip connections in the U-Net architecture fuse low-level and high-level features well, more detailed image features are preserved throughout the convolution process. Meanwhile, the simple and effective network structure reduces the number of parameters, which effectively lowers the risk of overfitting and greatly improves segmentation quality. The deep-network-based segmentation method is simple, fast, and clearly advantageous.
(3) Post-processing
In MLO-view molybdenum target images, the boundary of the pectoral muscle region is complex in shape, and the pectoral boundary easily overlaps fibrous tissue. Segmentation with the depth model may therefore exhibit the under-segmentation shown in fig. 3: the pectoral muscle region is not completely filled, and the pectoral boundary is discontinuous. To solve these problems, the present invention proposes a post-processing method based on polynomial curve fitting of the pectoral muscle edge:
a. Image binarization: the segmentation result output by the U-Net model is a grayscale image, which is binarized by Otsu's threshold method.
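Otsu's method can be sketched from scratch for 8-bit images as below; in practice `cv2.threshold` with the `THRESH_OTSU` flag would typically be used, but that library choice is an assumption.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of the gray-level histogram (8-bit sketch)."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(),
                       minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```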
b. Obtaining the pectoral muscle edge contour: the pectoral muscle region is distributed as an approximate triangle in the upper-left corner of the image (RMLO-view images having been flipped into LMLO view). Rough edge pixel points of the pectoral hypotenuse are extracted by horizontal scanning: pixels are scanned row by row, and each row keeps only the first pixel whose gray value is 0, until the last row of the image. This yields the rough edge of the pectoral region, which appears as the discontinuous curve shown in fig. 4.
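The row-by-row scan can be sketched in NumPy as follows. Skipping rows whose very first pixel is already background is an added assumption (such rows contain no pectoral pixels); the function name is illustrative.

```python
import numpy as np

def pectoral_edge_points(binary_mask):
    """Horizontal scan: in each row take the first pixel whose value
    is 0, i.e. the first background pixel to the right of the pectoral
    region in an LMLO-oriented binary mask (pectoral pixels = 1).
    Returns a list of (row, column) rough edge points."""
    pts = []
    for r, row in enumerate(np.asarray(binary_mask)):
        zeros = np.flatnonzero(row == 0)
        if zeros.size and zeros[0] > 0:   # row actually contains pectoral
            pts.append((r, int(zeros[0])))
    return pts
```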
c. Pectoral contour-line fitting: since the pectoral edges in the under-segmentation result are mostly distributed along a curve, they are fitted with the second-order polynomial function y(x, W) below using the least-squares method.
y(x, W) = ω₀ + ω₁·x + ω₂·x²
E(W) = (1/2) · Σ_{n=1}^{N} [ y(x_n, W) - t_n ]²
where x is the independent variable and W denotes the parameters ω₀, ω₁ and ω₂ of the fitting function: ω₀ is the constant term, ω₁ the first-order coefficient, and ω₂ the second-order coefficient. E(W) is the objective function, N is the number of rough edge points of the pectoral region, and x_n and t_n are the abscissa and ordinate of the n-th rough edge point.
The function y(x, W) establishes a quadratic equation: from a series of known point pairs (x, y), namely the rough edge points of the pectoral region obtained by the previous processing, a quadratic function is fitted as the contour curve of the pectoral muscle.
The objective function E(W) is established and optimized to obtain the parameter values W; once W is found, the shape of the fitted curve follows from the quadratic function.
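The least-squares step can be sketched with `np.polyfit`, which minimizes the same objective E(W) for a degree-2 polynomial; the helper names are illustrative.

```python
import numpy as np

def fit_pectoral_contour(points):
    """Least-squares fit of y(x, W) = w0 + w1*x + w2*x**2 to the rough
    edge points, i.e. the minimizer of the objective E(W).
    `points` is a sequence of (x_n, t_n) pairs; returns (w0, w1, w2)."""
    x = np.array([p[0] for p in points], dtype=float)
    t = np.array([p[1] for p in points], dtype=float)
    # np.polyfit returns the highest-order coefficient first
    w2, w1, w0 = np.polyfit(x, t, deg=2)
    return w0, w1, w2

def eval_contour(w, x):
    """Evaluate the fitted quadratic contour at x."""
    x = np.asarray(x, dtype=float)
    return w[0] + w[1] * x + w[2] * x ** 2
```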
The coordinates of each point of the pectoral contour line are converted into rectangular coordinates and fitted using the above formula, giving the result shown in fig. 5.
d. Under-segmentation filling: after the pectoral contour-line fit is obtained, filling is performed along the fitted line from top to bottom and from left to right, yielding the final pectoral muscle segmentation result shown in fig. 6; compared with the original image shown in fig. 7, the segmentation is noticeably better.
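One plausible reading of the filling step, sketched in NumPy: treat the fitted quadratic as the boundary row for each column and mark everything above it as pectoral. The orientation of the fill (rows above the curve, column by column) is an assumption for illustration, not confirmed by the source.

```python
import numpy as np

def fill_pectoral(shape, w):
    """Fill the region above the fitted contour t(x) = w0 + w1*x + w2*x**2:
    for each column x, rows 0..t(x) are marked as pectoral (clipped to
    the image bounds)."""
    h, width = shape
    mask = np.zeros(shape, dtype=np.uint8)
    for x in range(width):
        t = w[0] + w[1] * x + w[2] * x * x
        top = int(np.clip(np.floor(t), -1, h - 1))
        if top >= 0:                 # curve lies inside the image here
            mask[:top + 1, x] = 1
    return mask
```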
The above description is only an embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structural changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. A robust breast molybdenum target MLO visual angle image pectoralis muscle segmentation method is characterized by comprising the following steps:
S1, off-line training
S1.1, sketching the pectoral muscle region contour in the multiple molybdenum target MLO visual angle images to obtain a pectoral muscle region contour binary mask image;
S1.2, performing depth network training on the molybdenum target MLO visual angle image and the corresponding pectoral muscle region contour binary mask image to obtain a depth network model;
S2, online segmentation
S2.1, carrying out pectoral muscle region segmentation on the molybdenum target MLO visual angle image through the depth network model to obtain an initial segmentation result;
and S2.2, optimizing the initial segmentation result through a polynomial curve fitting method and a filling algorithm to obtain a pectoral muscle segmentation result.
2. The robust breast molybdenum target MLO visual angle image pectoral muscle segmentation method as claimed in claim 1, wherein:
in step S1.1, the step of outlining the pectoral muscle region contour in the molybdenum target MLO viewing angle image is to outline the pectoral muscle region contour in the molybdenum target MLO viewing angle image by LabelMe software.
3. The robust breast molybdenum target MLO visual angle image pectoral muscle segmentation method as claimed in claim 1, wherein:
step S1.2 is specifically to carry out deep network training on the molybdenum target MLO visual angle image and the pectoral muscle region contour binary mask image by using any one of a U-Net network, an FCN network, a PSP network and a SegNet network.
4. The robust breast molybdenum target MLO visual angle image pectoral muscle segmentation method as claimed in claim 1,
step S1.1 and step S1.2 are preceded by step S0, which is to determine whether the molybdenum target image is an MLO view image, if yes, unify the molybdenum target images into a left image, and then execute step S1, otherwise, abandon the molybdenum target image;
the specific steps for judging whether the molybdenum target image is an MLO view image are as follows:
if the type of the molybdenum target image is FFDM, reading the visual angle information from the file name information of the molybdenum target image for judgment; and if the type of the molybdenum target image is CR or SFM, obtaining visual angle information by analyzing the image data for judgment.
5. The method as claimed in claim 4, wherein in step S0, obtaining the viewing angle information by analyzing the image data specifically comprises: first attempting to obtain the viewing angle information by a character recognition method, and, if it cannot be obtained in this way, determining it by an image recognition method.
6. The method for segmenting pectoral muscles of robust breast molybdenum target MLO visual angle images as claimed in claim 5, wherein in step S0, the determination by the character recognition method specifically comprises:
Sa, character positioning
Binarizing the molybdenum target image by an Otsu threshold value method to obtain a binarized image, analyzing 8 neighborhood connected domains of the binarized image, arranging the areas of non-breast areas from large to small, removing noise according to a preset threshold value, and taking the rest areas as character positioning target areas;
using a connected domain generation algorithm based on the combination of image gray gradient and the maximum stable extremum region, and performing preliminary filtering on a non-text region in the character positioning target region by analyzing the characteristics of the connected domain; calculating mixed features based on the binary image and Gabor filtering, and performing secondary classification on the text region and the non-text region by using a pre-trained shallow neural network to obtain a character position;
Sb, character segmentation
Using the peripheral outline coordinates of the characters at the character positions, calculating the slope of the characters by adopting a geometric method, correcting the slope, binarizing by using an Otsu algorithm, and removing part of background by using a projection method to obtain a binarized area only containing character contents;
and searching candidate segmentation points in the binarization area by using a left and right contour extreme value method, calculating the minimum value of the difference between the coordinates of the leftmost contour and the rightmost contour of the characters in the binarization area only containing character content, taking the minimum value as an initial segmentation point, determining the optimal segmentation path of the character text by using a dripping algorithm and a shortest path method, segmenting the character text, identifying characters in the character text by using a convolutional neural network, combining the characters into character strings, and if the character strings contain MLO, obtaining an MLO view angle image, otherwise, obtaining a non-MLO view angle image.
7. The method for segmenting pectoral muscles of robust breast molybdenum target MLO visual angle images as claimed in claim 6, wherein in step S0, the determination of the visual angle information obtained by the image recognition method specifically comprises:
Sc, obtaining a breast mask image
Binarizing the molybdenum target image by a threshold method, analyzing 8 neighborhood connected domains of the molybdenum target image, and reserving a region with the largest area in an analysis result, namely a breast mask image;
Sd, unifying the molybdenum target image into a left image
Projecting the breast mask image in the Y direction in a rectangular coordinate system, denoting the position of the maximum of the projection curve as point A, and drawing a straight line perpendicular to the X axis through point A; the straight line divides the image into two parts, the accumulated gray value of the pixels in the left half being denoted LA and that of the right half RA; if LA > RA, the molybdenum target image is left-oriented, otherwise right-oriented;
Se, identifying the visual angle of the molybdenum target image
And (3) taking a linear kernel support vector machine as a classifier, and performing feature recognition on the molybdenum target image through a local binary pattern to obtain the visual angle information of the molybdenum target image.
8. The robust breast molybdenum target MLO visual angle image pectoral muscle segmentation method as claimed in claim 7,
step S1.2 is specifically to use a U-NET full convolution network to carry out deep network training on the molybdenum target MLO visual angle image and the pectoral muscle region contour binary mask image, wherein in the U-NET full convolution network, a convolution kernel is 3x3, an activation function is a correction linear unit, the pooling operation is maximum pooling, and a loss function uses a binary cross entropy loss function.
9. The robust breast molybdenum target MLO visual angle image pectoral muscle segmentation method as claimed in claim 8, wherein the step S2.2 specifically comprises:
S2.2.1, performing binarization processing on the initial segmentation result by an Otsu threshold method;
S2.2.2, scanning the pixel points of the binarized image line by line, taking the first pixel point with gray value 0 in each line as the edge point of that line, and connecting the edge points of all lines to obtain an initial edge curve;
S2.2.3, converting the coordinates of each point in the initial edge curve into rectangular coordinates, and fitting by the least-squares method to obtain an optimized edge curve;
S2.2.4, filling along the optimized edge curve from top to bottom and from left to right in sequence to obtain the pectoral muscle segmentation result.
CN202111153500.5A 2021-09-29 2021-09-29 Robust breast molybdenum target MLO (Multi-level object) visual angle image pectoral muscle segmentation method Pending CN113935961A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111153500.5A CN113935961A (en) 2021-09-29 2021-09-29 Robust breast molybdenum target MLO (Multi-level object) visual angle image pectoral muscle segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111153500.5A CN113935961A (en) 2021-09-29 2021-09-29 Robust breast molybdenum target MLO (Multi-level object) visual angle image pectoral muscle segmentation method

Publications (1)

Publication Number Publication Date
CN113935961A true CN113935961A (en) 2022-01-14

Family

ID=79277255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111153500.5A Pending CN113935961A (en) 2021-09-29 2021-09-29 Robust breast molybdenum target MLO (Multi-level object) visual angle image pectoral muscle segmentation method

Country Status (1)

Country Link
CN (1) CN113935961A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330756A (en) * 2022-10-11 2022-11-11 天津恒宇医疗科技有限公司 Light and shadow feature-based guide wire identification method and system in OCT image
CN116363155A (en) * 2023-05-25 2023-06-30 南方医科大学南方医院 Intelligent pectoral large muscle region segmentation method, device and storage medium
CN116363155B (en) * 2023-05-25 2023-08-15 南方医科大学南方医院 Intelligent pectoral large muscle region segmentation method, device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination