WO2020077962A1 - A Method and Device for Breast Image Recognition (一种乳腺影像识别的方法及装置) - Google Patents

一种乳腺影像识别的方法及装置 (A Method and Device for Breast Image Recognition)

Info

Publication number
WO2020077962A1
WO2020077962A1 · PCT/CN2019/082690 · CN2019082690W
Authority
WO
WIPO (PCT)
Prior art keywords
breast
image
convolution
feature
layer
Prior art date
Application number
PCT/CN2019/082690
Other languages
English (en)
French (fr)
Inventor
魏子昆
杨忠程
丁泽震
Original Assignee
杭州依图医疗技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州依图医疗技术有限公司
Publication of WO2020077962A1

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING > G06V 10/00 Arrangements for image or video recognition or understanding > G06V 10/20 Image preprocessing > G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F 18/00 Pattern recognition > G06F 18/20 Analysing > G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation > G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F 18/00 Pattern recognition > G06F 18/20 Analysing > G06F 18/24 Classification techniques > G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks > G06N 3/04 Architecture, e.g. interconnection topology > G06N 3/045 Combinations of networks
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING > G06V 10/00 Arrangements for image or video recognition or understanding > G06V 10/40 Extraction of image or video features > G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features > G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Definitions

  • Embodiments of the present invention relate to the field of machine learning technology, and in particular, to a method and device for breast image recognition.
  • Breast imaging can use low-dose X-rays to examine the human breast. It can detect breast tumors, cysts, and other breast lesions, which helps detect breast cancer early and reduce its mortality.
  • Breast imaging is an effective detection method for diagnosing a variety of female breast diseases. Its most important use, however, is breast cancer screening, especially for early breast cancer. Effectively detecting the early manifestations of breast cancer on a breast image is therefore of great help to the doctor.
  • In the prior art, the doctor diagnoses the breast image based on personal experience. This method is inefficient and subjective.
  • Embodiments of the present invention provide a method and a device for breast image recognition, which solve the problems in the prior art that judging breast images from doctor experience is inefficient and subjective, and that accurate results are difficult to obtain.
  • An embodiment of the present invention provides a method for breast image recognition, including:
  • according to the breast image, determining the region of interest (ROI) of the breast lesion in the breast image and the gland classification of the breast;
  • determining the breast lesion signs of the ROI according to the ROI; and
  • determining the classification of the breast image according to the breast lesion signs and the gland classification.
  • Optionally, determining the breast lesion signs of the ROI according to the ROI includes:
  • The ROI is input to the first feature extraction module to determine the feature image of the ROI; the first feature extraction module includes K convolution modules, and each of the K convolution modules includes, in sequence, a first convolution layer and a second convolution layer. The number of feature images output by the first convolution layer is less than the number of feature images input to it; the number of feature images output by the second convolution layer is greater than the number output by the first convolution layer; K is greater than 0.
  • The feature image of the ROI is input to the classification module to determine the confidence of the breast lesion signs of the ROI.
  • Optionally, determining the grade of the breast image according to the breast lesion signs of the ROI and the gland classification of the breast includes:
  • determining the classification of the breast image.
  • Optionally, determining the region of interest (ROI) of the breast lesion in the breast image according to the breast image includes:
  • determining the coordinates of the breast lesion in the breast image according to the breast image;
  • extending outward from those coordinates by a first preset distance, the preset distance being a preset multiple of the radius of the breast lesion;
  • if the radius of the breast lesion is greater than a second preset distance, enlarging the first preset distance by a preset multiple, the second preset distance being less than or equal to the first preset distance.
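As a concrete sketch of this ROI construction (a hypothetical helper, not code from the patent; the 1.25x multiple follows the example given later in the description):

```python
def lesion_roi(cx, cy, radius, img_w, img_h, multiple=1.25):
    """Box extended around the lesion center by `multiple` times the
    lesion radius, clamped to the image bounds."""
    half = radius * multiple
    x0 = max(0, int(cx - half))
    y0 = max(0, int(cy - half))
    x1 = min(img_w, int(cx + half))
    y1 = min(img_h, int(cy + half))
    return x0, y0, x1, y1

print(lesion_roi(100, 100, 40, 4096, 4096))  # (50, 50, 150, 150)
```

Near an image border the box is simply clipped; whether the patent instead shifts the box inward is not specified.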
  • Optionally, the first feature extraction module further includes a down-sampling module; the down-sampling module includes the first convolution layer, the second convolution layer, a pooling layer, and a third convolution layer. Determining the feature image of the ROI through the first feature extraction module includes:
  • combining the first feature image and the second feature image to determine the feature image output by the down-sampling module.
  • Optionally, the first feature extraction module further includes a first convolution module located before the K convolution modules; inputting the breast image to the first feature extraction module includes inputting it first to the first convolution module.
  • Optionally, the first convolution module includes a convolution layer, a BN layer, a ReLU layer, and a pooling layer;
  • the size of its convolution kernel is larger than the sizes of the convolution kernels in the K convolution modules;
  • alternatively, the first convolution module includes multiple consecutive convolution layers, a BN layer, a ReLU layer, and a pooling layer, and the size of the convolution kernel of the first convolution module is equal to the size of the largest convolution kernel in the K convolution modules.
  • An embodiment of the present invention provides a device for breast image recognition, including:
  • an acquisition unit, configured to acquire a breast image;
  • a processing unit, configured to determine the region of interest (ROI) of the breast lesion in the breast image and the gland classification of the breast based on the breast image; determine the breast lesion signs of the ROI according to the ROI; and determine the grade of the breast image according to the breast lesion signs of the ROI and the gland classification of the breast.
  • The processing unit is specifically configured to:
  • input the ROI to a feature extraction module, trained on breast images with marked breast lesion areas, to determine the feature image of the ROI;
  • the feature extraction module includes N convolution modules; each of the N convolution modules includes a first convolution layer and a second convolution layer in sequence; the number of feature images output by the first convolution layer is less than the number of feature images input to it; the number of feature images output by the second convolution layer is greater than the number output by the first convolution layer; N is greater than 0. The feature image of the ROI is input to the classification module to determine the confidence of the breast lesion signs of the ROI.
  • An embodiment of the present invention provides a computing device, including at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform any of the steps of the method described above.
  • An embodiment of the present invention provides a computer-readable storage medium storing a computer program executable by a computing device; when the program runs on the computing device, it causes the computing device to execute any of the above method steps.
  • Because the convolutional neural network model extracts the feature images of the breast image and performs recognition on each feature image, the gland type of the breast, the breast lesions, and the breast lesion signs can be identified quickly, which improves the accuracy of breast classification.
  • Furthermore, the number of channels output by the first convolution layer is reduced, and the second convolution layer then increases the channel count back to the number of channels input to the first convolution layer. The convolution process thus effectively retains the useful information in the image while reducing the number of parameters, improving the effectiveness of feature extraction and thereby the accuracy of breast grading in breast images.
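The parameter saving can be checked with simple arithmetic. The sketch below (illustrative channel counts, not taken from the patent) compares two plain 3*3 convolutions at 64 channels against a module that first halves the channel count and then restores it:

```python
def conv_params(c_in, k, c_out):
    # Parameter count of a k*k convolution layer, ignoring bias terms.
    return c_in * k * k * c_out

c = 64
plain = conv_params(c, 3, c) + conv_params(c, 3, c)                 # 64->64->64
bottleneck = conv_params(c, 3, c // 2) + conv_params(c // 2, 3, c)  # 64->32->64
print(plain, bottleneck)  # 73728 36864: about half the parameters
```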
  • FIG. 1a is a schematic diagram of a breast image provided by an embodiment of the present invention.
  • FIG. 1b is a schematic diagram of a breast image provided by an embodiment of the present invention.
  • FIG. 1c is a schematic diagram of a breast image provided by an embodiment of the present invention.
  • FIG. 1d is a schematic diagram of a breast image provided by an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a method for identifying breast lesions by breast imaging according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a breast image sign recognition provided by an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a breast image recognition provided by an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a device for breast image recognition according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a computing device according to an embodiment of the present invention.
  • In the following, a breast X-ray image is taken as an example for exemplary description; other image types are handled similarly and will not be repeated here.
  • Mammography uses low-dose (about 0.7 mSv) X-rays to examine the breasts of humans (mainly women). It can detect breast tumors, cysts, and other breast lesions, which helps detect breast cancer early and reduce its mortality rate. Some countries encourage older women (generally over 45) to undergo mammography regularly (at intervals ranging from one to five years) to screen for early breast cancer.
  • A breast examination generally includes four X-ray images: the two projection positions (craniocaudal, CC; mediolateral oblique, MLO) of each of the two breasts, as shown in Figures 1a, 1b, 1c, and 1d.
  • In general, the purpose of breast screening is to prevent breast cancer, so when a breast lesion is found, doctors often hope to be able to judge its malignancy.
  • The diagnosis of breast lesions on images is often based on the detection of breast lesion signs. These signs are generally divided into calcification, mass/asymmetry, and structural distortion; for the same breast lesion, several signs may exist at the same time.
  • Existing methods generally fall into two categories. The first uses graphics-based methods to try to extract basic features, such as calcification and mass signs, from the image.
  • This approach is simple, but it is difficult for it to obtain the semantic information of breast lesions, so its extraction accuracy is poor and it is easily disturbed by benign signs of similar appearance; its robustness is also poor.
  • The second uses unsupervised methods to let the machine extract features of breast lesions from the image, but these features lack actual semantic information, it is difficult for doctors to make a diagnosis based on them, and such signs have little medical value.
  • In addition, the prior art often detects only a single type of breast lesion, such as calcifications or masses, and cannot detect multiple lesion types simultaneously, so its range of application is narrow.
  • Moreover, the image-based primary-feature method is relatively simple, and its detection accuracy is poor.
  • An embodiment of the present invention provides a method for breast image recognition, as shown in FIG. 2, including:
  • Step 201: Obtain a breast image.
  • Step 202: Determine the region of interest (ROI) of the breast lesion in the breast image and the gland classification of the breast based on the breast image.
  • Step 203: Determine the breast lesion signs of the ROI according to the ROI.
  • Step 204: Determine the classification of the breast image according to the breast lesion signs of the ROI and the gland classification of the breast.
  • An embodiment of the present invention provides a method for breast image recognition, as shown in FIG. 3, including the following steps:
  • Step 301: Obtain a breast image.
  • Step 302: Input the breast image into the second feature extraction module to obtain feature images of different sizes of the breast image.
  • The second feature extraction module includes N convolution modules, which are down-sampling convolution blocks and/or up-sampling convolution blocks; the feature images extracted by the different down-sampling or up-sampling convolution blocks have different sizes.
  • Each of the N convolution modules includes a first convolution layer and a second convolution layer; the number of feature images output by the first convolution layer is less than the number of feature images input to it; the number of feature images output by the second convolution layer is greater than the number output by the first convolution layer; N is greater than 0.
  • For example, the second feature extraction module may include three down-sampling convolution blocks.
  • Each convolution module may include a first convolution layer and a second convolution layer.
  • The first convolution layer includes a convolution layer, a batch normalization (BN) layer connected to the convolution layer, and an activation function layer connected to the BN layer.
  • The steps of passing a feature image through a convolution module may include:
  • Step 1: Input the feature image received by the convolution module into the first convolution layer to obtain the first feature image.
  • The convolution kernel of the first convolution layer may be N1*m*m*N2, where N1 is the number of channels of the feature image input to the convolution module and N2 is the number of channels of the first feature image.
  • Step 2: Input the first feature image into the second convolution layer to obtain the second feature image.
  • The convolution kernel of the second convolution layer may be N2*m*m*N3, where N3 is the number of channels of the second feature image; N3 > N2.
  • Step 3: Combine the feature image input to the convolution module with the second feature image, and determine the result as the feature image output by the convolution module.
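Steps 1 to 3 amount to the following channel bookkeeping (a pure shape sketch with illustrative channel counts; "combining" in step 3 is modeled here as channel concatenation, which is one plausible reading):

```python
def conv_module_channels(n1, n2, n3):
    """Track channel counts through the module: N1 -> first conv -> N2
    (N2 < N1) -> second conv -> N3 (N3 > N2), then merge the module
    input with the second feature image by channel concatenation."""
    assert n2 < n1 and n3 > n2
    first_out = n2             # first convolution layer output channels
    second_out = n3            # second convolution layer output channels
    merged = n1 + second_out   # module input concatenated with second feature image
    return first_out, second_out, merged

print(conv_module_channels(64, 32, 64))  # (32, 64, 128)
```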
  • The method described above for determining the feature image corresponding to the breast image is only one possible implementation. In other implementations, the feature image may be determined by other methods, which is not specifically limited.
  • The activation function in the embodiment of the present invention may be any of several activation functions, for example a Rectified Linear Unit (ReLU), which is not specifically limited.
  • The second feature extraction module in the embodiment of the present invention may be a feature extraction module in a two-dimensional (2D) convolutional neural network.
  • The size of the convolution kernel of the first convolution layer can be m*m, and the size of the convolution kernel of the second convolution layer can be n*n; m and n can be the same or different, which is not limited here; m and n are integers greater than or equal to 1.
  • The number of feature images output by the first convolution layer is less than the number of feature images input to the first convolution layer; the number of feature images output by the second convolution layer is greater than the number output by the first convolution layer.
  • Optionally, a third convolution layer is further included between the first convolution layer and the second convolution layer; the feature image input to the third convolution layer is the image output by the first convolution layer, and the feature image output by the third convolution layer is the image input to the second convolution layer.
  • The size of the convolution kernel of the third convolution layer may be k*k; k may be the same as m or n, or different, which is not limited herein.
  • For example, the size of the convolution kernel of the first convolution layer is 3*3; the size of the convolution kernel of the second convolution layer is 3*3; and the size of the convolution kernel of the third convolution layer is 1*1.
  • In this way, the receptive field of feature extraction can be effectively enlarged, which helps improve the accuracy of breast image recognition.
  • The feature images of different sizes may be feature images with different numbers of pixels; for example, a 500×500-pixel feature image and a 1000×1000-pixel feature image are feature images of different sizes.
  • Optionally, a pre-trained breast lesion detection model is used to extract feature images of different sizes from the breast image.
  • The model is determined by training a 2D convolutional neural network on multiple labeled breast images.
  • The image is scaled to a specific size so that the pixel scale in each direction corresponds to the same actual length.
  • Optionally, the second feature extraction module includes N/2 down-sampling convolution blocks and N/2 up-sampling convolution blocks. Acquiring feature images of different sizes of the breast image includes:
  • from the first feature image output by the N/2-th down-sampling convolution block, sequentially extracting N/2 second feature images of the breast image through the N/2 up-sampling convolution blocks, the second feature images extracted by each up-sampling convolution block having different sizes;
  • determining N feature images of different sizes of the breast image.
  • Optionally, a feature preprocessing module is further included before the second feature extraction module; the feature preprocessing module includes a convolution layer, a BN layer, a ReLU layer, and a pooling layer. The size of the convolution kernel of the feature preprocessing module is larger than the size of any convolution kernel in the N convolution modules.
  • For example, the convolution kernel of this convolution layer may be 5*5 with a stride of 2 pixels.
  • The pooling layer is 2*2 max pooling.
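For reference, 2*2 max pooling keeps the maximum of each non-overlapping 2*2 block, halving both spatial dimensions (a minimal pure-Python sketch):

```python
def max_pool_2x2(img):
    """2x2 max pooling with stride 2 on a 2D list; assumes even dimensions."""
    h, w = len(img), len(img[0])
    return [
        [max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
         for j in range(0, w, 2)]
        for i in range(0, h, 2)
    ]

print(max_pool_2x2([[1, 2, 5, 0],
                    [3, 4, 1, 1],
                    [0, 0, 9, 2],
                    [7, 0, 3, 3]]))  # [[4, 5], [7, 9]]
```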
  • Alternatively, the feature preprocessing module includes multiple consecutive convolution layers, a BN layer, a ReLU layer, and a pooling layer; the size of the convolution kernel of the feature preprocessing module is equal to the size of the largest convolution kernel in the N convolution modules.
  • The step of passing the image through the feature preprocessing module may include: inputting the breast image to the feature preprocessing module to obtain a preprocessed feature image, and using the preprocessed feature image as the input of the second feature extraction module.
  • Step 303: For any one of the feature images of different sizes of the breast image, determine the breast lesion identification frames from that feature image.
  • Optionally, a pre-trained breast lesion detection model is used to determine the breast lesion identification frames from the feature image.
  • The breast lesion detection model is determined by training a 2D convolutional neural network on multiple breast images with marked breast lesions.
  • Because the area framed by a breast lesion identification frame determined from the feature image does not necessarily contain a breast lesion, each identification frame needs to be screened according to its breast lesion probability: frames whose probability is less than a preset threshold are deleted, where the breast lesion probability is the probability that the framed area is a breast lesion.
  • Step 304: Determine the breast lesions of the breast image according to the breast lesion identification frames determined from each feature image.
  • The remaining identification frames are output as the breast lesions in the breast image.
  • The output breast lesion parameters include the center coordinates of the breast lesion and the diameter of the breast lesion, where the center coordinates are the center of the identification frame and the diameter is the distance from the center of the identification frame to one of its sides.
  • In this way, both large and small breast lesions can be detected, which improves the precision of breast lesion detection.
  • Compared with manual detection, the method of automatically detecting breast lesions in the present application effectively improves detection efficiency.
  • Because the identification frames determined from different feature images may include multiple frames corresponding to the same breast lesion, directly counting lesions by the number of identification frames would produce a large deviation. Therefore, each feature image is converted to the same size and aligned, the breast lesion identification frames determined from each feature image are screened, and the screened frames are determined as the breast lesions in the breast image.
  • Optionally, the breast image includes images of different breasts at different projection positions; inputting the breast image to the second feature extraction module includes:
  • determining the breast lesion identification frame from the feature image, including:
  • deleting the first breast lesion identification frame.
  • Step 1: Obtain breast images as training samples.
  • The acquired breast images can be used directly as training samples, or they can be augmented to expand the volume of training data. Augmentation operations include, but are not limited to: random translation up, down, left, and right by a set number of pixels (e.g. 0-20 pixels), random rotation by a set angle (e.g. -15 to 15 degrees), and random zoom by a set factor (e.g. 0.85-1.15 times).
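Sampling one set of augmentation parameters from the example ranges above can be sketched as follows (the application of the transforms themselves is omitted; the function and field names are illustrative, not from the patent):

```python
import random

def sample_augmentation(rng):
    """Draw one set of augmentation parameters from the example ranges:
    0-20 px translation, -15 to 15 degree rotation, 0.85-1.15x zoom."""
    return {
        "shift_x": rng.randint(-20, 20),
        "shift_y": rng.randint(-20, 20),
        "rotation_deg": rng.uniform(-15.0, 15.0),
        "zoom": rng.uniform(0.85, 1.15),
    }

params = sample_augmentation(random.Random(0))
assert -20 <= params["shift_x"] <= 20 and 0.85 <= params["zoom"] <= 1.15
```

Seeding the generator makes an augmentation run reproducible, which helps when comparing training configurations.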
  • Step 2: Manually mark the breast lesions in the training samples.
  • The breast lesions in the training samples can be marked by doctors or other professionals; the marked content includes the center coordinates and the diameter of each breast lesion. Specifically, multiple doctors can mark the lesions, the final lesions and their parameters being determined by combining the marks through majority voting, and the results are saved in the form of a mask.
  • Manual labeling and augmentation can be performed in either order: the lesions can be marked first and the labeled samples then augmented, or the samples can be augmented first and the augmented samples then manually marked.
  • Step 3: Input the training samples into the convolutional neural network corresponding to the second feature extraction module for training, and determine the breast lesion detection model.
  • Specifically, the structure of the convolutional neural network includes an input layer, down-sampling convolution blocks, up-sampling convolution blocks, a target detection network, and an output layer. The training samples are preprocessed and input into the convolutional neural network; a loss function is calculated between the output breast lesions and the mask images of the pre-labeled training samples, and the network is updated iteratively using the back-propagation algorithm and the SGD optimization algorithm to determine the breast lesion detection model.
  • Using the breast lesion detection model determined by the above training, the process of extracting feature images of different sizes from a breast image includes the following steps:
  • Step 1: The breast image is passed successively through the N/2 down-sampling convolution blocks to extract N/2 first feature images.
  • The size of the first feature image extracted by each down-sampling convolution block is different, and N/2 is greater than 0.
  • The down-sampling convolution block includes a first convolution layer, a second convolution layer, a group connection layer, a front-back connection layer, and a down-sampling layer.
  • Step 2: The first feature image output by the N/2-th down-sampling convolution block is passed sequentially through the N/2 up-sampling convolution blocks to extract N/2 second feature images of the breast image.
  • The size of the second feature image extracted by each up-sampling convolution block is different.
  • The up-sampling convolution block includes a convolution layer, a group connection layer, a front-back connection layer, an up-sampling layer, and a synthesis connection layer.
  • The convolution layer includes a convolution operation, a batch normalization layer, and a ReLU layer.
  • Step 3: After combining the first feature images and the second feature images of the same size, N/2 feature images of different sizes of the breast image are determined.
  • Specifically, the first feature image and the second feature image of the same size are combined through the up-sampling convolution block to determine the feature images of different sizes.
  • When combining, the channels of the first feature image and the second feature image are concatenated; the spatial size of the merged feature image is the same as that of the first and second feature images.
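This merge step can be sketched as shape bookkeeping for channel concatenation (shapes given as (channels, height, width) tuples; illustrative only):

```python
def merge_channels(shape_a, shape_b):
    """Concatenate two feature maps along the channel axis.
    Spatial sizes must match; the merged map keeps that spatial size."""
    ca, ha, wa = shape_a
    cb, hb, wb = shape_b
    assert (ha, wa) == (hb, wb), "spatial sizes must match before merging"
    return (ca + cb, ha, wa)

print(merge_channels((64, 256, 256), (64, 256, 256)))  # (128, 256, 256)
```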
  • Using the breast lesion detection model determined by the above training, the process of determining the breast lesion identification frame from a feature image includes the following steps:
  • Step 1: For any pixel in the feature image, take the pixel as the center and extend outward to determine the first area.
  • Step 2: Set multiple preset frames in the first area according to preset rules.
  • The preset frames can be set to various shapes.
  • The preset rule may be that the center of a preset frame coincides with the center of the first area, or that a corner of the preset frame coincides with a corner of the first area, and so on.
  • One way to place the preset frames for breast lesions is to treat each pixel of each feature map as an anchor point and to set multiple preset frames with different aspect ratios on each anchor point.
  • A convolution over the feature map predicts a coordinate offset, a size offset, and a confidence, and the identification frame is determined based on them.
  • Step 3: For any preset frame, predict the position deviation of the preset frame from the first area.
  • Step 4: Adjust the preset frame according to the position deviation to determine the breast lesion identification frame, and predict the breast lesion probability of the identification frame.
  • The breast lesion probability is the probability that the area selected by the identification frame is a breast lesion.
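Steps 3 and 4 can be sketched with a common anchor-box decoding convention (the patent does not specify the exact offset parameterization, so the center-shift/log-scale form below is an assumption):

```python
import math

def decode_box(anchor, offsets):
    """Adjust a preset frame (cx, cy, w, h) by predicted offsets
    (dx, dy, dw, dh): shift the center by dx*w and dy*h, and scale
    the size by exp(dw) and exp(dh)."""
    cx, cy, w, h = anchor
    dx, dy, dw, dh = offsets
    return (cx + dx * w, cy + dy * h, w * math.exp(dw), h * math.exp(dh))

box = decode_box((100.0, 100.0, 50.0, 50.0), (0.1, -0.1, 0.0, 0.0))
# center shifted to about (105, 95); size unchanged
```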
  • The specific training process may include: inputting the training data images into the above convolutional neural network for calculation.
  • During training, multiple images of breast lesions with different window widths and window levels are introduced.
  • When computing the loss, the prediction frames with the highest confidence and the prediction frames that overlap most with the training samples are selected.
  • The loss function is the weighted sum of two terms: the cross-entropy between the confidence of the prediction frame and the sample label, and the loss between the labeled breast lesion and the offset of the prediction frame for the training sample.
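Such a weighted two-term loss can be sketched as below (binary cross-entropy for the confidence plus an L1 term for the offsets; the exact regression loss and the weight `w` are not specified in the text, so they are assumptions):

```python
import math

def detection_loss(conf_pred, conf_label, offset_pred, offset_true, w=1.0):
    """Weighted sum of a confidence cross-entropy term and an offset
    regression term (L1 assumed for illustration)."""
    eps = 1e-7
    p = min(max(conf_pred, eps), 1 - eps)  # clamp to avoid log(0)
    ce = -(conf_label * math.log(p) + (1 - conf_label) * math.log(1 - p))
    reg = sum(abs(a - b) for a, b in zip(offset_pred, offset_true))
    return ce + w * reg

loss = detection_loss(0.9, 1, [0.1, 0.1, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0])
assert loss > 0.0
```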
  • The training optimization algorithm uses SGD with momentum and step decay.
  • The input image is preprocessed to improve the effect of feature extraction.
  • Optionally, acquiring the breast image includes:
  • Step 1: Determine the binarized image of the breast image using Gaussian filtering.
  • Step 2: Obtain the connected regions of the binarized image, and take the region of the breast image corresponding to the connected region with the largest area as the segmented breast image.
  • Step 3: Add the segmented breast image to a preset image template to generate a preprocessed breast image, and use the preprocessed breast image as the breast image input to the second feature extraction module.
  • The input of the preprocessing module is a breast image saved in DICOM format.
  • Preprocessing can include gland segmentation and image normalization. The main purpose of gland segmentation is to extract the breast region from the input image and remove unrelated interference; image normalization converts images into a unified format. Specifically:
  • The binarization threshold can be obtained by maximizing the between-class variance of the grayscale histogram of the image (Otsu's method).
  • Independent region blocks can be obtained from the binarized result by flood filling, and the area of each block is counted; the area of the image corresponding to the largest block is taken as the segmented breast image.
  • The preset image template may be a square image with a black background; specifically, the segmented breast image may be padded into a 1:1 square image by adding a black border.
  • The output breast image can then be rescaled, for example interpolated to 4096×4096 pixels.
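The "largest region block" step can be sketched on a small binary mask with a breadth-first flood fill (4-connectivity is an assumption; the patent does not specify it):

```python
from collections import deque

def largest_component(mask):
    """Return the set of (row, col) cells in the largest 4-connected
    region of 1s in a binary mask, found by flood filling."""
    h, w = len(mask), len(mask[0])
    seen, best = set(), set()
    for i in range(h):
        for j in range(w):
            if mask[i][j] == 1 and (i, j) not in seen:
                comp, queue = set(), deque([(i, j)])
                seen.add((i, j))
                while queue:
                    r, c = queue.popleft()
                    comp.add((r, c))
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                        if 0 <= nr < h and 0 <= nc < w and \
                           mask[nr][nc] == 1 and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            queue.append((nr, nc))
                if len(comp) > len(best):
                    best = comp
    return best

mask = [[1, 0, 0],
        [1, 0, 1],
        [1, 0, 0]]
print(len(largest_component(mask)))  # 3: the left column, not the lone pixel
```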
  • the window width and position of the mammary gland can be adjusted to obtain a better identification effect of mammary gland image recognition.
  • Before inputting the breast image to the second feature extraction module, the method further includes:
  • using the breast image in the picture format corresponding to the at least one set of window widths and window levels as the breast image input to the second feature extraction module.
  • The Dicom image can be converted into a png image through three sets of window widths and window levels.
  • The first set has a window width of 4000 and a window level of 2000; the second set has a window width of 1000 and a window level of 2000;
  • the third set has a window width of 1500 and a window level of 1500.
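The window width/level conversion can be sketched with the standard linear windowing formula; the function name and the 8-bit output range are illustrative assumptions, while the three (width, level) pairs come from the text.

```python
import numpy as np

def apply_window(pixels, width, level):
    """Map raw intensities to 0-255 using a window width/level pair,
    as when converting a Dicom image to an 8-bit png."""
    lo = level - width / 2.0
    hi = level + width / 2.0
    clipped = np.clip(pixels, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# The three example settings from the text, as (width, level) pairs:
WINDOWS = [(4000, 2000), (1000, 2000), (1500, 1500)]
```

Applying all three windows to one Dicom pixel array yields the three png-format inputs described above.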
  • In a method for breast image sign recognition provided by an embodiment of the present invention, the specific steps of the process include:
  • Step 1: Obtain the coordinates of breast lesions in the breast image.
  • The breast image is a two-dimensional image.
  • The two-dimensional coordinates of the breast lesion may be the two-dimensional coordinates of a point within the breast lesion (such as the two-dimensional coordinates of the center point of the breast lesion) or the two-dimensional coordinates of a point on the surface of the breast lesion.
  • Breast lesions include, but are not limited to, breast masses.
  • Step 2: Determine the region of interest (ROI) containing the breast lesion from the breast image according to the coordinates of the breast lesion.
  • A preset distance is extended to the surroundings to determine the recognition frame containing the breast lesion, the preset distance being a preset multiple of the radius of the breast lesion, such as 1.25 times the radius. This recognition frame is then cropped and interpolated to a fixed size.
  • A spatial information channel can be added to each pixel in the recognition frame to output the region of interest (ROI).
  • The spatial information channel is the distance between the pixel and the two-dimensional coordinates of the breast lesion.
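The spatial information channel can be sketched as a per-pixel Euclidean distance map appended to the ROI; a single-channel (grayscale) ROI and the function name are assumptions for illustration.

```python
import numpy as np

def add_spatial_channel(roi, lesion_yx):
    """Append a channel holding each pixel's Euclidean distance to the
    lesion's two-dimensional coordinates (given in ROI pixel coords).
    `roi` is assumed to be a single-channel 2-D array."""
    h, w = roi.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((ys - lesion_yx[0]) ** 2 + (xs - lesion_yx[1]) ** 2)
    return np.dstack([roi, dist])
```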
  • In a possible implementation, if it is determined that the radius of the breast lesion is greater than a second preset distance, the first preset distance is enlarged by a preset multiple; the second preset distance is less than or equal to the first preset distance.
  • The first preset distance corresponds to a 768*768 image; according to the breast lesion coordinates, a 768*768 image is cut out as the ROI.
  • The second preset distance can correspond to 640*640; if the breast lesion size exceeds 640*640, the ROI is enlarged to 1.2 times the size and then zoomed to a 768*768 image.
  • Step 3: Segment the breast lesion area from the breast image according to the ROI and the breast lesion detection model.
  • The breast lesion detection model is determined by training a convolutional neural network on multiple breast images with annotated breast lesion areas.
  • The breast image may be directly input into the breast lesion detection model, and the breast lesion area output by the model.
  • Alternatively, the ROI in the breast image may be input into the breast lesion detection model, and the breast lesion area output by the model.
  • The size of the ROI can be set according to the actual situation. Since the region of interest containing the breast lesion is determined from the breast image based on the two-dimensional coordinates of the breast lesion, the detection area for the breast lesion is reduced.
  • Determining the breast lesion area either by inputting the breast image into the breast lesion detection model or by inputting the ROI into the model can effectively improve the detection accuracy and detection efficiency of the breast lesion area.
  • What follows is a process of a method for identifying breast image signs provided by an embodiment of the present invention.
  • The process may be performed by a device for identifying breast image signs. As shown in FIG. 4, the specific steps of the process include:
  • Step 401: Obtain a breast image and the coordinates of a breast lesion in the breast image;
  • Step 402: Determine the region of interest (ROI) containing the breast lesion from the breast image according to the coordinates of the breast lesion;
  • Step 403: Input the ROI into the first feature extraction module to determine a feature image of breast lesion signs;
  • the first feature extraction module includes K convolution modules; each of the K convolution modules includes, in sequence, a first convolution layer and a second convolution layer; the number of feature images output by the first convolution layer is less than the number of feature images input to the first convolution layer; the number of feature images output by the second convolution layer is greater than or equal to the number of feature images input to the first convolution layer; K is a positive integer;
  • Step 404: Input the feature image output by the first feature extraction module into the classification module, and determine the signs of the breast lesions.
  • The first feature extraction module used in the embodiment of the present invention is obtained by training on a large amount of data, so the results produced by the model are more reasonable and have a certain scientific basis. Compared with the traditional method of diagnosis by doctors, it can reduce the diagnostic error rate caused by differences in doctors' skill levels, thereby improving the accuracy of determining signs of breast lesions. Further, because the feature image of each ROI in the breast image is extracted, signs of breast lesions can be identified rapidly, improving the efficiency of sign identification.
  • The effective information in the image is retained during the convolution process; while reducing the number of parameters, this improves the effectiveness of feature image extraction, thereby improving the accuracy of detecting signs of breast lesions.
  • The parameters of the first feature extraction module may be obtained by training on breast images of multiple patients.
  • The first feature extraction module may be a shallow or a deep feature extraction module; that is, the feature extraction neural network may include K convolution modules, with K less than or equal to a first threshold.
  • A person skilled in the art may set the specific value of the first threshold according to experience and actual conditions, which is not limited here.
  • For example, the first feature extraction module may include three convolution modules.
  • Each convolution module may include a first convolution layer and a second convolution layer.
  • The first convolution layer includes a convolution layer, a batch normalization (BN) layer connected to the convolution layer, and an activation function layer connected to the BN layer.
  • The steps of passing a feature image through the convolution module may include:
  • Step 1: Input the feature image input to the convolution module into the first convolution layer to obtain a first feature image;
  • the convolution kernel of the first convolution layer may be N1*m*m*N2;
  • N1 is the number of channels of the feature image input to the convolution module,
  • N2 is the number of channels of the first feature image;
  • Step 2: Input the first feature image into the second convolution layer to obtain a second feature image;
  • the convolution kernel of the second convolution layer may be N2*m*m*N3;
  • N3 is the number of channels of the second feature image; N3 > N2;
  • Step 3: After merging the feature image input to the convolution module and the second feature image, determine the result as the feature image output by the convolution module.
  • In a specific embodiment, the number of feature images output by the second convolution layer may be equal to the number of feature images input to the first convolution layer; that is, N3 = N1.
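The channel bookkeeping of steps 1 to 3 can be sketched as follows. This is a simplified illustration, not the patented implementation: 1*1 kernels stand in for the m*m kernels, random weights are illustrative, and channel concatenation is one plausible reading of the unspecified "merging" step (element-wise addition would be another).

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution, i.e. a per-pixel channel mix.
    x is (C_in, H, W); w is (C_out, C_in)."""
    return np.tensordot(w, x, axes=([1], [0]))

def bottleneck_block(x, n1, n2, n3, rng):
    """One convolution module's channel flow: the first layer shrinks
    N1 -> N2 channels, the second expands N2 -> N3 (N3 > N2); the block
    output merges the module's input with the second feature image."""
    w1 = rng.standard_normal((n2, n1))
    w2 = rng.standard_normal((n3, n2))
    f1 = np.maximum(conv1x1(x, w1), 0)       # first layer + ReLU, fewer channels
    f2 = np.maximum(conv1x1(f1, w2), 0)      # second layer + ReLU, more channels
    return np.concatenate([x, f2], axis=0)   # merge input with second feature image
```

With N1 = N3 = 8 and N2 = 4, an 8-channel input yields a 16-channel merged output.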
  • The method of determining the feature image corresponding to the breast image described above is only one possible implementation; in other possible implementations, the feature image corresponding to the breast image may also be determined by other methods, which is not specifically limited.
  • The activation function in the embodiment of the present invention may be any of several types of activation functions; for example, it may be a rectified linear unit (ReLU), which is not specifically limited;
  • The first feature extraction module in the embodiment of the present invention may be a feature extraction module in a two-dimensional (2D) convolutional neural network.
  • The size of the convolution kernel of the first convolution layer can be m*m;
  • the size of the convolution kernel of the second convolution layer can be n*n;
  • m and n can be the same or different, which is not limited here; m and n are integers greater than or equal to 1.
  • The number of feature images output by the first convolution layer is less than the number of feature images input to the first convolution layer; the number of feature images output by the second convolution layer is greater than or equal to the number of feature images input to the first convolution layer.
  • A third convolution layer is further included between the first convolution layer and the second convolution layer;
  • the feature image input to the third convolution layer is the image output by the first convolution layer,
  • and the feature image output by the third convolution layer is the image input to the second convolution layer.
  • The size of the convolution kernel of the third convolution layer may be k*k; k may be the same as m or n, or different, which is not limited here.
  • The size of the convolution kernel of the first convolution layer is 3*3; the size of the convolution kernel of the second convolution layer is 3*3; the size of the convolution kernel of the third convolution layer is 1*1.
  • This arrangement of convolution kernels can effectively enlarge the receptive field of feature extraction, which helps improve the accuracy of identifying signs of breast lesions.
  • The first feature extraction module further includes L down-sampling modules; each of the L down-sampling modules includes the first convolution layer, the second convolution layer, a pooling layer and a fourth convolution layer; the steps of passing a feature image through the down-sampling module may include:
  • Step 1: Pass the feature image input to the down-sampling module sequentially through the first convolution layer, the second convolution layer and the pooling layer to obtain a first feature image;
  • the input feature image may first pass through the first convolution layer, which reduces the number of channels of the output feature image, and then through the second convolution layer, which restores the number of channels to that of the original feature map.
  • The feature image output by the second convolution layer is input to the pooling layer, and the pixel size of the feature image is reduced to half of the input through 2*2 average pooling, to obtain the first feature image.
  • Step 2: Pass the feature image input to the down-sampling module through the fourth convolution layer to obtain a second feature image;
  • with the convolution stride of the fourth convolution layer set to 2, the pixel size of the second feature image is half the pixel size of the input feature image; the size of its convolution kernel may be the same as that of the first convolution layer or different, which is not limited here.
  • Step 3: After merging the first feature image and the second feature image, determine the result as the feature image output by the down-sampling module.
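The two parallel halving branches of the down-sampling module can be sketched as below. The convolution layers of branch one are omitted, a strided subsampling stands in for the stride-2 fourth convolution layer, and concatenation is an assumed reading of the merge step, so this only illustrates the spatial bookkeeping, not the trained layers.

```python
import numpy as np

def avg_pool2x2(x):
    """2*2 average pooling over (C, H, W); halves the pixel size."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def stride2_pick(x):
    """Stand-in for the stride-2 fourth convolution layer: keeps every
    second pixel, so the output is half the input pixel size."""
    return x[:, ::2, ::2]

def downsample_module(x):
    """Both branches halve the spatial size; their outputs are merged
    (concatenated along channels here, one plausible reading)."""
    branch_a = avg_pool2x2(x)    # conv layers omitted; pooling shown
    branch_b = stride2_pick(x)
    return np.concatenate([branch_a, branch_b], axis=0)
```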
  • The first feature extraction module also includes a feature preprocessing module; the feature preprocessing module includes a convolution layer, a BN layer, a ReLU layer and a pooling layer; the size of the convolution kernel of the feature preprocessing module is larger than the size of the convolution kernel of any one of the K convolution modules.
  • The steps of passing a feature image through the feature preprocessing module may include: inputting the breast image into the feature preprocessing module to obtain a preprocessed feature image; and using the preprocessed feature image as the input of the first feature extraction module.
  • The size of the convolution kernel of the convolution layer may be 5*5, with a stride of 2 pixels.
  • The pooling layer is 2*2 max pooling.
  • An embodiment of the present invention provides a structure of a classification module.
  • The classification module includes an average pooling layer, a dropout layer, a fully connected layer, and a softmax layer.
  • The feature vector corresponding to the patient to be diagnosed can be computed sequentially through the average pooling layer, the dropout layer, and the fully connected layer, and then classified by the softmax layer to output the classification results, thereby obtaining the signs of the patient's breast lesions.
  • The feature map is reduced to a feature vector, which is then passed through a dropout layer, a fully connected layer and a softmax layer to obtain a classification confidence vector (covering calcification, mass/asymmetry, and structural distortion). Each element represents the confidence of that type, and all the confidences sum to 1. The element with the highest confidence is output, and the type it represents is the breast sign predicted by the algorithm.
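The classification head described above can be sketched as follows; the weights are illustrative, and dropout is omitted since it is a no-op at inference time.

```python
import numpy as np

def softmax(v):
    """Numerically stable softmax; outputs sum to 1."""
    e = np.exp(v - v.max())
    return e / e.sum()

def classify(feature_maps, w_fc, b_fc):
    """Global average pooling turns (C, H, W) feature maps into a C-dim
    feature vector; a fully connected layer plus softmax yields per-sign
    confidences, and the highest-confidence sign is the prediction."""
    vec = feature_maps.mean(axis=(1, 2))    # global average pooling
    logits = w_fc @ vec + b_fc              # fully connected layer
    conf = softmax(logits)
    return conf, int(conf.argmax())
```

With a three-way output the three confidence entries would correspond to calcification, mass/asymmetry, and structural distortion.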
  • The classification module provided by the embodiment of the present invention is only one possible structure. In other examples, those skilled in the art may modify the content of the classification module provided by the embodiment of the present invention;
  • for example, the classification module may include two fully connected layers, which is not limited here.
  • The first feature extraction module and the classification module can be trained together as a neural network classification model.
  • The feature vectors corresponding to multiple patients can be input into the initial neural network classification model;
  • the predicted gland classification corresponding to each breast image is obtained, and the neural network classification model is generated by back-propagation training according to the annotated breast lesion sign results of the breast images.
  • Step 1: Obtain breast images as training samples.
  • The acquired breast images can be used directly as training samples, or they can be augmented to expand the data volume of the training samples. Augmentation operations include, but are not limited to: random translation by a set number of pixels up, down, left, and right (such as 0-20 pixels), random rotation by a set angle (such as -15 to 15 degrees), and random zoom by a set multiple (such as 0.85-1.15 times).
  • Step 2: Manually mark the signs in the breast lesion areas in the training samples.
  • Training samples can be annotated by professionals such as doctors. Specifically, multiple doctors can mark the signs of the breast lesion area, the final breast lesion area is determined through a majority-vote synthesis method, and the result is saved in the form of a mask. It should be noted that marking the signs of the breast lesion area in the training samples and the augmentation of the training samples can occur in either order: the breast lesion areas can be marked manually first and the augmentation performed afterwards, or the training samples can be augmented first and then marked manually.
  • Step 3: Input the training samples into the convolutional neural network for training to determine the breast lesion sign recognition model.
  • The breast images with marked breast lesion area signs can be directly input as training samples into the convolutional neural network for training to determine the breast lesion sign recognition model.
  • Alternatively, the breast images with marked breast lesion areas can be processed and then input as training samples into the convolutional neural network for training to determine the breast lesion sign recognition model.
  • The specific process is: for any breast image with a marked breast lesion area, manually mark the two-dimensional coordinates of the breast lesion in the breast image, then take the two-dimensional coordinates of the breast lesion as the center and extend a preset distance to the surroundings to determine the recognition frame containing the breast lesion, the preset distance being a preset multiple of the radius of the breast lesion.
  • A spatial information channel is added to each pixel in the recognition frame to determine the region of interest (ROI); the spatial information channel is the distance between the pixel and the two-dimensional coordinates of the breast lesion.
  • The ROIs with marked breast lesion areas are used as training samples and input into the convolutional neural network for training to determine the breast lesion sign recognition model.
  • The accuracy of sign recognition in the breast lesion can be further improved based on the breast lesion area.
  • The process of determining the breast lesion signs in a breast image using the breast lesion sign recognition model determined by the above training includes the following steps:
  • Step 1: The ROI is passed sequentially through K first feature extraction blocks to extract the feature image of the ROI; K is greater than 0.
  • Step 2: Pass the feature image of the ROI through the global average pooling layer to reduce the feature map to a feature vector, then pass the feature vector through a dropout layer, a fully connected layer and a sigmoid layer to obtain a classification confidence vector.
  • Step 3: Determine the signs of breast lesions according to the classification confidence vector of the ROI.
  • Each element expresses the confidence of one type.
  • A cut-off threshold is set for each type, and every category with confidence greater than its threshold is taken as a sign of this breast lesion. That is, each element above the threshold is output, and the type it represents is a sign of the breast lesion predicted by the model.
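The per-type thresholding of the sigmoid outputs can be sketched in a few lines; note that, unlike the softmax head, several signs may be reported for one lesion. The sign names come from the text; the thresholds are illustrative.

```python
def signs_above_threshold(confidences, thresholds, names):
    """Keep every sign whose confidence clears its per-type cut-off."""
    return [name for conf, cut, name in zip(confidences, thresholds, names)
            if conf > cut]
```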
  • A possible implementation includes:
  • Step 1: Input the confidences of the breast lesion signs of the ROI and the gland classification results of the breast into multiple classifiers, the multiple classifiers being used to determine the confidence of a binary (2-class) classification for each level in the grading of the breast image;
  • Step 2: Determine the grading of the breast image according to the classification results of the multiple classifiers.
  • Breast grading can usually include levels 0-6.
  • Five classifiers can be set up.
  • Each classifier is a two-class classifier. For example, the first classifier outputs the confidence of being less than or equal to level 0 and the confidence of being greater than level 0; the second classifier outputs the confidence of being less than or equal to level 1 and the confidence of being greater than level 1; the third classifier outputs the confidence of being less than or equal to level 2 and the confidence of being greater than level 2; the fourth classifier outputs the confidence of being less than or equal to level 3 and the confidence of being greater than level 3;
  • the fifth classifier outputs the confidence of being less than or equal to level 4 and the confidence of being greater than level 4.
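One common way to combine such per-level binary outputs into a grade is the cumulative reading sketched below: count how many classifiers answer "greater than level k". The text does not spell out the combination rule, so this is an assumed interpretation, and the 0.5 cut-off is illustrative.

```python
def grade_from_binary_classifiers(gt_confidences, cutoff=0.5):
    """`gt_confidences[k]` is classifier k's confidence that the grade is
    greater than level k (k = 0..4).  The predicted grade is the number
    of positive answers, a cumulative/ordinal reading of the outputs."""
    return sum(1 for conf in gt_confidences if conf > cutoff)
```

For instance, confidences of [0.9, 0.8, 0.7, 0.2, 0.1] would be read as grade 3.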
  • An embodiment of the present invention provides a device for breast image recognition. As shown in FIG. 5, the device can perform the flow of the method for breast image recognition.
  • The device includes an obtaining unit 501 and a processing unit 502.
  • The obtaining unit 501 is used to obtain breast images;
  • the processing unit 502 is configured to determine, according to the breast image, the region of interest (ROI) of the breast lesion in the breast image and the gland classification of the breast; determine the breast lesion signs of the ROI according to the ROI; and determine the grading of the breast image according to the breast lesion signs of the ROI and the gland classification of the breast.
  • The processing unit 502 is specifically configured to:
  • determine the feature image of the ROI according to the feature extraction module after training on breast lesions with marked breast lesion areas;
  • the feature extraction module includes N convolution modules; each of the N convolution modules includes, in sequence, a first convolution layer and a second convolution layer; the number of feature images output by the first convolution layer is less than the number of feature images input to the first convolution layer; the number of feature images output by the second convolution layer is greater than the number of feature images output by the first convolution layer; N is greater than 0; input the feature image of the ROI into the classification module to determine the confidence of the breast lesion signs of the ROI.
  • The processing unit 502 is specifically configured to:
  • determine the grading of the breast image.
  • The processing unit 502 is specifically configured to:
  • determine the coordinates of the breast lesion in the breast image according to the breast image;
  • with the preset distance being a preset multiple of the radius of the breast lesion;
  • if the radius of the breast lesion is greater than a second preset distance, enlarge the first preset distance by a preset multiple; the second preset distance is less than or equal to the first preset distance.
  • The first feature extraction module further includes a down-sampling module; the down-sampling module includes the first convolution layer, the second convolution layer, the pooling layer, and the third convolution layer; the processing unit 502 is specifically used to:
  • determine the first feature image and the second feature image as the feature image output by the down-sampling module.
  • The first feature extraction module further includes a first convolution module, where the first convolution module is located before the K convolution modules; the processing unit 502 is further configured to:
  • input the breast image into the first convolution module, which includes a convolution layer, a BN layer, a ReLU layer and a pooling layer;
  • the size of its convolution kernel is larger than the size of the convolution kernels in the N convolution modules;
  • alternatively, the first convolution module includes multiple consecutive convolution layers, a BN layer, a ReLU layer, and a pooling layer; the size of the convolution kernel of the first convolution module is equal to the size of the largest convolution kernel in the N convolution modules.
  • An embodiment of the present invention provides a computing device, including at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of the method for detecting the breast.
  • FIG. 6 is a schematic diagram of the hardware structure of a computing device according to an embodiment of the present invention.
  • The computing device may specifically be a desktop computer, a portable computer, a smart phone, or a tablet computer.
  • The computing device may include a memory 801, a processor 802, and a computer program stored on the memory.
  • The memory 801 may include a read-only memory (ROM) and a random access memory (RAM), and provides the processor 802 with program instructions and data stored in the memory 801.
  • The computing device described in the embodiments of the present application may further include an input device 803 and an output device 804.
  • The input device 803 may include a keyboard, a mouse, a touch screen, etc.;
  • the output device 804 may include a display device, such as a liquid crystal display (LCD), a cathode ray tube (CRT), a touch screen, and the like.
  • The memory 801, the processor 802, the input device 803, and the output device 804 may be connected through a bus or in other ways. In FIG. 6, connection through a bus is used as an example.
  • The processor 802 calls the program instructions stored in the memory 801 and executes the method for detecting breasts provided in the foregoing embodiments according to the obtained program instructions.
  • An embodiment of the present invention also provides a computer-readable storage medium storing a computer program executable by a computing device; when the program runs on the computing device, it causes the computing device to perform the steps of the method for detecting breasts.
  • The embodiments of the present invention may be provided as methods or computer program products. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, the instruction device implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operating steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.


Abstract

A method and apparatus for breast image recognition, relating to the technical field of machine learning. The method includes: obtaining a breast image (201); determining, according to the breast image, the region of interest (ROI) of a breast lesion in the breast image and the gland classification of the breast (202); determining the breast lesion signs of the ROI according to the ROI (203); and determining the grading of the breast image according to the breast lesion signs of the ROI and the gland classification of the breast (204).

Description

Method and apparatus for breast image recognition
This application claims priority to the Chinese patent application filed with the China Patent Office on October 16, 2018, with application number 201811202692.2 and titled "Method and apparatus for breast image recognition", the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present invention relate to the technical field of machine learning, and in particular to a method and apparatus for breast image recognition.
Background
At present, breast imaging can use low-dose X-rays to examine the human breast. It can detect breast lesions such as breast tumors and cysts, helping to discover breast cancer early and reduce its mortality. Breast imaging is an effective detection method that can be used to diagnose a variety of breast-related diseases in women. Its most important use, of course, remains the screening of breast cancer, especially early-stage breast cancer. The ability to effectively detect the various early manifestations of breast cancer in breast images would therefore be of great help to doctors.
After a patient's breast image is taken, the doctor diagnoses the breast image based on personal experience. This method is inefficient and highly subjective.
Summary
Embodiments of the present invention provide a method and apparatus for breast image recognition, to solve the problems in the prior art that judging breast images based on doctors' experience is inefficient, highly subjective, and makes accurate results difficult to obtain.
An embodiment of the present invention provides a method for breast image recognition, including:
obtaining a breast image;
determining, according to the breast image, the region of interest (ROI) of a breast lesion in the breast image and the gland classification of the breast;
determining the breast lesion signs of the ROI according to the ROI;
determining the grading of the breast image according to the breast lesion signs of the ROI and the gland classification of the breast.
In a possible implementation, determining the breast lesion signs of the ROI according to the ROI includes:
determining the feature image of the ROI according to a first feature extraction module; the first feature extraction module includes K convolution modules; each of the K convolution modules includes, in sequence, a first convolution layer and a second convolution layer; the number of feature images output by the first convolution layer is less than the number of feature images input to the first convolution layer; the number of feature images output by the second convolution layer is greater than the number of feature images output by the first convolution layer; K is greater than 0;
inputting the feature image of the ROI into a classification module to determine the confidence of the breast lesion signs of the ROI.
In a possible implementation, determining the grading of the breast image according to the breast lesion signs of the ROI and the gland classification of the breast includes:
inputting the confidence of the breast lesion signs of the ROI and the gland classification result of the breast into multiple classifiers, the multiple classifiers being used to determine the confidence of a binary classification for each level of the grading of the breast image;
determining the grading of the breast image according to the classification results of the multiple classifiers.
In a possible implementation, determining the region of interest (ROI) of the breast lesion in the breast image according to the breast image includes:
determining the coordinates of the breast lesion in the breast image according to the breast image;
taking the coordinates of the breast lesion as the center and extending a first preset distance to the surroundings to determine a recognition frame containing the breast lesion, the preset distance being a preset multiple of the radius of the breast lesion;
if it is determined that the radius of the breast lesion is greater than a second preset distance, enlarging the first preset distance by a preset multiple; the second preset distance is less than or equal to the first preset distance.
In a possible implementation, the first feature extraction module further includes a down-sampling module; the down-sampling module includes the first convolution layer, the second convolution layer, a pooling layer and a third convolution layer; determining the feature image of the ROI according to the first feature extraction module includes:
passing the feature image output by the first feature extraction module sequentially through the first convolution layer, the second convolution layer and the pooling layer to obtain a first feature image;
passing the feature image output by the first feature extraction module through the third convolution layer to obtain a second feature image;
determining the first feature image and the second feature image as the feature image output by the down-sampling module.
In a possible implementation, the first feature extraction module further includes a first convolution module, which is located before the K convolution modules; inputting the breast image into the first feature extraction module includes:
inputting the breast image into the first convolution module, the first convolution module including one convolution layer, one BN layer, one ReLU layer and one pooling layer; the size of the convolution kernel of the first convolution module is larger than the size of the convolution kernels in the N convolution modules;
alternatively, the first convolution module includes multiple consecutive convolution layers, one BN layer, one ReLU layer and one pooling layer; the size of the convolution kernel of the first convolution module is equal to the size of the largest convolution kernel in the N convolution modules.
An embodiment of the present invention provides an apparatus for breast image recognition, including:
an obtaining unit, configured to obtain a breast image;
a processing unit, configured to determine, according to the breast image, the region of interest (ROI) of the breast lesion in the breast image and the gland classification of the breast; determine the breast lesion signs of the ROI according to the ROI; and determine the grading of the breast image according to the breast lesion signs of the ROI and the gland classification of the breast.
In a possible implementation, the processing unit is specifically configured to:
determine the feature image of the ROI according to a first feature extraction module after training on breast lesions with annotated breast lesion areas; the feature extraction module includes N convolution modules; each of the N convolution modules includes, in sequence, a first convolution layer and a second convolution layer; the number of feature images output by the first convolution layer is less than the number of feature images input to the first convolution layer; the number of feature images output by the second convolution layer is greater than the number of feature images output by the first convolution layer; N is greater than 0; input the feature image of the ROI into the classification module to determine the confidence of the breast lesion signs of the ROI.
In another aspect, an embodiment of the present invention provides a computing device, including at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of any of the methods described above.
In yet another aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program executable by a computing device, which, when run on the computing device, causes the computing device to perform the steps of any of the methods described above.
In the embodiments of the present invention, since feature images of the breast image are extracted and the breast in each feature image is identified, the gland classification of the breast, breast lesions, breast lesion signs and so on can be quickly identified, improving the accuracy of breast grading. In addition, by configuring the convolutional neural network model so that the number of channels output by the first convolution layer is reduced and the number of channels output by the second convolution layer is increased back to the number of channels input to the first convolution layer, the effective information in the image is retained during convolution; while reducing the number of parameters, this improves the effectiveness of feature image extraction, and in turn the accuracy of breast grading in breast images.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1a is a schematic diagram of a breast image provided by an embodiment of the present invention;
FIG. 1b is a schematic diagram of a breast image provided by an embodiment of the present invention;
FIG. 1c is a schematic diagram of a breast image provided by an embodiment of the present invention;
FIG. 1d is a schematic diagram of a breast image provided by an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a method for breast lesion recognition in breast images provided by an embodiment of the present invention;
FIG. 3 is a schematic flowchart of breast image sign recognition provided by an embodiment of the present invention;
FIG. 4 is a schematic flowchart of breast image recognition provided by an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an apparatus for breast image recognition provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a computing device provided by an embodiment of the present invention.
Detailed description
In order to make the objectives, technical solutions and beneficial effects of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
In the embodiments of the present invention, a breast X-ray image is taken as an example for illustrative description; other image types are not described again here. Mammography uses low-dose (about 0.7 millisievert) X-rays to examine the human (mainly female) breast. It can detect breast lesions such as breast tumors and cysts, helping to discover breast cancer early and reduce its mortality. Some countries encourage older women (generally over 45) to undergo mammography regularly (at intervals ranging from one to five years) to screen for early breast cancer. A breast examination generally comprises four X-ray images, one for each of two views (craniocaudal CC, mediolateral oblique MLO) of each of the two breasts, as shown in FIGS. 1a, 1b, 1c and 1d.
Generally speaking, the purpose of breast screening is to prevent breast cancer, so when a breast lesion is found, doctors usually wish to assess its malignant potential. The diagnosis of breast lesions on images is usually based on the detection of breast lesion signs. Breast lesion signs are generally divided into calcification, mass/asymmetry, and structural distortion. For the same breast lesion, these signs may exist simultaneously.
Existing methods generally fall into two categories. One category uses graphics methods to try to extract relevant breast lesion signs such as calcifications and masses from images through basic features. These methods are simple, but it is difficult for them to obtain the semantic information of breast lesions, so the extraction accuracy is poor, they are easily disturbed by various benign similar-looking signs, and their robustness is also poor. The other category uses unsupervised methods to try to have a machine extract some features of breast lesions from images, but these features lack actual semantic information, it is difficult for doctors to make differential diagnoses based on them, and such signs are of little medical value.
In addition, the prior art usually only detects a single type of breast lesion such as calcification or mass, cannot detect multiple types of breast lesions at the same time, and has a narrow scope of application. Meanwhile, for breast lesions such as calcifications, methods based on primary image features are used; such methods are relatively simple, and their detection accuracy is also relatively poor.
Based on the above problems, an embodiment of the present invention provides a method for breast image recognition, as shown in FIG. 2, including:
Step 201: Obtain a breast image;
Step 202: Determine, according to the breast image, the region of interest (ROI) of the breast lesion in the breast image and the gland classification of the breast;
Step 203: Determine the breast lesion signs of the ROI according to the ROI;
Step 204: Determine the grading of the breast image according to the breast lesion signs of the ROI and the gland classification of the breast.
In step 202, before the region of interest (ROI) of the breast lesion in the breast image is determined according to the breast image, an embodiment of the present invention provides a method for breast image recognition, as shown in FIG. 3, including the following steps:
Step 301: Obtain a breast image;
Step 302: Input the breast image into a second feature extraction module to obtain feature images of different sizes of the breast image;
wherein the second feature extraction module includes N convolution modules; the N convolution modules are down-sampling convolution blocks and/or up-sampling convolution blocks; the sizes of the feature images extracted by the down-sampling or up-sampling convolution blocks are all different; each of the N convolution modules includes a first convolution layer and a second convolution layer; the number of feature images output by the first convolution layer is less than the number of feature images input to the first convolution layer; the number of feature images output by the second convolution layer is greater than the number of feature images output by the first convolution layer; N is greater than 0;
For example, the second feature extraction module may include three down-sampling convolution blocks. Each convolution module may include a first convolution layer and a second convolution layer; the first convolution layer includes a convolution layer, a batch normalization (BN) layer connected to the convolution layer, and an activation function layer connected to the BN layer.
为增加第二特征提取模块的深度,一种可能的实现方式,特征图像经过卷积模块的步骤可以包括:
步骤一:将所述卷积模块输入的特征图像输入至所述第一卷积层获得第一特征图像;第一卷积层的卷积核可以为N1*m*m*N2;N1为所述卷积模块输入的特征图像的通道数,N2为第一特征图像的通道数;N1>N2。
步骤二：将第一特征图像输入至所述第二卷积层获得第二特征图像；第二卷积层的卷积核可以为N2*n*n*N3；N3为第二特征图像的通道数；N3>N2；
步骤三:将所述卷积模块输入的特征图像和所述第二特征图像合并后,确定为所述卷积模块输出的特征图像。
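上述步骤一至步骤三描述的卷积模块（先降通道、再升通道、最后与输入合并），可以用如下假设性的Python/NumPy代码示意其通道数变化。代码中以1*1卷积近似各卷积层、以通道拼接作为"合并"方式，权重与尺寸均为随意假设，仅用于说明，并非实施例的实际实现：

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W) 的特征图; w: (C_out, C_in) 的1*1卷积权重
    return np.tensordot(w, x, axes=([1], [0]))  # -> (C_out, H, W)

def bottleneck_block(x, w1, w2):
    # 步骤一: 第一卷积层将通道数由N1降为N2 (N1 > N2), 并经ReLU激活
    f1 = np.maximum(conv1x1(x, w1), 0)
    # 步骤二: 第二卷积层将通道数由N2升为N3 (N3 > N2), 并经ReLU激活
    f2 = np.maximum(conv1x1(f1, w2), 0)
    # 步骤三: 将模块输入与第二特征图像在通道维合并, 作为模块输出
    return np.concatenate([x, f2], axis=0)

N1, N2, N3, H, W = 8, 4, 8, 16, 16          # N1 > N2, N3 > N2
x = np.random.rand(N1, H, W)
w1 = np.random.rand(N2, N1) * 0.1
w2 = np.random.rand(N3, N2) * 0.1
out = bottleneck_block(x, w1, w2)
print(out.shape)  # (16, 16, 16), 即 N1+N3 个通道
```

这种"先降后升"的瓶颈结构在减少参数量的同时保留了输入信息，与上文所述效果一致。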
在一种具体的实施例中，第二卷积层输出的特征图像的个数可以与第一卷积层输入的特征图像的个数相等。即，N3=N1。
上文所描述的乳腺影像对应的特征图像的确定方式仅为一种可能的实现方式,在其它可能的实现方式中,也可以通过其它方式确定乳腺影像对应的特征图像,具体不做限定。
需要说明的是:本发明实施例中的激活函数可以为多种类型的激活函数,比如,可以为线性整流函数(Rectified Linear Unit,ReLU),具体不做限定;
由于本发明实施例中输入的图像为二维图像，因此，本发明实施例中的第二特征提取模块可以为二维(2 Dimensions，2D)卷积神经网络中的特征提取模块，相应地，第一卷积层的卷积核大小可以为m*m、第二卷积层的卷积核大小可以为n*n；m和n可以相同也可以不同，在此不做限定；其中，m，n为大于或等于1的整数。第一卷积层输出的特征图像的个数小于所述第一卷积层输入的特征图像的个数；所述第二卷积层输出的特征图像的个数大于所述第一卷积层输出的特征图像的个数。
进一步的，为优化第二特征提取模块，一种可能的实现方式，所述第一卷积层和所述第二卷积层之间还包括第三卷积层；所述第三卷积层输入的特征图像为所述第一卷积层输出的图像，所述第三卷积层输出的特征图像为所述第二卷积层输入的图像。
其中,第三卷积层的卷积核大小可以为k*k,k与m,n可以相同,也可以不同,在此不做限定。
一个具体的实施例中,所述第一卷积层的卷积核的大小为3*3;所述第二卷积层的卷积核的大小为3*3;所述第三卷积层的卷积核的大小为1*1。
通过上述卷积核的设置方式,可以有效的提高特征提取的感知野,有利于提高乳腺影像识别的准确度。
不同尺寸的特征图像可以为不同像素的特征图像,比如像素为500×500的特征图像与像素为1000×1000的特征图像为不同尺寸的特征图像。
可选地,采用预先训练好的乳腺病灶检测模型提取乳腺影像的不同尺寸的特征图像,模型是采用2D卷积神经网络对已标记的多个乳腺影像进行训练后确定的。
可选地,在提取乳腺影像的不同尺寸的特征图像之前,将图像缩放到特定尺寸,使各方向上像素与实际长度的比例尺一定。
另一种可能的实现方式,所述第二特征提取模块包括N/2个下采样卷积块和N/2个上采样卷积块;所述获取所述乳腺影像的不同尺寸的特征图像,包括:
将所述乳腺影像依次通过N/2个下采样卷积块提取N/2个所述乳腺影像的第一特征图像;
将第N/2个下采样卷积块输出的第一特征图像依次通过N/2个上采样卷积块提取N/2个所述乳腺影像的第二特征图像,每个上采样卷积块提取的第二特征图像的尺寸均不同;
将尺寸相同的第一特征图像和第二特征图像合并后,确定N个所述乳腺影像的不同尺寸的特征图像。
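下采样与上采样卷积块的尺寸变化，可以用如下假设性的Python示意（仅以特征图边长表示尺寸，输入边长与块数均为假设值，并非实施例的实际实现）：

```python
def feature_sizes(input_size, n_half):
    # 假设: 每个下采样卷积块将边长减半, 每个上采样卷积块将边长加倍
    down, s = [], input_size
    for _ in range(n_half):
        s //= 2
        down.append(s)
    up = []
    for _ in range(n_half):
        s *= 2
        up.append(s)
    # 尺寸相同的第一特征图像与第二特征图像可以两两合并
    merged = [d for d in down if d in up]
    return down, up, merged

down, up, merged = feature_sizes(256, 3)   # N/2 = 3
print(down)    # [128, 64, 32]
print(up)      # [64, 128, 256]
print(merged)  # [128, 64]
```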
为提高特征提取的感知野，提高特征提取的性能，一种可能的实现方式，所述第二特征提取模块之前还包括特征预处理模块；所述特征预处理模块包括一个卷积层，一个BN层，一个Relu层和一个池化层；所述特征预处理模块的卷积核大小大于所述N个卷积模块中任一卷积模块的卷积核的大小。
优选的,所述卷积层的卷积核大小可以为5*5,间隔为2个像素。池化层为2*2的最大值池化。通过特征预处理模块,可以将图像面积迅速缩小,边长变为原有1/4,有效的提高特征图像的感知野,快速的提取浅层特征,有效的减少原始信息的损失。
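特征预处理模块"边长变为原有1/4"的效果可以简单验证：间隔为2像素（即步长为2）的卷积将边长减半，2*2的最大值池化再减半。以下为一个假设性的Python示意：

```python
def preprocess_output_size(side, conv_stride=2, pool=2):
    # 步长为2的卷积使边长减半, 2*2最大值池化再减半
    return side // conv_stride // pool

print(preprocess_output_size(4096))  # 1024, 即边长变为原有的1/4
```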
一种可能的实现方式,所述特征预处理模块包括连续的多个卷积层,一个BN层,一个Relu层和一个池化层;所述特征预处理模块的卷积核大小与所述N个卷积模块中的最大的卷积核的大小相等。
特征图像经过特征预处理模块的步骤可以包括:将所述乳腺影像输入至特征预处理模块,获得预处理的特征图像;将所述预处理的特征图像作为所述第二特征提取模块的输入。
步骤303:针对所述乳腺影像的不同尺寸的特征图像中的任意一个特征图像,从所述特征图像中确定出乳腺病灶识别框。
可选地,采用预先训练好的乳腺病灶检测模型从特征图像中确定出乳腺病灶识别框,乳腺病灶检测模型是采用2D卷积神经网络对已标记乳腺病灶的多个乳腺影像进行训练后确定的。从特征图像中确定出的乳腺病灶识别框框选的区域并不一定都包含乳腺病灶,故需要根据乳腺病灶识别框的乳腺病灶概率对各乳腺病灶识别框进行筛选,将乳腺病灶概率小于预设阈值的乳腺病灶识别框删除,其中,乳腺病灶概率为乳腺病灶识别框框选的区域为乳腺病灶的概率。
步骤304:根据从各特征图像中确定出的乳腺病灶识别框,确定乳腺影像的乳腺病灶。
具体的，确定出乳腺病灶识别框之后，将识别框作为乳腺影像中的乳腺病灶输出，输出的乳腺病灶参数包括乳腺病灶的中心坐标以及乳腺病灶的直径，其中乳腺病灶的中心坐标为乳腺病灶识别框的中心坐标，乳腺病灶的直径为乳腺病灶识别框的中心至其中一条边的距离。
由于提取乳腺影像的不同尺寸的特征图像,并识别每一个特征图像中的乳腺病灶,故既能检测到大尺寸的乳腺病灶,同时也能检测到小尺寸的乳腺病灶,提高了乳腺病灶检测的精度。其次,相较于人工判断乳腺影像中是否存在乳腺病灶的方法,本申请中自动检测乳腺病灶的方法有效地提高了乳腺病灶检测效率。
由于从各个特征图像中确定出的乳腺病灶识别框可能存在多个识别框对应一个乳腺病灶,若直接根据乳腺病灶识别框的数量确定乳腺影像中乳腺病灶的数量,将导致检测得到的乳腺病灶数量存在很大偏差,故需要将各特征图像转化为同一尺寸的特征图像并对齐,然后将从各特征图像中确定出的乳腺病灶识别框进行筛选,并将筛选后的乳腺病灶识别框确定为乳腺影像中的乳腺病灶。
为进一步提高乳腺病灶的识别准确率,一种可能的实现方式,所述乳腺影像包括不同侧乳房的不同投照位的乳腺影像;所述将所述乳腺影像输入至第二特征提取模块,包括:
将所述乳腺影像的同一投照位的另一侧乳房的乳腺影像作为所述乳腺影像的参考影像,输入至所述第二特征提取模块,获得参考特征图像;
所述针对所述乳腺影像的不同尺寸的特征图像中的任意一个特征图像,从所述特征图像中确定出乳腺病灶识别框;包括:
确定所述特征图像中的第一乳腺病灶识别框和所述参考特征图像中的第二乳腺病灶识别框;
若确定所述第一乳腺病灶识别框和所述第二乳腺病灶识别框的位置和/或大小都相同,则删除所述第一乳腺病灶识别框。
下面具体介绍一下通过卷积神经网络对已标记乳腺病灶的多个乳腺影像进行训练确定乳腺病灶检测模型过程,包括以下步骤:
步骤一,获取乳腺影像作为训练样本。
具体地，可以将获取的多幅乳腺影像直接作为训练样本，也可以对获取的多幅乳腺影像进行增强操作，扩大训练样本的数据量，增强操作包括但不限于：随机上下左右平移设定像素(比如0~20像素)、随机旋转设定角度(比如-15~15度)、随机缩放设定倍数(比如0.85~1.15倍)。
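上述增强操作的参数采样可以用如下假设性的Python示意（范围取自文中示例，正负号表示上下/左右与旋转方向，均为示意性假设）：

```python
import random

def sample_augmentation():
    return {
        "shift_x": random.randint(-20, 20),   # 随机左右平移 0~20 像素
        "shift_y": random.randint(-20, 20),   # 随机上下平移 0~20 像素
        "angle": random.uniform(-15, 15),     # 随机旋转 -15~15 度
        "scale": random.uniform(0.85, 1.15),  # 随机缩放 0.85~1.15 倍
    }

random.seed(0)
params = sample_augmentation()
print(sorted(params))  # ['angle', 'scale', 'shift_x', 'shift_y']
```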
步骤二,人工标记训练样本中的乳腺病灶。
可以通过医生等专业人员对训练样本中的乳腺病灶进行标记,标记的内容包括乳腺病灶的中心坐标以及乳腺病灶的直径。具体地,可以由多名医生对乳腺病灶进行标注,并通过多人投票合成的方式确定最终的乳腺病灶以及乳腺病灶参数,结果用掩码图的方式保存。需要说明的是,人工标记训练样本中乳腺病灶与训练样本的增强操作不分先后,可以先人工标记训练样本中的乳腺病灶,然后再将标记乳腺病灶的训练样本进行增强操作,也可以先将训练样本进行增强操作,然后人工对增强操作后的训练样本进行标记。
步骤三,将训练样本输入至第二特征提取模块对应的卷积神经网络中进行训练,确定乳腺病灶检测模型。
该卷积神经网络的结构包括输入层、下采样卷积块、上采样卷积块、目标检测网络以及输出层。将训练样本进行预处理后输入上述卷积神经网络,将输出的乳腺病灶与预先标记的训练样本的掩码图进行损失函数计算,然后采用反向传播算法以及sgd优化算法反复迭代,确定乳腺病灶检测模型。
进一步地,采用上述训练确定的乳腺病灶检测模型提取乳腺影像的不同尺寸的特征图像的过程,包括以下步骤:
步骤一，将乳腺影像依次通过N/2个下采样卷积块提取N/2个乳腺影像的第一特征图像。
每个下采样卷积块提取的第一特征图像的尺寸均不同,N/2大于0。
可选地,下采样卷积块包括第一卷积层和第二卷积层、组连接层、前后连接层、下采样层。
步骤二,将第N/2个下采样卷积块输出的第一特征图像依次通过N/2个上采样卷积块提取N/2个乳腺影像的第二特征图像。
每个上采样卷积块提取的第二特征图像的尺寸均不同。
可选地，上采样卷积块包括卷积层、组连接层、前后连接层、上采样层以及合成连接层。卷积层包括卷积运算，batch normalization层和RELU层。
步骤三,将尺寸相同的第一特征图像和第二特征图像合并后,确定N/2个乳腺影像的不同尺寸的特征图像。
通过上采样卷积块中的合成连接层将尺寸相同的第一特征图像和第二特征图像合并确定不同尺寸的特征图像。可选地,在合并时,是将第一特征图像和第二特征图像的通道数进行合并,合并后得到的特征图像的尺寸与第一特征图像和第二特征图像的尺寸相同。
进一步地,采用上述训练确定的乳腺病灶检测模型从特征图像中确定出乳腺病灶识别框的过程,包括以下步骤:
步骤一,针对特征图像中任意一个像素,以像素为中心,向四周扩散确定第一区域。
步骤二,在第一区域中根据预设规则设置多个预设框。
由于乳腺病灶的形状不一,故可以将预设框设置为多种形状。预设规则可以是将预设框中心与第一区域的中心重合,也可以是预设框的角与第一区域的角重合等等。
在一个具体的实施例中,乳腺病灶预设框选取的方式为,对于每个特征图的每个像素,认为其为一个锚点。在每个锚点上设置多个长宽比不一的预设框。
对于每个预设框,通过对特征图进行卷积,预测一个坐标和尺寸的偏移,以及置信度,根据坐标和尺寸的偏移,以及置信度,确定预设框。
步骤三,针对任意一个预设框,预测预设框与第一区域的位置偏差。
步骤四,根据位置偏差调整预设框后确定乳腺病灶识别框,并预测乳腺病灶识别框的乳腺病灶概率。
其中,乳腺病灶概率为乳腺病灶识别框框选的区域为乳腺病灶的概率。通过预测预设框与第一区域的位置偏差,然后采用位置偏差调整预设框确定识别框,以使识别框更多地框选特征图中的乳腺病灶区域,提高乳腺病灶检测的准确性。
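预设框的生成与按偏移调整的过程，可以用如下假设性的Python示意。其中apply_offset采用目标检测中常见的中心/尺寸参数化方式，属于本文之外的假设，并非实施例的实际实现：

```python
import math

def make_anchors(cx, cy, base, ratios):
    # 在锚点(cx, cy)上生成多个长宽比不一、面积相同的预设框
    boxes = []
    for r in ratios:                      # r = 宽/高
        w = base * math.sqrt(r)
        h = base / math.sqrt(r)
        boxes.append((cx, cy, w, h))
    return boxes

def apply_offset(box, dx, dy, dw, dh):
    # 用预测的坐标与尺寸偏移调整预设框(假设性的参数化方式)
    cx, cy, w, h = box
    return (cx + dx * w, cy + dy * h, w * math.exp(dw), h * math.exp(dh))

anchors = make_anchors(32.0, 32.0, 16.0, [0.5, 1.0, 2.0])
adjusted = apply_offset(anchors[1], 0.1, -0.1, 0.0, 0.0)
print(adjusted)  # 中心向右下方偏移, 尺寸不变
```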
具体的训练过程可以包括：将训练数据影像输入上述的卷积神经网络进行计算。传入时，将乳腺病灶不同窗宽窗位的多张影像传入。训练时，在网络输出的预测框中，选取置信度最高的预测框集合和与训练样本重合最大的预测框集合。将预测框置信度和样本标注的交叉熵，与训练样本的标注乳腺病灶和预测框的偏移的交叉熵，两者的加权和作为loss函数。通过反向传播的方法训练，训练的优化算法使用带有动量和阶梯衰减的SGD算法。
在算法使用过程中,通过预处理模块,将输入图像预处理,以提高特征提取的效果。
一种可能的实现方式,所述获取乳腺影像,包括:
步骤一、将拍摄的乳腺影像图像,根据高斯滤波,确定所述乳腺影像图像的二值化图像;
步骤二、获取所述二值化图像的连通区域,将连通区域中最大的区域对应于所述乳腺影像图像的区域作为分割出的乳腺图像;
步骤三、将所述分割出的乳腺图像添加至预设的图像模板中,生成预处理后的乳腺图像;并将所述预处理后的乳腺图像作为输入至所述第二特征提取模块的乳腺影像。
具体的,预处理模块的输入为以Dicom格式形式保存的乳腺影像。预处理可以包括腺体分割和图像归一化;腺体分割的主要目的是将输入的乳腺影像中的乳腺部分提取出,剔除其他无关的干扰的图像;图像归一化是将图像化归为统一格式图像,具体的,包括:
在步骤一中,具体的二值化的阈值可以通过求图像灰度直方图的最大类间距方法获得。
在步骤二中,可以将二值化的结果,通过漫水法(flood fill)获得独立的区域块,并统计每个区域块的面积;将面积最大的区域块对应的图像上的区域,作为分割出来的乳腺图像。
在步骤三中，预设的图像模板可以为黑色底板的正方形图像；具体的，可以将获得的分割出来的乳腺图像，通过加黑边填充的方式扩充为1:1的正方形图像。
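步骤一至步骤三（二值化、漫水法取最大连通区域、加黑边扩充为正方形）可以用如下假设性的Python/NumPy示意，其中阈值与测试图像均为随意假设，并非实施例的实际实现：

```python
import numpy as np

def segment_and_pad(img, thresh):
    binary = (img > thresh).astype(np.uint8)      # 步骤一: 二值化
    labels = np.zeros(binary.shape, dtype=int)
    h, w = binary.shape
    best_label, best_area, cur = 0, 0, 0
    for i in range(h):                            # 步骤二: 漫水法标记连通区域
        for j in range(w):
            if binary[i, j] and not labels[i, j]:
                cur += 1
                stack, area = [(i, j)], 0
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y, x] and not labels[y, x]:
                        labels[y, x] = cur
                        area += 1
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                if area > best_area:
                    best_label, best_area = cur, area
    ys, xs = np.where(labels == best_label)       # 面积最大的区域作为乳腺图像
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    side = max(crop.shape)                        # 步骤三: 加黑边扩充为1:1正方形
    square = np.zeros((side, side), dtype=img.dtype)
    square[:crop.shape[0], :crop.shape[1]] = crop
    return square

img = np.zeros((8, 6))
img[1:6, 1:3] = 200    # 模拟面积较大的乳腺区域
img[7, 5] = 180        # 模拟较小的干扰区域
out = segment_and_pad(img, 100)
print(out.shape)  # (5, 5)
```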
另外，输出的乳腺影像可以通过像素缩放，例如，可以将图像插值缩放到4096像素×4096像素大小。
针对乳腺,由于乳腺照射剂量以及拍摄的外界因素等原因,可以通过调整乳腺的窗宽窗位,以获得更好的乳腺影像识别的识别效果。一种可能的实现方式,所述将所述乳腺影像输入至第二特征提取模块之前,还包括:
获取所述乳腺影像的原始文件;
在所述乳腺影像的原始文件中选取至少一组窗宽窗位,并获取所述至少一组窗宽窗位对应的图片格式的乳腺影像;
根据所述至少一组窗宽窗位对应的图片格式的乳腺影像,作为输入至所述第二特征提取模块的乳腺影像。
在一个具体实施例中，可以通过三组窗宽窗位，将dicom图像转换为png图像，例如，第一组窗宽为4000，窗位为2000；第二组窗宽为1000，窗位为2000；第三组窗宽为1500，窗位为1500。
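按窗宽窗位将原始灰度映射为图片格式灰度的过程，可以用如下假设性的Python/NumPy示意（线性映射到0~255为常见做法，此处作为假设）：

```python
import numpy as np

def apply_window(pixels, width, level):
    # 以窗位为中心、窗宽为范围, 将原始灰度线性映射到 0~255
    low, high = level - width / 2, level + width / 2
    out = (np.clip(pixels, low, high) - low) / (high - low) * 255.0
    return out.astype(np.uint8)

raw = np.array([0, 1000, 2000, 3000, 4096], dtype=float)
print(apply_window(raw, 4000, 2000).tolist())  # [0, 63, 127, 191, 255]
```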
本发明实施例提供的一种乳腺影像征象识别的方法,该流程的具体步骤包括:
步骤一,获取乳腺影像中乳腺病灶的坐标。
乳腺影像为二维图像，乳腺病灶的二维坐标可以为乳腺病灶内的点的二维坐标(比如乳腺病灶中心点的二维坐标)，也可以是乳腺病灶表面的点的二维坐标。
步骤二,根据乳腺病灶的坐标从乳腺影像中确定包含乳腺病灶的感兴趣区域ROI。
具体地,以乳腺病灶的二维坐标为中心,向周围扩展预设距离,确定包含乳腺病灶的识别框,预设距离为乳腺病灶的半径的预设倍数,比如乳腺病灶半径的1.25倍。然后截取此识别框,并插值缩放到一定的大小。
一种可能的实现方式，可以对识别框中每一个像素附加一个空间信息通道，输出感兴趣区域ROI，空间信息通道为像素与乳腺病灶的二维坐标之间的距离。
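为每个像素附加"到病灶二维坐标的距离"这一空间信息通道，可以用如下假设性的Python/NumPy示意（此处以欧氏距离为例，属于假设）：

```python
import numpy as np

def add_distance_channel(roi, cx, cy):
    # 为ROI的每个像素附加一个到病灶坐标(cx, cy)的距离通道
    h, w = roi.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    return np.stack([roi, dist], axis=0)   # (2, H, W): 灰度通道 + 距离通道

roi = np.random.rand(5, 5)
out = add_distance_channel(roi, 2, 2)
print(out.shape)      # (2, 5, 5)
print(out[1, 2, 2])   # 0.0, 病灶中心处距离为0
```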
一种可能的实现方式,若确定所述乳腺病灶的半径大于第二预设距离,则将第一预设距离扩大预设倍数;所述第二预设距离小于或等于所述第一预设距离。
例如，第一预设距离对应768*768大小的影像；根据乳腺病灶坐标，切取768*768大小影像作为ROI。第二预设距离可以为640*640；如果乳腺病灶大小超过640*640，则将ROI的尺寸扩大至1.2倍，再缩放至768*768大小影像。
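上述ROI尺寸的选取规则可以用如下假设性的Python示意（数值取自文中示例，取整方式为假设）：

```python
def roi_side(lesion_size, base=768, limit=640, factor=1.2):
    # 病灶尺寸超过第二预设距离(limit)时, 将ROI边长扩大factor倍, 之后再缩放回base
    side = base
    if lesion_size > limit:
        side = int(base * factor)
    return side

print(roi_side(500))  # 768
print(roi_side(700))  # 921
```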
步骤三,根据ROI以及乳腺病灶检测模型从乳腺影像中分割出乳腺病灶区域。
乳腺病灶检测模型是采用卷积神经网络对已标记乳腺病灶区域的多幅乳腺影像进行训练后确定的。
在一种可能的实施方式中,可以直接将乳腺影像输入乳腺病灶检测模型,通过乳腺病灶检测模型输出乳腺病灶区域。
在另一种可能的实施方式中,可以将乳腺影像中的ROI输入乳腺病灶检测模型,通过乳腺病灶检测模型输出乳腺病灶区域。具体地,ROI的大小可以根据实际情况进行设定,由于根据乳腺病灶的二维坐标从乳腺影像中确定包含乳腺病灶的感兴趣区域ROI,故缩小了检测乳腺病灶的区域,相较于将整张乳腺影像输入乳腺病灶检测模型确定乳腺病灶区域的方法,将ROI输入乳腺病灶检测模型确定乳腺病灶区域能有效提高乳腺病灶区域的检测精度和检测效率。
本发明实施例提供的一种乳腺影像征象的识别方法的流程,该流程可以由乳腺影像征象识别的装置执行,如图4所示,该流程的具体步骤包括:
步骤401,获取乳腺影像以及所述乳腺影像中乳腺病灶的坐标;
步骤402,根据所述乳腺病灶的坐标从所述乳腺影像中确定包含所述乳腺病灶的感兴趣区域ROI;
步骤403:将所述ROI输入至第一特征提取模块中,确定出乳腺病灶征象的特征图像;
其中，所述第一特征提取模块包括K个卷积模块；所述K个卷积模块的每个卷积模块中依次包括第一卷积层和第二卷积层；所述第一卷积层输出的特征图像的个数小于所述第一卷积层输入的特征图像的个数；所述第二卷积层输出的特征图像的个数大于或等于所述第一卷积层输入的特征图像的个数；K为正整数；
步骤404:将所述第一特征提取模块输出的特征图像输入至分类模块中,确定所述乳腺病灶的征象。
本发明实施例采用的第一特征提取模块，是通过对大量数据进行训练得到的，从而使得通过模型得到的结果较为合理，且具有一定的科学依据。相比于传统的医生诊断的方式而言，能够降低因医生水平差异导致的诊断误差率，从而提高确定乳腺病灶征象的准确性；进一步地，由于提取乳腺影像中每个ROI的特征图像，可以快速识别乳腺病灶的征象，提高了乳腺病灶征象识别的效率。另外，通过在第一特征提取模块中，设置第一卷积层输出的通道数减少，且第二卷积层输出的通道数增加，使得卷积过程中，有效地保留了图像中的有效信息，在减少参数量的同时，提高了特征图像提取的有效性，进而提高了检测乳腺病灶征象的准确性。
第一特征提取模块的参数可以是通过对多个患者的乳腺图像进行训练得到的。其中,第一特征提取模块可以为浅层特征提取模块,也可以为深层特征提取模块,即该特征提取神经网络可以包括K个卷积模块,且K小于或等于第一阈值。本领域技术人员可以根据经验和实际情况来设定第一阈值的具体数值,此处不做限定。
为了更加清楚地描述上文所涉及的第一特征提取模块，该第一特征提取模块可以包括三个卷积模块。每个卷积模块可以包括第一卷积层和第二卷积层，第一卷积层包括卷积层，与卷积层连接的归一化(Batch Normalization，BN)层、与BN层连接的激活函数层。
为增加第一特征提取模块的深度,一种可能的实现方式,特征图像经过卷积模块的步骤可以包括:
步骤一:将所述卷积模块输入的特征图像输入至所述第一卷积层获得第一特征图像;第一卷积层的卷积核可以为N1*m*m*N2;N1为所述卷积模块输入的特征图像的通道数,N2为第一特征图像的通道数;N1>N2。
步骤二：将第一特征图像输入至所述第二卷积层获得第二特征图像；第二卷积层的卷积核可以为N2*n*n*N3；N3为第二特征图像的通道数；N3>N2；
步骤三:将所述卷积模块输入的特征图像和所述第二特征图像合并后,确定为所述卷积模块输出的特征图像。
一种可能的实现方式，N3=N1。
上文所描述的乳腺影像对应的特征图像的确定方式仅为一种可能的实现方式,在其它可能的实现方式中,也可以通过其它方式确定乳腺影像对应的特征图像,具体不做限定。
需要说明的是:本发明实施例中的激活函数可以为多种类型的激活函数,比如,可以为线性整流函数(Rectified Linear Unit,ReLU),具体不做限定;
由于本发明实施例中输入的图像为二维图像，因此，本发明实施例中的第一特征提取模块可以为二维(2 Dimensions，2D)卷积神经网络中的第一特征提取模块，相应地，第一卷积层的卷积核大小可以为m*m、第二卷积层的卷积核大小可以为n*n；m和n可以相同也可以不同，在此不做限定；其中，m，n为大于或等于1的整数。第一卷积层输出的特征图像的个数小于所述第一卷积层输入的特征图像的个数；所述第二卷积层输出的特征图像的个数大于或等于所述第一卷积层输入的特征图像的个数。
进一步的,为优化第一特征提取模块,一种可能的实现方式,所述第一卷积层和所述第二卷积层之间还包括第三卷积层;所述第三卷积层输入的特征图像为所述第一卷积层输出的图像,所述第三卷积层输出的特征图像为所述第二卷积层输入的图像。
其中,第三卷积层的卷积核大小可以为k*k,k与m,n可以相同,也可以不同,在此不做限定。
一个具体的实施例中，所述第一卷积层的卷积核的大小为3*3；所述第二卷积层的卷积核的大小为3*3；所述第三卷积层的卷积核的大小为1*1。
通过上述卷积核的设置方式,可以有效的提高特征提取的感知野,有利于提高乳腺病灶征象的准确度。
为进一步提高第一特征提取模块的鲁棒性,一种可能的实现方式,所述第一特征提取模块中还包括L个下采样模块;所述L个下采样模块中的每个下采样模块包括所述第一卷积层、所述第二卷积层、池化层和第四卷积层;特征图像经过下采样模块的步骤可以包括:
步骤一:将所述下采样模块的特征图像依次输入至所述第一卷积层和所述第二卷积层和池化层获得第一特征图像;
在一个具体的实施例中，可以将输入特征图像依次通过第一卷积层和第四卷积层，输出的特征图像的通道数减少，再通过一个第二卷积层将特征图像的通道数增大回原有特征图的通道数。将第二卷积层输出的特征图像，输入至池化层，通过2*2的平均池化将特征图像的像素尺寸缩小到输入的一半，获得第一特征图像。
步骤二:将所述下采样模块的特征图像输入至第四卷积层,获得第二特征图像;
具体的,所述第四卷积层的卷积步长设为2,第二特征图像的像素尺寸为输入的特征图像的像素尺寸一半;卷积核大小可以与第一卷积层大小相同,也可以不同,在此不做限定。
步骤三:将所述第一特征图像和所述第二特征图像合并后,确定为所述下采样模块输出的特征图像。
为提高特征提取的感知野，提高特征提取的性能，一种可能的实现方式，所述第一特征提取模块之前还包括特征预处理模块；所述特征预处理模块包括一个卷积层，一个BN层，一个Relu层和一个池化层；所述特征预处理模块的卷积核大小大于所述K个卷积模块中任一卷积模块的卷积核的大小。特征图像经过特征预处理模块的步骤可以包括：将所述乳腺影像输入至特征预处理模块，获得预处理的特征图像；将所述预处理的特征图像作为所述第一特征提取模块的输入。
优选的,所述卷积层的卷积核大小可以为5*5,间隔为2个像素。池化层为2*2的最大值池化。通过特征预处理模块,可以将图像面积迅速缩小,边长变为原有1/4,有效的提高特征图像的感知野。
本发明实施例提供的一种分类模块的结构，该分类模块包括平均池化层、dropout层、全连接层和softmax层。待确诊患者对应的特征向量可以依次通过平均池化层、dropout层、全连接层进行计算后，再由softmax层进行分类后输出分类结果，从而得到患者的乳腺病灶征象。
具体的，首先通过全局平均池化层，将特征图提取成一个特征向量。再将特征向量通过一层dropout，全连接层和softmax层，获得一个分类置信度向量(包括：钙化、肿块/不对称、以及结构扭曲三种类型)。每一位表示为此类型的置信度，且所有置信度的和为1。输出置信度最高的位，此位所代表的类型即为算法预测的乳腺征象。
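分类模块"全局平均池化→全连接→softmax"的推理过程可以用如下假设性的Python/NumPy示意（权重、偏置与特征图均为随意假设，推理时省略dropout）：

```python
import numpy as np

def classify(feature_map, w, b):
    vec = feature_map.mean(axis=(1, 2))   # 全局平均池化: 特征图 -> 特征向量
    logits = w @ vec + b                  # 全连接层
    e = np.exp(logits - logits.max())
    probs = e / e.sum()                   # softmax: 各置信度之和为1
    return probs, int(probs.argmax())     # 输出置信度最高的位

classes = ["钙化", "肿块/不对称", "结构扭曲"]
fm = np.random.rand(4, 8, 8)
w = np.eye(3, 4)
b = np.array([0.0, 1.0, 0.0])             # 偏置使第二类置信度最高(假设)
probs, idx = classify(fm, w, b)
print(classes[idx])  # 肿块/不对称
```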
需要说明的是,本发明实施例提供的分类模块仅为一种可能的结构,在其它示例中,本领域技术人员可以对发明实施例提供分类模块的内容进行修改,比如,分类模块可以包括2个全连接层,具体不做限定。
本发明实施例中,第一特征提取模块和分类模块可以作为一个神经网络分类模型进行训练,在训练神经网络分类模型的过程中,可以将多个患者对应的特征向量输入到初始的神经网络分类模型中,得到每个乳腺影像对应的预测腺体分型,并根据所述标注后的乳腺影像的乳腺病灶征象结果,进行反向训练,生成所述神经网络分类模型。
下面具体介绍一下通过神经网络分类模型训练确定乳腺病灶征象识别模型过程,包括以下步骤:
步骤一,获取乳腺影像作为训练样本。
具体地，可以将获取的多幅乳腺影像直接作为训练样本，也可以对获取的多幅乳腺影像进行增强操作，扩大训练样本的数据量，增强操作包括但不限于：随机上下左右平移设定像素(比如0~20像素)、随机旋转设定角度(比如-15~15度)、随机缩放设定倍数(比如0.85~1.15倍)。
步骤二,人工标记训练样本中乳腺病灶区域中的征象。
可以通过医生等专业人员对训练样本进行标记。具体地,可以由多名医生对乳腺病灶区域的征象进行标注,并通过多人投票合成的方式确定最终的乳腺病灶区域,结果用掩码图的方式保存。需要说明的是,人工标记训练样本中乳腺病灶区域的征象与训练样本的增强操作不分先后,可以先人工标记训练样本中的乳腺病灶区域,然后再将标记乳腺病灶区域的征象的训练样本进行增强操作,也可以先将训练样本进行增强操作,然后人工对增强操作后的训练样本进行标记。
步骤三,将训练样本输入卷积神经网络进行训练,确定乳腺病灶征象识别模型。
在一种可能的实施方式中,可以直接将已标记乳腺病灶区域的征象的乳腺影像作为训练样本输入卷积神经网络进行训练,确定乳腺病灶征象识别模型。
在另一种可能的实施方式中,可以对已标记乳腺病灶区域的乳腺影像进行处理后作为训练样本输入卷积神经网络进行训练,确定乳腺病灶征象识别模型,具体过程为:针对任意一个已标记乳腺病灶区域的乳腺影像,人工标记该乳腺影像中乳腺病灶的二维坐标,然后以乳腺病灶的二维坐标为中心,向周围扩展预设距离,确定包含乳腺病灶的识别框,预设距离为乳腺病灶的半径的预设倍数。对识别框中每一个像素附加一个空间信息通道,确定感兴趣区域ROI,空间信息通道为像素与乳腺病灶的二维坐标之间的距离。之后再将已标记乳腺病灶区域的ROI作为训练样本输入卷积神经网络进行训练,确定乳腺病灶征象识别模型。
通过增加距离信息,即像素与乳腺病灶的二维坐标之间的距离,可以根据乳腺病灶的区域进一步的提高乳腺病灶中的征象识别的准确度。
进一步地,采用上述训练确定的乳腺病灶征象识别模型确定乳腺影像中的乳腺病灶征象的过程,包括以下步骤:
步骤一，将所述ROI依次通过K个卷积模块提取所述ROI的特征图像，K大于0。
步骤二，将ROI的特征图像通过全局平均池化层，将特征图提取成一个特征向量。再将特征向量通过一层dropout，全连接层和sigmoid层，获得一个分类置信度向量。
步骤三，根据ROI的分类置信度向量确定乳腺病灶的征象。
其中,获得的二维的分类置信度向量中,每一位表示为一个类型的置信度。对各个类型设定切选阈值,将置信度大于阈值的类别作为此乳腺病灶的征象。即,输出高于阈值的位,此位所代表的类型即为模型预测的乳腺病灶征象。
在步骤203中,一种可能的实现方式,包括:
步骤一、将所述ROI的乳腺病灶征象的置信度以及所述乳腺的腺体分型结果,输入至多个分类器中,所述多个分类器用于确定所述乳腺影像的分级中每个级的2分类的置信度;
步骤二、根据所述多个分类器的分类结果,确定所述乳腺影像的分级。
举例来说，乳腺分级通常可以包括0-6级，可以将分类器设置为5个分类器，每个分类器分别为一个二分类的分类器，例如，第一个分类器输出的类型为小于或等于0级的置信度，和大于0级的置信度；第二个分类器输出的类型为小于或等于1级的置信度，和大于1级的置信度；第三个分类器输出的类型为小于或等于2级的置信度，和大于2级的置信度；第四个分类器输出的类型为小于或等于3级的置信度，和大于3级的置信度；第五个分类器输出的类型为小于或等于4级的置信度，和大于4级的置信度；
根据上述5个分类器输出的置信度的结果,进行平均,输出高于阈值的位作为所述乳腺影像的分级的结果。
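由多个二分类器的结果推得最终分级的过程，可以用如下假设性的Python示意。这里将各分类器"大于k级"的置信度依次与阈值比较，属于一种可能的合成方式（假设，并非实施例的实际实现）：

```python
def grade_from_binary(probs_gt, threshold=0.5):
    # probs_gt[k] 为第k+1个分类器输出的"大于k级"的置信度
    grade = 0
    for p in probs_gt:
        if p > threshold:   # 高于阈值则分级至少再升一级
            grade += 1
        else:
            break
    return grade

print(grade_from_binary([0.9, 0.8, 0.7, 0.3, 0.1]))  # 3
print(grade_from_binary([0.2, 0.1, 0.1, 0.1, 0.1]))  # 0
```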
基于相同的技术构思，本发明实施例提供了一种乳腺影像识别的装置，如图5所示，该装置可以执行乳腺影像识别的方法的流程，该装置包括获取单元501和处理单元502。
获取单元501,用于获取乳腺影像;
处理单元502,用于根据所述乳腺影像,确定所述乳腺影像中的乳腺病灶的感兴趣区域ROI及所述乳腺的腺体分型;根据所述ROI,确定所述ROI的乳腺病灶征象;根据所述ROI的乳腺病灶征象,以及所述乳腺的腺体分型,确定所述乳腺影像的分级。
一种可能的实现方式,所述处理单元502,具体用于:
根据第一特征提取模块，对已标记乳腺病灶区域的乳腺病灶进行训练后确定所述ROI的特征图像；所述第一特征提取模块包括K个卷积模块；所述K个卷积模块的每个卷积模块中依次包括第一卷积层、第二卷积层；所述第一卷积层输出的特征图像的个数小于所述第一卷积层输入的特征图像的个数；所述第二卷积层输出的特征图像的个数大于所述第一卷积层输出的特征图像的个数；K大于0；将所述ROI的特征图像输入至分类模块，确定所述ROI的乳腺病灶征象的置信度。
一种可能的实现方式,所述处理单元502,具体用于:
将所述ROI的乳腺病灶征象的置信度以及所述乳腺的腺体分型结果,输入至多个分类器中,所述多个分类器用于确定所述乳腺影像的分级中每个级的2分类的置信度;
根据所述多个分类器的分类结果,确定所述乳腺影像的分级。
一种可能的实现方式,所述处理单元502具体用于:
根据所述乳腺影像,确定乳腺影像中乳腺病灶的坐标;
以所述乳腺病灶的坐标为中心,向周围扩展第一预设距离,确定包含所述乳腺病灶的识别框,所述预设距离为所述乳腺病灶的半径的预设倍数;
若确定所述乳腺病灶的半径大于第二预设距离,则将第一预设距离扩大预设倍数;所述第二预设距离小于或等于所述第一预设距离。
一种可能的实现方式,所述第一特征提取模块还包括下采样模块;所述下采样模块包括所述第一卷积层、所述第二卷积层、池化层和第三卷积层;所述处理单元502,具体用于:
将所述第一特征提取模块输出的特征图像依次通过所述第一卷积层和所述第二卷积层和池化层,获得第一特征图像;
将所述第一特征提取模块输出的特征图像通过第三卷积层,获得第二特征图像;
将所述第一特征图像和所述第二特征图像,确定为所述下采样模块输出的特征图像。
一种可能的实现方式，所述第一特征提取模块还包括第一卷积模块，所述第一卷积模块位于所述K个卷积模块之前；所述处理单元502，还用于：
将所述乳腺影像输入至所述第一卷积模块中，所述第一卷积模块包括一个卷积层，一个BN层，一个Relu层和一个池化层；所述第一卷积模块的卷积核大小大于所述K个卷积模块中的卷积核的大小；
或者，所述第一卷积模块包括连续的多个卷积层，一个BN层，一个Relu层和一个池化层；所述第一卷积模块的卷积核大小与所述K个卷积模块中的最大的卷积核的大小相等。
本发明实施例提供了一种计算设备，包括至少一个处理单元以及至少一个存储单元，其中，所述存储单元存储有计算机程序，当所述程序被所述处理单元执行时，使得所述处理单元执行乳腺影像识别的方法的步骤。如图6所示，为本发明实施例中所述的计算设备的硬件结构示意图，该计算设备具体可以为台式计算机、便携式计算机、智能手机、平板电脑等。具体地，该计算设备可以包括存储器801、处理器802及存储在存储器上的计算机程序，所述处理器802执行所述程序时实现上述实施例中的任一乳腺影像识别的方法的步骤。其中，存储器801可以包括只读存储器(ROM)和随机存取存储器(RAM)，并向处理器802提供存储器801中存储的程序指令和数据。
进一步地，本申请实施例中所述的计算设备还可以包括输入装置803以及输出装置804等。输入装置803可以包括键盘、鼠标、触摸屏等；输出装置804可以包括显示设备，如液晶显示器(Liquid Crystal Display，LCD)、阴极射线管(Cathode Ray Tube，CRT)、触摸屏等。存储器801、处理器802、输入装置803和输出装置804可以通过总线或者其他方式连接，图6中以通过总线连接为例。处理器802调用存储器801存储的程序指令并按照获得的程序指令执行上述实施例提供的乳腺影像识别的方法。
本发明实施例还提供了一种计算机可读存储介质，其存储有可由计算设备执行的计算机程序，当所述程序在计算设备上运行时，使得所述计算设备执行乳腺影像识别的方法的步骤。
本领域内的技术人员应明白,本发明的实施例可提供为方法、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已描述了本发明的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例作出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本发明范围的所有变更和修改。
显然,本领域的技术人员可以对本发明进行各种改动和变型而不脱离本发明的精神和范围。这样,倘若本发明的这些修改和变型属于本发明权利要求及其等同技术的范围之内,则本发明也意图包含这些改动和变型在内。

Claims (10)

  1. 一种乳腺影像识别的方法,其特征在于,包括:
    获取乳腺影像;
    根据所述乳腺影像,确定所述乳腺影像中的乳腺病灶的感兴趣区域ROI及所述乳腺的腺体分型;
    根据所述ROI,确定所述ROI的乳腺病灶征象;
    根据所述ROI的乳腺病灶征象,以及所述乳腺的腺体分型,确定所述乳腺影像的分级。
  2. 如权利要求1所述的方法,其特征在于,所述根据所述ROI确定所述ROI的乳腺病灶征象,包括:
    根据第一特征提取模块,确定所述ROI的特征图像;所述第一特征提取模块包括K个卷积模块;所述K个卷积模块的每个卷积模块中依次包括第一卷积层、第二卷积层;所述第一卷积层输出的特征图像的个数小于所述第一卷积层输入的特征图像的个数;所述第二卷积层输出的特征图像的个数大于所述第一卷积层输出的特征图像的个数;K大于0;
    将所述ROI的特征图像输入至分类模块,确定所述ROI的乳腺病灶征象的置信度。
  3. 如权利要求2所述的方法,其特征在于,所述根据所述ROI的乳腺病灶征象,以及所述乳腺的腺体分型,确定所述乳腺影像的分级,包括:
    将所述ROI的乳腺病灶征象的置信度以及所述乳腺的腺体分型结果,输入至多个分类器中,所述多个分类器用于确定所述乳腺影像的分级中每个级的2分类的置信度;
    根据所述多个分类器的分类结果,确定所述乳腺影像的分级。
  4. 如权利要求2所述的方法,其特征在于,所述根据所述乳腺影像,确定所述乳腺影像中的乳腺病灶的感兴趣区域ROI,包括:
    根据所述乳腺影像,确定乳腺影像中乳腺病灶的坐标;
    以所述乳腺病灶的坐标为中心,向周围扩展第一预设距离,确定包含所述乳腺病灶的识别框,所述预设距离为所述乳腺病灶的半径的预设倍数;
    若确定所述乳腺病灶的半径大于第二预设距离,则将第一预设距离扩大预设倍数;所述第二预设距离小于或等于所述第一预设距离。
  5. 如权利要求2所述的方法,其特征在于,所述第一特征提取模块还包括下采样模块;所述下采样模块包括所述第一卷积层、所述第二卷积层、池化层和第三卷积层;所述根据第一特征提取模块,确定所述ROI的特征图像,包括:
    将所述第一特征提取模块输出的特征图像依次通过所述第一卷积层和所述第二卷积层和池化层,获得第一特征图像;
    将所述第一特征提取模块输出的特征图像通过第三卷积层,获得第二特征图像;
    将所述第一特征图像和所述第二特征图像,确定为所述下采样模块输出的特征图像。
  6. 如权利要求2所述的方法,其特征在于,所述第一特征提取模块还包括第一卷积模块,所述第一卷积模块位于所述K个卷积模块之前;所述将所述乳腺影像输入至所述第一特征提取模块中,包括:
    将所述乳腺影像输入至所述第一卷积模块中，所述第一卷积模块包括一个卷积层，一个BN层，一个Relu层和一个池化层；所述第一卷积模块的卷积核大小大于所述K个卷积模块中的卷积核的大小；
    或者，所述第一卷积模块包括连续的多个卷积层，一个BN层，一个Relu层和一个池化层；所述第一卷积模块的卷积核大小与所述K个卷积模块中的最大的卷积核的大小相等。
  7. 一种乳腺影像识别的装置,其特征在于,包括:
    获取单元,用于获取乳腺影像;
    处理单元，用于根据所述乳腺影像，确定所述乳腺影像中的乳腺病灶的感兴趣区域ROI及所述乳腺的腺体分型；根据所述ROI，确定所述ROI的乳腺病灶征象；根据所述ROI的乳腺病灶征象，以及所述乳腺的腺体分型，确定所述乳腺影像的分级。
  8. 如权利要求7所述的装置,其特征在于,所述处理单元,具体用于:
    根据第一特征提取模块，对已标记乳腺病灶区域的乳腺病灶进行训练后确定所述ROI的特征图像；所述第一特征提取模块包括N个卷积模块；所述N个卷积模块的每个卷积模块中依次包括第一卷积层、第二卷积层；所述第一卷积层输出的特征图像的个数小于所述第一卷积层输入的特征图像的个数；所述第二卷积层输出的特征图像的个数大于所述第一卷积层输出的特征图像的个数；N大于0；将所述ROI的特征图像输入至分类模块，确定所述ROI的乳腺病灶征象的置信度。
  9. 一种计算设备,其特征在于,包括至少一个处理单元以及至少一个存储单元,其中,所述存储单元存储有计算机程序,当所述程序被所述处理单元执行时,使得所述处理单元执行权利要求1~7任一权利要求所述方法的步骤。
  10. 一种计算机可读存储介质,其特征在于,其存储有可由计算设备执行的计算机程序,当所述程序在所述计算设备上运行时,使得所述计算设备执行权利要求1~7任一所述方法的步骤。
PCT/CN2019/082690 2018-10-16 2019-04-15 一种乳腺影像识别的方法及装置 WO2020077962A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811202692.2 2018-10-16
CN201811202692.2A CN109447065B (zh) 2018-10-16 2018-10-16 一种乳腺影像识别的方法及装置

Publications (1)

Publication Number Publication Date
WO2020077962A1 true WO2020077962A1 (zh) 2020-04-23

Family

ID=65546304

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082690 WO2020077962A1 (zh) 2018-10-16 2019-04-15 一种乳腺影像识别的方法及装置

Country Status (2)

Country Link
CN (1) CN109447065B (zh)
WO (1) WO2020077962A1 (zh)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739640A (zh) * 2020-06-22 2020-10-02 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) 一种基于乳腺钼靶和mr图像影像组学的风险预测系统
CN111899223A (zh) * 2020-06-30 2020-11-06 上海依智医疗技术有限公司 一种确定乳房图像中回缩征象的方法及装置
CN112699948A (zh) * 2020-12-31 2021-04-23 无锡祥生医疗科技股份有限公司 超声乳腺病灶的分类方法、装置及存储介质
CN113269774A (zh) * 2021-06-09 2021-08-17 西南交通大学 一种mri图像的帕金森病分类及标注病灶区域的方法
CN113539477A (zh) * 2021-06-24 2021-10-22 杭州深睿博联科技有限公司 一种基于解耦机制的病灶良恶性预测方法及装置
CN114067962A (zh) * 2021-11-17 2022-02-18 南通市肿瘤医院 一种基于医院放射科的影像存储传输一体系统
CN114305505A (zh) * 2021-12-28 2022-04-12 上海深博医疗器械有限公司 一种乳腺三维容积超声的ai辅助检测方法及系统
CN114820592A (zh) * 2022-06-06 2022-07-29 北京医准智能科技有限公司 图像处理装置、电子设备及介质
WO2022247007A1 (zh) * 2021-05-25 2022-12-01 平安科技(深圳)有限公司 医学图像分级方法、装置、电子设备及可读存储介质
CN116309585A (zh) * 2023-05-22 2023-06-23 山东大学 基于多任务学习的乳腺超声图像目标区域识别方法及系统
CN117830751A (zh) * 2024-03-06 2024-04-05 苏州凌影云诺医疗科技有限公司 一种基于DenseNet的肠息肉LST形态识别方法和装置

Families Citing this family (11)

Publication number Priority date Publication date Assignee Title
CN109363698B (zh) * 2018-10-16 2022-07-12 杭州依图医疗技术有限公司 一种乳腺影像征象识别的方法及装置
CN109447065B (zh) * 2018-10-16 2020-10-16 杭州依图医疗技术有限公司 一种乳腺影像识别的方法及装置
CN109363697B (zh) * 2018-10-16 2020-10-16 杭州依图医疗技术有限公司 一种乳腺影像病灶识别的方法及装置
CN110110600B (zh) * 2019-04-04 2024-05-24 平安科技(深圳)有限公司 眼部oct图像病灶识别方法、装置及存储介质
CN110111344B (zh) * 2019-05-13 2021-11-16 广州锟元方青医疗科技有限公司 病理切片图像分级方法、装置、计算机设备和存储介质
CN110276411B (zh) 2019-06-28 2022-11-18 腾讯科技(深圳)有限公司 图像分类方法、装置、设备、存储介质和医疗电子设备
CN110738263B (zh) * 2019-10-17 2020-12-29 腾讯科技(深圳)有限公司 一种图像识别模型训练的方法、图像识别的方法及装置
CN111028310B (zh) * 2019-12-31 2023-10-03 上海联影医疗科技股份有限公司 乳腺断层扫描的扫描参数确定方法、装置、终端及介质
CN111950544A (zh) * 2020-06-30 2020-11-17 杭州依图医疗技术有限公司 一种确定病理图像中感兴趣区域的方法及装置
CN111986165B (zh) * 2020-07-31 2024-04-09 北京深睿博联科技有限责任公司 一种乳房图像中的钙化检出方法及装置
CN112348082B (zh) * 2020-11-06 2021-11-09 上海依智医疗技术有限公司 深度学习模型构建方法、影像处理方法及可读存储介质

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105023023A (zh) * 2015-07-15 2015-11-04 福州大学 一种用于计算机辅助诊断的乳腺b超图像特征自学习提取方法
CN106447682A (zh) * 2016-08-29 2017-02-22 天津大学 基于帧间相关性的乳腺mri病灶的自动分割方法
CN108464840A (zh) * 2017-12-26 2018-08-31 安徽科大讯飞医疗信息技术有限公司 一种乳腺肿块自动检测方法及系统
CN109363698A (zh) * 2018-10-16 2019-02-22 杭州依图医疗技术有限公司 一种乳腺影像征象识别的方法及装置
CN109447065A (zh) * 2018-10-16 2019-03-08 杭州依图医疗技术有限公司 一种乳腺影像识别的方法及装置

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN106339591B (zh) * 2016-08-25 2019-04-02 汤一平 一种基于深度卷积神经网络的预防乳腺癌自助健康云服务系统
CN107220506A (zh) * 2017-06-05 2017-09-29 东华大学 基于深度卷积神经网络的乳腺癌风险评估分析系统


Cited By (16)

Publication number Priority date Publication date Assignee Title
CN111739640A (zh) * 2020-06-22 2020-10-02 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) 一种基于乳腺钼靶和mr图像影像组学的风险预测系统
CN111899223A (zh) * 2020-06-30 2020-11-06 上海依智医疗技术有限公司 一种确定乳房图像中回缩征象的方法及装置
CN112699948A (zh) * 2020-12-31 2021-04-23 无锡祥生医疗科技股份有限公司 超声乳腺病灶的分类方法、装置及存储介质
WO2022247007A1 (zh) * 2021-05-25 2022-12-01 平安科技(深圳)有限公司 医学图像分级方法、装置、电子设备及可读存储介质
CN113269774A (zh) * 2021-06-09 2021-08-17 西南交通大学 一种mri图像的帕金森病分类及标注病灶区域的方法
CN113269774B (zh) * 2021-06-09 2022-04-26 西南交通大学 一种mri图像的帕金森病分类及标注病灶区域的方法
CN113539477A (zh) * 2021-06-24 2021-10-22 杭州深睿博联科技有限公司 一种基于解耦机制的病灶良恶性预测方法及装置
CN114067962A (zh) * 2021-11-17 2022-02-18 南通市肿瘤医院 一种基于医院放射科的影像存储传输一体系统
CN114067962B (zh) * 2021-11-17 2024-05-28 南通市肿瘤医院 一种基于医院放射科的影像存储传输一体系统
CN114305505B (zh) * 2021-12-28 2024-04-19 上海深博医疗器械有限公司 一种乳腺三维容积超声的ai辅助检测方法及系统
CN114305505A (zh) * 2021-12-28 2022-04-12 上海深博医疗器械有限公司 一种乳腺三维容积超声的ai辅助检测方法及系统
CN114820592A (zh) * 2022-06-06 2022-07-29 北京医准智能科技有限公司 图像处理装置、电子设备及介质
CN116309585B (zh) * 2023-05-22 2023-08-22 山东大学 基于多任务学习的乳腺超声图像目标区域识别方法及系统
CN116309585A (zh) * 2023-05-22 2023-06-23 山东大学 基于多任务学习的乳腺超声图像目标区域识别方法及系统
CN117830751A (zh) * 2024-03-06 2024-04-05 苏州凌影云诺医疗科技有限公司 一种基于DenseNet的肠息肉LST形态识别方法和装置
CN117830751B (zh) * 2024-03-06 2024-05-07 苏州凌影云诺医疗科技有限公司 一种基于DenseNet的肠息肉LST形态识别方法和装置

Also Published As

Publication number Publication date
CN109447065A (zh) 2019-03-08
CN109447065B (zh) 2020-10-16

Similar Documents

Publication Publication Date Title
WO2020077962A1 (zh) 一种乳腺影像识别的方法及装置
CN109363698B (zh) 一种乳腺影像征象识别的方法及装置
CN109363699B (zh) 一种乳腺影像病灶识别的方法及装置
US10991093B2 (en) Systems, methods and media for automatically generating a bone age assessment from a radiograph
US10580137B2 (en) Systems and methods for detecting an indication of malignancy in a sequence of anatomical images
Valvano et al. Convolutional neural networks for the segmentation of microcalcification in mammography imaging
WO2020077961A1 (zh) 一种乳腺影像病灶识别的方法及装置
Shen et al. An automated lung segmentation approach using bidirectional chain codes to improve nodule detection accuracy
Banerjee et al. Automated 3D segmentation of brain tumor using visual saliency
CN110046627B (zh) 一种乳腺影像识别的方法及装置
Li et al. Texton analysis for mass classification in mammograms
Sarosa et al. Mammogram breast cancer classification using gray-level co-occurrence matrix and support vector machine
Palma et al. Detection of masses and architectural distortions in digital breast tomosynthesis images using fuzzy and a contrario approaches
US20120099771A1 (en) Computer aided detection of architectural distortion in mammography
CN109461144B (zh) 一种乳腺影像识别的方法及装置
Hupse et al. Use of normal tissue context in computer-aided detection of masses in mammograms
US11701066B2 (en) Device and method for detecting clinically important objects in medical images with distance-based decision stratification
Montaha et al. A shallow deep learning approach to classify skin cancer using down-scaling method to minimize time and space complexity
US11302444B2 (en) System and method for computer aided diagnosis of mammograms using multi-view and multi-scale information fusion
Jiang et al. Breast cancer detection and classification in mammogram using a three-stage deep learning framework based on PAA algorithm
CN112053325A (zh) 一种乳腺肿块图像处理和分类系统
Hu et al. A multi-instance networks with multiple views for classification of mammograms
Karale et al. Reduction of false positives in the screening CAD tool for microcalcification detection
CN111062909A (zh) 乳腺肿块良恶性判断方法及设备
Karale et al. A screening CAD tool for the detection of microcalcification clusters in mammograms

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19872512

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19872512

Country of ref document: EP

Kind code of ref document: A1
