WO2020077961A1 - Image-based breast lesion identification method and device - Google Patents

Image-based breast lesion identification method and device

Info

Publication number
WO2020077961A1
WO2020077961A1 (application PCT/CN2019/082687, CN2019082687W)
Authority
WO
WIPO (PCT)
Prior art keywords
breast
image
feature
convolution
images
Prior art date
Application number
PCT/CN2019/082687
Other languages
English (en)
Chinese (zh)
Inventor
魏子昆
华铱炜
蔡嘉楠
Original Assignee
杭州依图医疗技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州依图医疗技术有限公司 filed Critical 杭州依图医疗技术有限公司
Publication of WO2020077961A1

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 — Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52 — Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 — Devices involving processing of medical diagnostic data
    • A61B 6/5217 — Devices extracting a diagnostic or physiological parameter from medical diagnostic data
    • A61B 6/50 — Apparatus or devices specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B 6/502 — Apparatus or devices for diagnosis of breast, i.e. mammography

Definitions

  • Embodiments of the present invention relate to the technical field of machine learning, and in particular to a method and device for identifying lesions in breast images.
  • Breast imaging uses low-dose X-rays to examine the human breast. It can detect breast tumors, cysts and other lesions, which helps detect breast cancer early and reduce its mortality.
  • Breast imaging is an effective examination method that can be used to diagnose a variety of female breast diseases; its most important use is screening for breast cancer, especially early breast cancer. A method that reliably detects the early manifestations of breast cancer in a breast image would therefore be of great help to doctors.
  • Embodiments of the present invention provide a method and device for identifying breast imaging lesions, to address the inefficiency of prior-art approaches in which breast lesions in breast images are judged from the doctor's experience.
  • Embodiments of the present invention provide a method for identifying breast imaging lesions, including:
  • the breast image is input into a feature extraction module to obtain feature images of different sizes of the breast image;
  • the feature extraction module includes N convolution modules;
  • the N convolution modules are down-sampling convolution blocks and/or up-sampling convolution blocks; the size of the feature image extracted by each down-sampling or up-sampling convolution block is different; each of the N convolution modules includes a first convolution layer and a second convolution layer; the number of feature images output by the first convolution layer is less than the number of feature images input to the first convolution layer; the number of feature images output by the second convolution layer is greater than the number of feature images output by the first convolution layer; N is greater than 0;
  • the breast lesion of the breast image is determined.
  • the acquiring of feature images of different sizes of the breast image includes:
  • the breast image is passed sequentially through N/2 down-sampling convolution blocks to extract N/2 first feature images of the breast image; the first feature image output from the N/2th down-sampling convolution block is passed sequentially through N/2 up-sampling convolution blocks to extract N/2 second feature images of the breast image, the sizes of the second feature images extracted by each up-sampling convolution block being different;
  • after merging the first and second feature images of the same size, N feature images of different sizes of the breast image are determined.
  • In a possible implementation, a feature preprocessing module precedes the feature extraction module; before the breast image is input to the feature extraction module, the breast image is first input to the feature preprocessing module.
  • The feature preprocessing module includes a convolution layer, a BN layer, a ReLU layer and a pooling layer; the convolution kernel of the feature preprocessing module is larger than the convolution kernels in the N convolution modules.
  • Alternatively, the feature preprocessing module includes multiple consecutive convolution layers, a BN layer, a ReLU layer and a pooling layer; the convolution kernel of the feature preprocessing module is equal in size to the largest convolution kernel in the N convolution modules.
  • In a possible implementation, before the breast image is input to the feature extraction module, the method further includes:
  • converting the breast image into the picture format corresponding to at least one set of window width and window level, and using the converted breast image as the breast image input to the feature extraction module.
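  • As a concrete illustration of the window width/window level rendering above, the sketch below maps raw intensities into an 8-bit picture format; the linear clip-and-scale mapping and the example window settings are assumptions for the sketch, not values taken from this document:

```python
import numpy as np

def apply_window(image, window_width, window_level):
    # Map raw intensities to an 8-bit picture: values below
    # (level - width/2) clip to 0, values above (level + width/2) clip to 255.
    low = window_level - window_width / 2.0
    scaled = (image.astype(np.float64) - low) / window_width * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# The same raw image rendered under several window settings, so the model
# receives one picture per (width, level) pair:
raw = np.array([[100, 200], [300, 400]])
views = [apply_window(raw, ww, wl) for ww, wl in [(200, 250), (400, 250)]]
```

Each element of `views` is one picture-format rendering of the same breast image, matching the idea of inputting the image under at least one window setting.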
  • the breast images include images of different breasts at different projection positions; the inputting of the breast image to the feature extraction module includes:
  • the breast lesion identification frame is determined from the feature image; including:
  • the first breast lesion identification frame is deleted.
  • An embodiment of the present invention provides a device for identifying breast imaging lesions, including:
  • an acquisition unit, configured to acquire a breast image;
  • a processing unit, configured to input the breast image into a feature extraction module to obtain feature images of different sizes of the breast image; the feature extraction module includes N convolution modules;
  • the N convolution modules are down-sampling convolution blocks and/or up-sampling convolution blocks; the size of the feature image extracted by each down-sampling or up-sampling convolution block is different; each of the N convolution modules includes a first convolution layer and a second convolution layer; the number of feature images output by the first convolution layer is less than the number of feature images input to it; the number of feature images output by the second convolution layer is greater than the number output by the first convolution layer; N is greater than 0; for any one of the feature images of different sizes of the breast image, a breast lesion identification frame is determined from the feature image; according to the breast lesion identification frames determined from each feature image, the breast lesion of the breast image is determined.
  • the processing unit is specifically used to:
  • the breast image is passed sequentially through N/2 down-sampling convolution blocks to extract N/2 first feature images of the breast image; the first feature image output from the N/2th down-sampling convolution block is passed sequentially through N/2 up-sampling convolution blocks to extract N/2 second feature images of the breast image, the sizes of the second feature images extracted by each up-sampling convolution block being different; after merging the first and second feature images of the same size, N feature images of different sizes of the breast image are determined.
  • the breast images include images of different breasts at different projection positions; the processing unit is specifically configured to:
  • input the breast image to the feature extraction module; take the image of the other breast at the same projection position as a reference image of the breast image, and input it to the feature extraction module to obtain a reference feature image; determine the first breast lesion identification frame in the feature image and the second breast lesion identification frame in the reference feature image; and, if the positions and/or sizes of the first and second breast lesion identification frames are determined to be the same, delete the first breast lesion identification frame.
  • An embodiment of the present invention provides a computer device including at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to execute the steps of the method for identifying breast imaging lesions.
  • An embodiment of the present invention provides a computer-readable storage medium storing a computer program executable by a computer device; when the program runs on the computer device, it causes the computer device to execute the steps of the method for identifying breast imaging lesions.
  • an embodiment of the present invention further provides a computer program product
  • the computer program product includes a computer program stored on a computer-readable storage medium
  • the computer program includes program instructions which, when executed by a computer device, cause the computer device to perform the steps of the method for identifying breast imaging lesions.
  • the lesion of the mammary gland can be quickly identified, and the efficiency of identifying the breast lesion is improved.
  • the number of channels output by the first convolution layer is reduced, and the number of channels output by the second convolution layer is increased, so that the effective information in the image is effectively retained during the convolution process. While reducing the amount of parameters, the effectiveness of feature image extraction is improved, thereby improving the accuracy of breast lesion recognition in breast images.
  • FIG. 1a is a schematic diagram of a breast image provided by an embodiment of the present invention.
  • FIG. 1b is a schematic diagram of a breast image provided by an embodiment of the present invention.
  • FIG. 1c is a schematic diagram of a breast image provided by an embodiment of the present invention.
  • FIG. 1d is a schematic diagram of a breast image provided by an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a method for identifying a breast imaging lesion according to an embodiment of the present invention.
  • FIG. 3a is a schematic structural diagram of a feature extraction module provided by an embodiment of the present invention.
  • FIG. 3b is a schematic structural diagram of a feature extraction module provided by an embodiment of the present invention.
  • FIG. 3c is a schematic structural diagram of a feature extraction module provided by an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of breast imaging lesion recognition provided by an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of breast imaging lesion recognition provided by an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a device for identifying breast imaging lesions according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
  • The breast X-ray image is used below as an illustrative example; other types of images are handled similarly and are not repeated here.
  • Breast X-ray images can be used to examine the breasts of humans (mainly women) using low-dose (about 0.7 mSv) X-rays. It can detect various breast tumors, cysts and other lesions, which helps to detect breast cancer early. And reduce its mortality. Some countries encourage older women (generally over 45 years old) to perform mammography regularly (with intervals ranging from one year to five years) to screen for early breast cancer.
  • A breast examination generally includes four X-ray images, namely the images of the two projection positions (craniocaudal, CC; mediolateral oblique, MLO) of each of the two breasts, as shown in Figures 1a, 1b, 1c and 1d.
  • the prior art often only detects a single type of lesions such as calcifications or masses, and cannot simultaneously detect multiple lesions, and the application range is narrow. At the same time, for many types of lesions such as calcifications, masses, asymmetry, and structural distortion, the accuracy of detection is poor and cannot meet the application requirements.
  • an embodiment of the present invention provides a method for identifying breast imaging lesions, as shown in FIG. 2, including:
  • Step 201: Obtain a breast image.
  • Step 202: Input the breast image into a feature extraction module to obtain feature images of different sizes of the breast image.
  • The feature extraction module includes N convolution modules; the N convolution modules are down-sampling convolution blocks and/or up-sampling convolution blocks; the size of the feature image extracted by each down-sampling or up-sampling convolution block is different.
  • Each of the N convolution modules includes a first convolution layer and a second convolution layer; the number of feature images output by the first convolution layer is less than the number of feature images input to the first convolution layer; the number of feature images output by the second convolution layer is greater than the number of feature images output by the first convolution layer; N is greater than 0.
  • the feature extraction module may include three down-sampling convolution blocks.
  • Each convolution module may include a first convolution layer and a second convolution layer.
  • The first convolution layer includes a convolution layer, a batch normalization (BN) layer connected to the convolution layer, and an activation function layer connected to the BN layer; as shown in Fig. 3a, the convolution module includes a first convolution layer and a second convolution layer.
  • the step of the feature image passing through the convolution module may include:
  • Step 1: Input the feature image received by the convolution module into the first convolution layer to obtain the first feature image.
  • The convolution kernel of the first convolution layer may be N1 * m * m * N2,
  • where N1 is the number of channels of the feature image input to the convolution module,
  • and N2 is the number of channels of the first feature image; N1 > N2.
  • Step 2: Input the first feature image into the second convolution layer to obtain the second feature image.
  • The convolution kernel of the second convolution layer may be N2 * m * m * N3,
  • where N3 is the number of channels of the second feature image; N3 > N2.
  • Step 3: After combining the feature image input to the convolution module with the second feature image, the result is determined as the feature image output by the convolution module.
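  • Steps 1-3 above can be sketched as follows. The sketch illustrates only the channel arithmetic (N1 > N2 for the first layer, N3 > N2 for the second), using 1 × 1 convolutions with random weights for brevity; the merge in step 3 is shown as channel concatenation, which is one plausible reading of "combining" and is an assumption of the sketch:

```python
import numpy as np

def conv1x1(x, weights):
    # A 1x1 convolution is a per-pixel linear map over channels:
    # x has shape (H, W, C_in), weights has shape (C_in, C_out).
    return np.tensordot(x, weights, axes=([2], [0]))

def conv_module(x, n2, n3, rng):
    # Step 1: the first convolution layer shrinks the channel count (N1 -> N2).
    # Step 2: the second convolution layer expands it again (N2 -> N3).
    # Step 3: merge the module's input with the second feature image.
    n1 = x.shape[2]
    assert n2 < n1 and n3 > n2           # N1 > N2 and N3 > N2, as in the text
    first = conv1x1(x, rng.standard_normal((n1, n2)))
    second = conv1x1(first, rng.standard_normal((n2, n3)))
    return np.concatenate([x, second], axis=2)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 64))        # N1 = 64 input feature images
y = conv_module(x, n2=16, n3=96, rng=rng)  # bottleneck to 16, expand to 96
```

Shrinking then re-expanding the channel count is the parameter-saving bottleneck the text describes: the weight tensors have N1·N2 + N2·N3 entries instead of N1·N3.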
  • the method for determining the feature image corresponding to the breast image described above is only one possible implementation manner. In other possible implementation manners, the feature image corresponding to the breast image may also be determined by other methods, which is not specifically limited.
  • The activation function in the embodiment of the present invention may be any of multiple types of activation functions, for example a rectified linear unit (ReLU), which is not specifically limited.
  • The feature extraction module in the embodiment of the present invention may be a feature extraction module in a two-dimensional (2D) convolutional neural network.
  • The size of the convolution kernel of the first convolution layer may be m * m, and that of the second convolution layer may be n * n; m and n may be the same or different, which is not limited here, and m and n are integers greater than or equal to 1.
  • the number of feature images output by the first convolution layer is less than the number of feature images input by the first convolution layer; the number of feature images output by the second convolution layer is greater than the first convolution layer The number of output feature images.
  • In a possible implementation, as shown in FIG. 3c, a third convolution layer is further included between the first convolution layer and the second convolution layer;
  • the feature image input to the third convolution layer is the image output by the first convolution layer, and the feature image output by the third convolution layer is the image input to the second convolution layer.
  • the size of the convolution kernel of the third convolutional layer may be k * k, and k may be the same as m or n, or may be different, which is not limited herein.
  • the size of the convolution kernel of the first convolution layer is 3 * 3; the size of the convolution kernel of the second convolution layer is 3 * 3; the third convolution layer The size of the convolution kernel is 1 * 1.
  • In this way, the receptive field of feature extraction can be effectively enlarged, which helps improve the accuracy of breast lesion recognition.
  • the feature images of different sizes may be feature images of different pixels, for example, the feature image with pixels 500 ⁇ 500 and the feature image with pixels 1000 ⁇ 1000 are feature images with different sizes.
  • a pre-trained breast lesion detection model is used to extract feature images of different sizes of breast images.
  • The model is determined by training a 2D convolutional neural network on a plurality of labeled breast images.
  • The image is scaled to a specific size so that the pixel spacing in each direction corresponds to the same physical length.
  • the feature extraction module includes N / 2 down-sampling convolution blocks and N / 2 up-sampling convolution blocks; and acquiring feature images of different sizes of the breast image includes:
  • the breast image is passed sequentially through the N/2 down-sampling convolution blocks to extract N/2 first feature images of the breast image; the first feature image output from the N/2th down-sampling convolution block is passed sequentially through the N/2 up-sampling convolution blocks to extract N/2 second feature images of the breast image, the sizes of the second feature images extracted by each up-sampling convolution block being different;
  • after merging the first and second feature images of the same size, N feature images of different sizes of the breast image are determined.
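  • The down-sampling/up-sampling structure above can be sketched with N = 6 (three blocks of each kind); stride-2 slicing and nearest-neighbour repetition stand in for the actual convolution blocks, which is an illustrative simplification:

```python
import numpy as np

def downsample(x):
    return x[::2, ::2]  # stride-2 slicing halves each spatial dimension

def upsample(x):
    return x.repeat(2, axis=0).repeat(2, axis=1)  # doubles each dimension

image = np.zeros((64, 64))

first_images, second_images = [], []
x = image
for _ in range(3):            # N/2 = 3 down-sampling convolution blocks
    x = downsample(x)
    first_images.append(x)    # sizes 32x32, 16x16, 8x8 -- all different
for _ in range(3):            # N/2 = 3 up-sampling convolution blocks
    x = upsample(x)
    second_images.append(x)   # sizes 16x16, 32x32, 64x64 -- all different

# First and second feature images of equal size are merged (stacked here):
merged = [np.stack([f, s])
          for f in first_images for s in second_images
          if f.shape == s.shape]
```

The shapes show why every block yields a different size, and which first/second feature images pair up for merging.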
  • In one implementation, a feature preprocessing module precedes the feature extraction module; the feature preprocessing module includes one convolution layer, one BN layer, one ReLU layer and one pooling layer; the convolution kernel of the feature preprocessing module is larger than that of any of the N convolution modules.
  • The size of the convolution kernel of the convolution layer may be 7 * 7, with a stride of 2 pixels.
  • The pooling layer is a 2 * 2 max pooling layer.
  • Alternatively, the feature preprocessing module includes multiple consecutive convolution layers, a BN layer, a ReLU layer and a pooling layer; the convolution kernel of the feature preprocessing module is equal in size to the largest convolution kernel among the N convolution modules.
  • Processing by the feature preprocessing module may include: inputting the breast image into the feature preprocessing module to obtain a preprocessed feature image, and using the preprocessed feature image as the input of the feature extraction module.
  • Step 203 For any one of the feature images of different sizes of the breast image, determine a breast lesion recognition frame from the feature image.
  • a pre-trained breast lesion detection model is used to determine the breast lesion recognition frame from the feature image.
  • The breast lesion detection model is determined by training a 2D convolutional neural network on multiple breast images with marked breast lesions.
  • The area framed by a breast lesion identification frame determined from a feature image does not necessarily contain a breast lesion, so each identification frame is screened according to its breast lesion probability: identification frames whose breast lesion probability is less than a preset threshold
  • are deleted, where the breast lesion probability is the probability that the area framed by the identification frame is a breast lesion.
  • Step 204 Determine the breast lesion of the breast image according to the breast lesion identification frame determined from each feature image.
  • The screened identification frames are output as the breast lesions in the breast image.
  • The output breast lesion parameters include the center coordinates of the breast lesion and the diameter of the breast lesion, where the center coordinates of the breast lesion are the center coordinates of the identification
  • frame, and the diameter of the breast lesion is the distance from the center of the identification frame to one of its edges.
  • In this way, both large and small breast lesions can be detected, which improves the precision of breast lesion detection.
  • the method of automatically detecting the breast lesion in the present application effectively improves the recognition efficiency of the breast lesion.
  • Since the identification frames determined from each feature image may include multiple frames corresponding to the same breast lesion, directly counting breast lesions by the number of identification frames would produce a large deviation. Each feature image is therefore converted to the same size and aligned, the identification frames determined from the feature images are screened, and the screened identification frames are determined as the breast lesions in the breast image.
  • the breast image includes breast images of different breasts with different projection positions;
  • the input of the breast image to the feature extraction module includes:
  • the breast lesion identification frame is determined from the feature image; including:
  • the first breast lesion identification frame is deleted.
  • In one implementation, the screening of the breast lesion identification frames includes the following steps, as shown in Figure 3:
  • Step 301: From the breast lesion identification frames of each feature image, determine the identification frame with the highest breast lesion probability.
  • Step 302: Calculate the intersection ratio between the identification frame with the highest breast lesion probability and each of the other identification frames.
  • Step 303: Delete the other identification frames whose intersection ratio is greater than a preset threshold.
  • Step 304: From the remaining identification frames, determine the one with the highest breast lesion probability, and repeat the screening process until no other identification frames remain.
  • The screening process is described below with a specific example. Suppose the identification frames determined from the feature images are A, B, C, D, E and F,
  • sorted by breast lesion probability from largest to smallest as E, C, A, B, D, F. After sorting, the identification frame with the highest breast lesion probability
  • is E; the intersection ratio (IOU) between frame E and each other identification frame is then calculated.
  • the calculation of the intersection ratio is shown in equation (1):
  • IOU = Area(m ∩ n) / Area(m ∪ n) (1)
  • where m is the breast lesion identification frame with the highest breast lesion probability, n is the breast lesion identification frame compared with frame m,
  • and IOU is the intersection ratio between frame m and frame n.
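  • Equation (1) translates directly into code. The sketch below assumes boxes given as (x1, y1, x2, y2) corner coordinates, a representation the text does not fix:

```python
def iou(box_m, box_n):
    # Boxes are (x1, y1, x2, y2) corner coordinates.
    ix1, iy1 = max(box_m[0], box_n[0]), max(box_m[1], box_n[1])
    ix2, iy2 = min(box_m[2], box_n[2]), min(box_m[3], box_n[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # Area(m ∩ n)
    area_m = (box_m[2] - box_m[0]) * (box_m[3] - box_m[1])
    area_n = (box_n[2] - box_n[0]) * (box_n[3] - box_n[1])
    return inter / (area_m + area_n - inter)        # divide by Area(m ∪ n)
```

Identical frames give an IOU of 1, disjoint frames give 0, and partial overlaps fall in between.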
  • Set the preset threshold to 0.5. If the intersection ratios of frames C and A with frame E are each greater than 0.5, while the intersection ratios of frames B, D and F with frame E are each less than 0.5, then frames C and A are deleted and frame E is identified as a breast lesion in the breast image.
  • The remaining frames B, D and F are again sorted by breast lesion probability; the frame with the highest probability is B, and the intersection ratios between B and D and between B and F are calculated. If the intersection ratio between B and D is greater than 0.5 and that between B and F is less than 0.5, frame D is deleted, and frames B and F are determined as breast lesions in the breast image.
  • In this way, the identification frames from each feature image are screened based on their breast lesion probabilities and the intersection ratios between them, which avoids repeatedly detecting and outputting the same breast lesion and improves the accuracy of counting breast lesions in the breast image.
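  • The screening of steps 301-304 is essentially non-maximum suppression. The sketch below reproduces the worked example with hypothetical coordinates chosen so that the intersection ratios fall on the same sides of the 0.5 threshold as in the text:

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def screen_frames(boxes, probs, threshold=0.5):
    # Steps 301-304: repeatedly keep the highest-probability frame and
    # delete every remaining frame whose IoU with it exceeds the threshold.
    order = sorted(range(len(boxes)), key=lambda i: probs[i], reverse=True)
    kept = []
    while order:
        best = order.pop(0)
        kept.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= threshold]
    return kept

# Hypothetical frames A-F; the probabilities sort them as E, C, A, B, D, F.
boxes = {"A": (0, 0, 9, 9), "B": (20, 20, 30, 30), "C": (1, 1, 11, 11),
         "D": (21, 21, 31, 31), "E": (0, 0, 10, 10), "F": (50, 50, 60, 60)}
probs = {"A": 0.7, "B": 0.6, "C": 0.8, "D": 0.5, "E": 0.9, "F": 0.4}
names = list(boxes)
kept = screen_frames([boxes[n] for n in names], [probs[n] for n in names])
kept_names = sorted(names[i] for i in kept)  # E suppresses C and A; B suppresses D
```

With these coordinates the procedure keeps exactly E, B and F, matching the worked example.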
  • Step 401: Obtain breast images as training samples.
  • The acquired breast images can be used directly as training samples, or enhanced to expand the amount of training data. Enhancement operations include, but are not limited to: random translation by a set number of pixels up, down, left or right (e.g. 0-20 pixels), random rotation by a set angle (e.g. -15 to 15 degrees), and random scaling by a set factor (e.g. 0.85-1.15 times).
  • Step 402: Manually mark the breast lesions in the training samples.
  • the breast lesions in the training sample can be marked by doctors and other professionals, and the content of the marking includes the central coordinates of the breast lesions and the diameter of the breast lesions. Specifically, multiple doctors can mark the breast lesions, and determine the final breast lesions and the parameters of the breast lesions through a multiple-vote synthesis method, and the results are saved in the form of a mask.
  • Manual labeling and enhancement can occur in either order: the breast lesions in the training samples can be marked first and the labeled samples then enhanced, or the training samples can be enhanced first and the enhanced samples then manually marked.
  • Step 403: Input the training samples into the convolutional neural network for training to determine the breast lesion detection model.
  • The structure of the convolutional neural network includes an input layer, down-sampling convolution blocks, up-sampling convolution blocks, a target detection network and an output layer. The training samples are preprocessed and input into the convolutional neural network; a loss function is computed between the output breast lesions and the mask images of the pre-labeled training samples, and the network is then optimized iteratively using the back-propagation algorithm and the SGD optimization algorithm to determine the breast lesion detection model.
  • the process of extracting feature images of different sizes of breast images using the breast lesion detection model determined by the above training includes the following steps:
  • Step 1: The breast image is passed sequentially through N/2 down-sampling convolution blocks to extract N/2 first feature images of the breast image.
  • The size of the first feature image extracted by each down-sampling convolution block is different, and N/2 is greater than 0.
  • the down-sampling convolution block includes a first convolution layer and a second convolution layer, a group connection layer, a front-back connection layer, and a down-sampling layer.
  • Step 2: The first feature image output from the N/2th down-sampling convolution block is passed sequentially through the N/2 up-sampling convolution blocks to extract N/2 second feature images of the breast image.
  • the size of the second feature image extracted by each up-sampling convolution block is different.
  • the up-sampling convolution block includes a convolution layer, a group connection layer, a front-back connection layer, an up-sampling layer, and a synthesis connection layer.
  • The convolution layer includes a convolution operation, a batch normalization layer and a ReLU layer.
  • Step 3: After combining the first and second feature images of the same size, N/2 feature images of different sizes of the breast image are determined.
  • The first and second feature images of the same size are combined through the up-sampling convolution block to determine feature images of different sizes.
  • The channels of the first feature image and the second feature image are concatenated; the feature image obtained after merging has the same spatial size as the first and second feature images.
  • the process of determining the breast lesion recognition frame from the feature image using the breast lesion detection model determined by the above training includes the following steps:
  • Step 1: For any pixel in the feature image, extend outward from that pixel to determine a first area.
  • Step 2: Set multiple preset frames in the first area according to preset rules.
  • the preset frame can be set to various shapes.
  • the preset rule may be that the center of the preset frame coincides with the center of the first area, or that the corner of the preset frame coincides with the angle of the first area, and so on.
  • Preset frames for breast lesions are selected by treating each pixel of each feature map as an anchor point and setting multiple preset frames with different aspect ratios at each anchor point.
  • A convolution over the feature map predicts a coordinate-and-size offset and a confidence, and the preset frame is determined based on this offset and confidence.
  • Step 3: For any preset frame, predict the position deviation of the preset frame from the first area.
  • Step 4: Adjust the preset frame according to the position deviation to determine the breast lesion identification frame, and predict the breast lesion probability of the breast lesion identification frame.
  • the breast lesion probability is the probability that the area selected by the breast lesion identification frame is the breast lesion.
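Steps 1–4 above can be sketched in a few lines. The function names, base size, and offset values here are hypothetical illustrations of the anchor-point scheme, not values taken from the patent:

```python
def preset_frames(cx, cy, base=32.0, ratios=(0.5, 1.0, 2.0)):
    """Frames (x, y, w, h) of equal area but different aspect ratios, centered on the anchor point."""
    frames = []
    for r in ratios:
        w = base * r ** 0.5
        h = base / r ** 0.5
        frames.append((cx - w / 2, cy - h / 2, w, h))
    return frames

def apply_offset(frame, dx, dy, dw, dh):
    """Shift and scale a preset frame by the predicted position deviation."""
    x, y, w, h = frame
    return (x + dx, y + dy, w * dw, h * dh)

# Each feature-map pixel acts as an anchor; predicted offsets turn a preset
# frame into a breast lesion identification frame.
anchors = preset_frames(100.0, 100.0)
adjusted = apply_offset(anchors[1], 2.0, -3.0, 1.5, 0.5)
```

In a full detector, the offsets and the lesion probability would both come from convolutions over the feature map, one prediction per preset frame.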
  • the specific training process may include: inputting the training data image to the above-mentioned convolutional neural network for calculation.
  • Multiple images of the lesion at different window widths and window levels are introduced.
  • The prediction frames with the highest confidence and the prediction frames with the greatest overlap with the training samples are selected.
  • The loss function combines the cross-entropy between the prediction-frame confidence and the sample label with the cross-entropy between the training sample's labeled lesion and the prediction-frame offset.
  • The training optimization algorithm is SGD with momentum and step decay of the learning rate.
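The training recipe above can be sketched as follows. The function names, hyperparameter values, and the exact decay schedule are hypothetical; only the ingredients (momentum SGD, step-decayed learning rate, cross-entropy on the frame confidence) come from the text:

```python
import math

def step_lr(base_lr, epoch, step_size=10, gamma=0.1):
    """Learning rate decayed by a factor `gamma` every `step_size` epochs (step decay)."""
    return base_lr * gamma ** (epoch // step_size)

def sgd_momentum(w, grad, velocity, lr, momentum=0.9):
    """One SGD-with-momentum update; returns (new_weight, new_velocity)."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

def confidence_cross_entropy(p, label):
    """Cross-entropy between a predicted confidence p in (0, 1) and a 0/1 label."""
    eps = 1e-12
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))
```

A framework implementation would typically use an optimizer and scheduler provided by the library rather than hand-rolled updates; the sketch only makes the arithmetic explicit.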
  • the input image is preprocessed to improve the effect of feature extraction.
  • the acquiring breast image includes:
  • Step 1: Determine the binarized image of the breast image after Gaussian filtering;
  • Step 2: Obtain the connected areas of the binarized image, and use the largest-area connected region corresponding to the breast image as the segmented breast image;
  • Step 3: Add the segmented breast image to a preset image template to generate a pre-processed breast image, and use the pre-processed breast image as the breast image input to the feature extraction module.
  • The input of the preprocessing module is a breast image saved in DICOM format.
  • Preprocessing can include gland segmentation and image normalization. The main purpose of gland segmentation is to extract the breast portion of the input breast image and remove unrelated interference; image normalization converts images into a unified format. Specifically, this includes:
  • The binarization threshold can be obtained by finding the maximum between-class variance of the image's grayscale histogram (i.e., Otsu's method).
  • Independent region blocks can be obtained from the binarized result by flood filling, and the area of each block is counted; the image area corresponding to the largest block is used as the segmented breast image.
  • The preset image template may be a square image with a black background; specifically, the segmented breast image can be expanded into a 1:1 square image by adding a black border.
  • The output breast image can be rescaled, for example by interpolation to 4096 pixels × 4096 pixels.
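The segmentation and template steps above can be sketched as follows. This is an illustrative sketch on a tiny array: the threshold is simplified to a fixed value (the patent derives it from the histogram's maximum between-class variance), and the flood fill and padding are minimal versions of the described operations:

```python
import numpy as np
from collections import deque

def largest_region(binary):
    """Mask of the largest 4-connected foreground region, found by flood filling."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    best = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                mask = np.zeros((h, w), dtype=bool)
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                              # flood fill one region block
                    y, x = q.popleft()
                    mask[y, x] = True
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if mask.sum() > best.sum():           # keep the largest-area block
                    best = mask
    return best

def pad_to_square(img):
    """Place the image on a black 1:1 square template (zero border on the short side)."""
    h, w = img.shape
    side = max(h, w)
    out = np.zeros((side, side), dtype=img.dtype)
    out[:h, :w] = img
    return out

image = np.array([[9, 9, 0, 0, 0],
                  [9, 9, 0, 0, 8],
                  [0, 0, 0, 0, 8]])
binary = image > 5                         # simplified fixed threshold, not Otsu
segmented = image * largest_region(binary)  # keep only the largest region
squared = pad_to_square(segmented)          # 3x5 image placed on a 5x5 black square
```

A production pipeline would use library routines (e.g. a connected-components labeler) instead of a hand-written flood fill; the sketch only shows the logic of steps 1–3.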
  • The window width and window level of the breast image can be adjusted to obtain a better breast lesion identification effect.
  • Before the breast image is input to the feature extraction module, the method further includes:
  • converting the breast image into the picture format corresponding to at least one set of window width and window level, and using the result as the breast image input to the feature extraction module.
  • The DICOM image can be converted into PNG images using three sets of window width and window level.
  • The first set has a window width of 4000 and a window level of 2000; the second set has a window width of 1000 and a window level of 2000;
  • the third set has a window width of 1500 and a window level of 1500.
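The window width / window level (WW/WL) conversion described above can be sketched as follows. The clip-and-rescale formula used here is the common windowing convention, stated as an assumption rather than quoted from the patent; the three WW/WL pairs are the ones listed, and the raw pixel values are hypothetical:

```python
import numpy as np

def apply_window(pixels, width, level):
    """Clip intensities to [level - width/2, level + width/2] and rescale to 0-255."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(pixels.astype(np.float64), lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

raw = np.array([[0, 2000, 4095]])          # hypothetical DICOM pixel values
channels = [apply_window(raw, 4000, 2000),  # first set:  WW 4000, WL 2000
            apply_window(raw, 1000, 2000),  # second set: WW 1000, WL 2000
            apply_window(raw, 1500, 1500)]  # third set:  WW 1500, WL 1500
```

Each WW/WL pair yields one 8-bit PNG-style image, so one DICOM image becomes three input images for the feature extraction module.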
  • an embodiment of the present invention provides a device for identifying breast lesions. As shown in FIG. 5, the device can perform the flow of a method for identifying breast lesions.
  • the device includes an acquiring unit 501 and a processing unit 502.
  • the obtaining unit 501 is used to obtain breast images
  • the processing unit 502 is configured to input the breast image into a feature extraction module to obtain feature images of different sizes of the breast image;
  • the feature extraction module includes N convolution modules;
  • The N convolution modules are down-sampling convolution blocks and/or up-sampling convolution blocks; the size of the feature image extracted by each down-sampling or up-sampling convolution block is different, and each of the N convolution modules includes a first convolution layer and a second convolution layer; the number of feature images output by the first convolution layer is less than the number of feature images input to the first convolution layer; the number of feature images output by the second convolution layer is greater than the number of feature images output by the first convolution layer; N is greater than 0. For any one of the feature images of different sizes of the breast image, a breast lesion identification frame is determined from the feature image; the breast lesion of the breast image is determined according to the breast lesion identification frame determined from each feature image.
  • processing unit 502 is specifically configured to:
  • The breast image is sequentially passed through N/2 down-sampling convolution blocks to extract N/2 first feature images of the breast image; the first feature images output by the N/2 down-sampling convolution blocks are sequentially passed through N/2 up-sampling convolution blocks to extract N/2 second feature images of the breast image, and the size of the second feature image extracted by each up-sampling convolution block is different; after merging the first feature images and second feature images of the same size, N feature images of different sizes of the breast image are determined.
  • the processing unit 502 is specifically configured to:
  • The feature preprocessing module includes a convolution layer, a BN layer, a ReLU layer, and a pooling layer; the size of the convolution kernel of the feature preprocessing module is larger than the sizes of the convolution kernels in the N convolution modules;
  • the feature preprocessing module includes multiple consecutive convolution layers, a BN layer, a ReLU layer, and a pooling layer; the size of the convolution kernel of the feature preprocessing module is equal to the size of the largest convolution kernel in the N convolution modules.
  • the obtaining unit 501 is configured to:
  • the processing unit 502 is specifically used for:
  • The breast images include breast images of different breasts at different projection positions; the acquiring unit 501 is configured to:
  • take the breast image of the other breast at the same projection position as a reference image of the breast image and input it to the feature extraction module to obtain a reference feature image; determine a first breast lesion identification frame in the feature image and a second breast lesion identification frame in the reference feature image; and, if the positions and/or sizes of the first breast lesion identification frame and the second breast lesion identification frame are determined to be the same, delete the first breast lesion identification frame.
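The bilateral comparison rule above can be sketched as a simple filter. The function name, frame representation (x, y, w, h), and tolerance parameter are hypothetical illustrations of the "delete frames that match a frame in the other breast's image" rule, not the patented implementation:

```python
def filter_against_reference(frames, reference_frames, tol=0.0):
    """Keep only frames with no same-position, same-size frame in the reference image."""
    def same(a, b):
        return all(abs(u - v) <= tol for u, v in zip(a, b))
    return [f for f in frames if not any(same(f, r) for r in reference_frames)]

first_frames = [(10, 10, 30, 30), (80, 40, 20, 25)]
reference_frames = [(10, 10, 30, 30)]   # matching frame in the other breast's image
kept = filter_against_reference(first_frames, reference_frames)
```

The intuition is that a finding appearing symmetrically in both breasts at the same projection position is likely a normal structure, so the matching first-image frame is discarded.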
  • An embodiment of the present invention provides a computer device including at least one processing unit and at least one storage unit, where the storage unit stores a computer program; when the program is executed by the processing unit, the processing unit performs the steps of the method for identifying breast lesions.
  • FIG. 6 is a schematic diagram of the hardware structure of the computer device described in the embodiment of the present invention.
  • the computer device may specifically be a desktop computer, a portable computer, a smart phone, a tablet computer, or the like.
  • the computer device may include a memory 801, a processor 802, and a computer program stored on the memory.
  • the memory 801 may include a read-only memory (ROM) and a random access memory (RAM), and provide the processor 802 with program instructions and data stored in the memory 801.
  • the computer equipment described in the embodiments of the present application may further include an input device 803 and an output device 804.
  • the input device 803 may include a keyboard, a mouse, a touch screen, etc.
  • the output device 804 may include a display device, such as a liquid crystal display (Liquid Crystal Display, LCD), a cathode ray tube (Cathode Ray Tube, CRT), a touch screen, and the like.
  • the memory 801, the processor 802, the input device 803, and the output device 804 may be connected through a bus or in other ways. In FIG. 6, connection through a bus is used as an example.
  • the processor 802 calls the program instructions stored in the memory 801 and executes the method for identifying breast lesions provided in the foregoing embodiments according to the obtained program instructions.
  • An embodiment of the present invention also provides a computer-readable storage medium storing a computer program executable by a computer device; when the program runs on the computer device, it causes the computer device to execute the steps of the method for identifying breast lesions.
  • an embodiment of the present invention also provides a computer program product, the computer program product includes a computer program stored on a computer-readable storage medium, the computer program includes program instructions, when the program instructions When executed by a computer device, the computer device is caused to perform the steps of the method for identifying breast imaging lesions.
  • the embodiments of the present invention may be provided as methods or computer program products. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

This invention relates to an image-based breast lesion identification method and device, in the field of machine learning technology. The method comprises the steps of: acquiring a breast image (201); inputting the breast image into a feature extraction module to obtain feature images of different sizes of the breast image (202); for any one of the feature images of different sizes of the breast image, determining a breast lesion identification frame (203) from that feature image; and determining the breast lesion of the breast image (204) according to the breast lesion identification frame determined from each feature image.
PCT/CN2019/082687 2018-10-16 2019-04-15 Image-based breast lesion identification method and device WO2020077961A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811201699.2A CN109363697B (zh) 2018-10-16 2018-10-16 一种乳腺影像病灶识别的方法及装置
CN201811201699.2 2018-10-16

Publications (1)

Publication Number Publication Date
WO2020077961A1 true WO2020077961A1 (fr) 2020-04-23

Family

ID=65400521

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082687 WO2020077961A1 (fr) 2018-10-16 2019-04-15 Procédé et dispositif d'identification de lésion mammaire à base d'image

Country Status (2)

Country Link
CN (1) CN109363697B (fr)
WO (1) WO2020077961A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109363697B (zh) * 2018-10-16 2020-10-16 杭州依图医疗技术有限公司 一种乳腺影像病灶识别的方法及装置
TWI769370B (zh) * 2019-03-08 2022-07-01 太豪生醫股份有限公司 病灶偵測裝置及其方法
CN110400302B (zh) * 2019-07-25 2021-11-09 杭州依图医疗技术有限公司 一种确定、显示乳房图像中病灶信息的方法及装置
CN110930385A (zh) * 2019-11-20 2020-03-27 北京推想科技有限公司 乳房肿块检测定位方法和装置
CN111325743A (zh) * 2020-03-05 2020-06-23 北京深睿博联科技有限责任公司 基于联合征象的乳腺x射线影像分析方法和装置
CN116258717B (zh) * 2023-05-15 2023-09-08 广州思德医疗科技有限公司 病灶识别方法、装置、设备和存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326931A (zh) * 2016-08-25 2017-01-11 南京信息工程大学 基于深度学习的乳腺钼靶图像自动分类方法
CN106682435A (zh) * 2016-12-31 2017-05-17 西安百利信息科技有限公司 一种多模型融合自动检测医学图像中病变的系统及方法
CN107045720A (zh) * 2017-05-04 2017-08-15 深圳硅基智能科技有限公司 用于识别眼底图像病变的人工神经网络及系统
US20170249739A1 (en) * 2016-02-26 2017-08-31 Biomediq A/S Computer analysis of mammograms
CN109363698A (zh) * 2018-10-16 2019-02-22 杭州依图医疗技术有限公司 一种乳腺影像征象识别的方法及装置
CN109363699A (zh) * 2018-10-16 2019-02-22 杭州依图医疗技术有限公司 一种乳腺影像病灶识别的方法及装置
CN109363697A (zh) * 2018-10-16 2019-02-22 杭州依图医疗技术有限公司 一种乳腺影像病灶识别的方法及装置
CN109447065A (zh) * 2018-10-16 2019-03-08 杭州依图医疗技术有限公司 一种乳腺影像识别的方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8379961B2 (en) * 2008-07-03 2013-02-19 Nec Laboratories America, Inc. Mitotic figure detector and counter system and method for detecting and counting mitotic figures
US9430829B2 (en) * 2014-01-30 2016-08-30 Case Western Reserve University Automatic detection of mitosis using handcrafted and convolutional neural network features
US10424069B2 (en) * 2017-04-07 2019-09-24 Nvidia Corporation System and method for optical flow estimation
CN107133933B (zh) * 2017-05-10 2020-04-28 广州海兆印丰信息科技有限公司 基于卷积神经网络的乳腺x线图像增强方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170249739A1 (en) * 2016-02-26 2017-08-31 Biomediq A/S Computer analysis of mammograms
CN106326931A (zh) * 2016-08-25 2017-01-11 南京信息工程大学 基于深度学习的乳腺钼靶图像自动分类方法
CN106682435A (zh) * 2016-12-31 2017-05-17 西安百利信息科技有限公司 一种多模型融合自动检测医学图像中病变的系统及方法
CN107045720A (zh) * 2017-05-04 2017-08-15 深圳硅基智能科技有限公司 用于识别眼底图像病变的人工神经网络及系统
CN109363698A (zh) * 2018-10-16 2019-02-22 杭州依图医疗技术有限公司 一种乳腺影像征象识别的方法及装置
CN109363699A (zh) * 2018-10-16 2019-02-22 杭州依图医疗技术有限公司 一种乳腺影像病灶识别的方法及装置
CN109363697A (zh) * 2018-10-16 2019-02-22 杭州依图医疗技术有限公司 一种乳腺影像病灶识别的方法及装置
CN109447065A (zh) * 2018-10-16 2019-03-08 杭州依图医疗技术有限公司 一种乳腺影像识别的方法及装置

Also Published As

Publication number Publication date
CN109363697B (zh) 2020-10-16
CN109363697A (zh) 2019-02-22

Similar Documents

Publication Publication Date Title
WO2020077962A1 (fr) Procédé et dispositif de reconnaissance d'image de sein
CN109363699B (zh) 一种乳腺影像病灶识别的方法及装置
CN109363698B (zh) 一种乳腺影像征象识别的方法及装置
WO2020077961A1 (fr) Procédé et dispositif d'identification de lésion mammaire à base d'image
US10991093B2 (en) Systems, methods and media for automatically generating a bone age assessment from a radiograph
Li et al. DeepSEED: 3D squeeze-and-excitation encoder-decoder convolutional neural networks for pulmonary nodule detection
US10482633B2 (en) Systems and methods for automated detection of an indication of malignancy in a mammographic image
US9480439B2 (en) Segmentation and fracture detection in CT images
CN110046627B (zh) 一种乳腺影像识别的方法及装置
Banerjee et al. Automated 3D segmentation of brain tumor using visual saliency
US20170109880A1 (en) System and method for blood vessel analysis and quantification in highly multiplexed fluorescence imaging
CN111325739A (zh) 肺部病灶检测的方法及装置,和图像检测模型的训练方法
Li et al. Texton analysis for mass classification in mammograms
JP2007307358A (ja) 画像処理方法および装置ならびにプログラム
CN109461144B (zh) 一种乳腺影像识别的方法及装置
CN110689525A (zh) 基于神经网络识别淋巴结的方法及装置
US11684333B2 (en) Medical image analyzing system and method thereof
CN112053325A (zh) 一种乳腺肿块图像处理和分类系统
WO2020168647A1 (fr) Procédé de reconnaissance d'image et dispositif associé
Montaha et al. A shallow deep learning approach to classify skin cancer using down-scaling method to minimize time and space complexity
US20230334660A1 (en) Digital tissue segmentation and viewing
TWI587844B (zh) 醫療影像處理裝置及其乳房影像處理方法
US20210398282A1 (en) Digital tissue segmentation using image entropy
CN115423806B (zh) 一种基于多尺度跨路径特征融合的乳腺肿块检测方法
CN116229236A (zh) 一种基于改进YOLO v5模型的结核杆菌检测方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19874375

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19874375

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19874375

Country of ref document: EP

Kind code of ref document: A1