CN111127400A - Method and device for detecting breast lesions


Info

Publication number
CN111127400A
Authority
CN
China
Prior art keywords
image
breast
current
determining
network
Prior art date
Legal status
Pending
Application number
CN201911201347.1A
Other languages
Chinese (zh)
Inventor
鄢照龙
孙瑞超
王永贞
李勇
陈晶
Current Assignee
Shenzhen Lanyun Medical Image Co ltd
Original Assignee
Shenzhen Lanyun Medical Image Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Lanyun Medical Image Co ltd
Priority to CN201911201347.1A
Publication of CN111127400A

Classifications

    • G06T7/0012 — Physics; Computing; Image data processing or generation; Image analysis; Inspection of images, e.g. flaw detection; Biomedical image inspection
    • G06T2207/20081 — Indexing scheme for image analysis or image enhancement; Special algorithmic details; Training; Learning
    • G06T2207/30068 — Indexing scheme for image analysis or image enhancement; Subject of image; Biomedical image processing; Mammography; Breast

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a breast lesion detection method and a breast lesion detection device. The method comprises: establishing, using the self-learning capability of an artificial neural network, a correspondence between the image features of breast images and breast lesion areas; acquiring the current image features of a current breast image; and determining, according to the correspondence, the current breast lesion area corresponding to the current image features. Determining the current breast lesion area comprises: taking, as the current breast lesion area, the breast lesion area that corresponds in the correspondence to the image features that are the same as the current image features. The method improves the accuracy of identifying breast lesions such as breast masses and improves the user experience.

Description

Method and device for detecting breast lesions
Technical Field
The invention relates to the technical field of image processing, and in particular to a breast lesion detection method and a breast lesion detection device.
Background
Breast cancer is a common malignant tumor, and early diagnosis and treatment are key to reducing breast cancer mortality. Lesion areas in breast images take the form of masses, calcifications, bilateral asymmetry, architectural distortion and the like; masses and clustered calcifications are the most common imaging signs of breast cancer, so automatic detection of masses and calcifications constitutes the two main tasks of computer-aided diagnosis systems. Masses in particular have long been a difficulty for computer-aided diagnosis because of their fuzzy edges, varied shapes and low contrast with the surrounding tissue.
Artificial neural networks, and deep learning in particular, process input information layer by layer, converting an input representation that is only loosely related to the output target into one that is closely related to it, and thereby accomplishing tasks that could not be completed by the final output mapping alone. That is, the initial "low-level" feature representation is gradually converted into a "high-level" feature representation through multi-level processing, so deep learning can be regarded as "feature learning" or "representation learning". The convolutional neural network (CNN) is currently regarded as one of the best-performing deep learning structures: because it fully automates feature engineering, it replaces the multi-stage pipeline of traditional machine learning with a simple end-to-end process that needs no manually designed features, and it therefore performs well on many problems.
In the prior art, for example, "A method for acquiring, detecting and analyzing breast mass images" (applicant: the Cancer Center of Sun Yat-sen University) first collects breast mass images manually, extracts regions of interest with the K-means clustering algorithm, extracts mass features, and obtains mass/non-mass thresholds by threshold analysis. Any input breast image is then binarized with a global gray-level threshold and clustered by K-means; the cluster with the highest gray level is extracted as the region of interest, its features are computed, and false-positive regions are removed according to the mass feature thresholds, thereby determining the real masses. The method has the following defect: the threshold separating masses from non-masses derives only from manually compiled gray-level statistics, its value depends closely on the amount of statistical data and the selected features, and gray-level features based on such statistics do not represent the mass/non-mass distinction well. The method therefore has limited accuracy in identifying breast masses, so the detection results are poor and the user experience suffers.
Disclosure of Invention
The embodiment of the invention provides a breast lesion detection method that achieves good accuracy in identifying breast lesions such as breast masses and improves the user experience.
Correspondingly, the embodiment of the invention also provides a breast lesion detection device, which is used for ensuring the realization and the application of the method.
To solve the above problems, the invention discloses a breast lesion detection method, which specifically comprises: establishing, using the self-learning capability of an artificial neural network, a corresponding relation between the image features of breast images and breast lesion areas; acquiring the current image features of a current breast image; and determining, according to the corresponding relation, the current breast lesion area corresponding to the current image features;
the step of determining the current breast lesion area corresponding to the current image features includes: taking, as the current breast lesion area, the breast lesion area corresponding, in the corresponding relation, to the image features that are the same as the current image features.
Optionally, the breast image comprises: a raw breast image and/or a preprocessed image, the preprocessed image being a breast image generated by preprocessing the raw breast image; and/or,
the image features include: at least one of color features, texture features, shape features and spatial relationship features; and/or,
the corresponding relation comprises: a functional relationship; the image characteristics are input parameters of the functional relationship, and the breast lesion area is output parameters of the functional relationship; the step of determining the current breast lesion region corresponding to the current image feature further includes: and when the corresponding relation comprises a functional relation, inputting the current image characteristics into the functional relation, and determining the output parameter of the functional relation as the current breast lesion area.
Optionally, the step of preprocessing the original breast image includes: acquiring the original breast image; determining the effective mammary gland area range of the original mammary gland image; and generating a preprocessing image according to the effective mammary gland area range.
Optionally, the step of establishing a correspondence between image features of the breast image and the breast lesion region includes: acquiring sample data for establishing a corresponding relation between the image characteristics and the breast lesion area; analyzing the characteristics and the rules of the image characteristics, and determining the network structure and the network parameters of the artificial neural network according to the characteristics and the rules; training and testing the network structure and the network parameters by using the sample data, and determining the corresponding relation between the image characteristics and the breast lesion area.
Optionally, the step of acquiring sample data for establishing a correspondence between the image features and the breast lesion region includes: acquiring image characteristics and breast lesion areas of a plurality of different breast images; analyzing the image characteristics, and selecting data related to the breast lesion area as the image characteristics by combining prestored lesion position information labeled by a doctor; and taking the data pair formed by the breast lesion area and the selected image characteristics as sample data.
Optionally, the network structure includes: at least one of a BP neural network, a CNN neural network, an RNN neural network and a residual neural network; wherein the CNN neural network comprises: at least one of a Faster R-CNN neural network and a VGG neural network; and/or,
the network parameters comprise: at least one of the number of input nodes, the number of output nodes, the number of hidden layers, the number of hidden nodes, an initial weight and a bias value.
Optionally, the step of training the network structure and the network parameters includes: selecting a part of the sample data as training samples, inputting the image features in the training samples into the network structure, and training through an activation function of the network structure and the network parameters to obtain an actual training result; determining whether the actual training error between the actual training result and the corresponding breast lesion area in the training samples satisfies a set training error; and determining that the training of the network structure and the network parameters is completed when the actual training error satisfies the set training error; and/or,
the step of testing the network structure and the network parameters includes: selecting another part of data in the sample data as a test sample, inputting the image characteristics in the test sample into the trained network structure, and testing by using the activation function and the trained network parameters to obtain an actual test result; determining whether an actual test error between the actual test result and a corresponding breast lesion area in the test sample satisfies a set test error; and when the actual test error meets the set test error, determining that the test on the network structure and the network parameters is finished.
The embodiment of the invention also provides a breast lesion detection device, which specifically comprises: a relationship establishing module for establishing, using the self-learning capability of an artificial neural network, the corresponding relation between the image features of breast images and breast lesion areas; an acquisition module for acquiring the current image features of a current breast image; and a determining module for determining, according to the corresponding relation, the current breast lesion area corresponding to the current image features. The determining module includes: a lesion determining submodule for taking, as the current breast lesion area, the breast lesion area corresponding, in the corresponding relation, to the image features that are the same as the current image features.
The embodiment of the invention also discloses a terminal comprising: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the terminal to perform the breast lesion detection method according to one or more embodiments of the invention.
The embodiment of the invention also discloses a computer readable storage medium, which stores a computer program for enabling a processor to execute the breast lesion detection method according to the embodiment of the invention.
Compared with the prior art, the embodiment of the invention has the following advantages:
in the embodiment of the invention, the self-learning capability of an artificial neural network can be used to establish the corresponding relation between the image features of breast images and breast lesion areas; the current image features of a current breast image are then acquired, and the current breast lesion area corresponding to those features is determined according to the corresponding relation, where determining the current breast lesion area comprises taking, as the current breast lesion area, the breast lesion area corresponding, in the corresponding relation, to the image features that are the same as the current image features. Because the method can combine multiple image features of the breast image and detect the lesion area through the established corresponding relation between image features and lesion areas, it improves the accuracy of identifying breast lesions and improves the user experience.
Drawings
FIG. 1 is a flowchart illustrating the steps of one embodiment of a method for detecting breast lesions in accordance with the present invention;
FIG. 2 is a flow chart illustrating the steps of one embodiment of pre-processing the raw breast image in the method of the present invention;
FIG. 3 is a flowchart illustrating steps of one embodiment of the method of the present invention for establishing correspondence between image characteristics of a breast image and a breast lesion region;
FIG. 4 is a flowchart illustrating the steps of obtaining sample data for establishing correspondence between the image features and the breast lesion region according to an embodiment of the present invention;
FIG. 5 is a flowchart of the steps in one embodiment of the method of the present invention for training the network structure and the network parameters;
FIG. 6 is a flow chart of steps in one embodiment of a method of the present invention for testing the network configuration and the network parameters;
FIG. 7(a) is an image of a portion of a training sample of a breast lesion detection method of the present invention;
FIG. 7(b) is a breast image of a framed tumor of a breast lesion detection method of the present invention;
FIG. 7(c) is an information diagram of an xml format file of a breast lesion detection method according to the present invention;
fig. 8(a) is an image of a detection result of a breast lesion detection method of the present invention;
fig. 8(b) is an image of another detection result of a breast lesion detection method of the present invention;
FIG. 9 is a block diagram of a breast lesion detection apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a computer device for implementing the breast lesion detection apparatus according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a breast lesion detection method of the present invention is shown, which may specifically include steps 101-103:
step 101, establishing a corresponding relation between image characteristics of a mammary gland image and a mammary gland lesion area by utilizing the self-learning capability of the artificial neural network.
In the embodiment of the invention, the self-learning function of the artificial neural network can be used to learn, by training on the collected data, the correspondence function between the image features of breast images and breast lesion areas. For example, an artificial neural network algorithm can analyze how breast images change as lesions develop, and the self-learning and self-adaptive characteristics of the network can uncover the mapping between lesion-area conditions and image features in breast images.
For example, image data of different breast images can be collected and summarized (the breast images may include, but are not limited to, one or more of breast images of different patients, breast images of the same patient at different times, and the like), image-feature parameters and breast-lesion-area parameters of a number of breast images can be selected as sample data, and the artificial neural network can be trained on these data so that, by adjusting the network structure and the weights between network nodes during training, it fits the corresponding relation between the image features of breast images and their lesion areas.
In an alternative example, the breast image may include: raw breast images and/or pre-processed images.
The preprocessing image is a breast image generated after the original breast image is preprocessed.
The raw breast image is a breast image obtained directly by a detection means such as an imaging instrument; the preprocessed image is a breast image generated by optimizing the directly acquired breast image with image-processing techniques so as to meet the needs of subsequent breast lesion detection.
In an optional example, referring to fig. 2, the step of preprocessing the raw breast image may specifically include steps 201 to 203:
step 201, acquiring the original breast image.
In the embodiment of the present invention, the way of obtaining the original breast image may include obtaining the original breast image from a patient information database system pre-stored in a hospital, or obtaining the original breast image from the internet, or obtaining the original breast image through other ways, which is not limited in this embodiment of the present invention. The original breast image may be an original breast image with one or more lesion forms such as a lump, a calcific spot, bilateral asymmetry, and a structural distortion.
Step 202, determining the effective breast area range of the original breast image.
And step 203, generating a preprocessing image according to the effective mammary gland area range.
After the original breast image is obtained, image processing can be performed on the original breast image by adopting a plurality of image processing means, so that an image of an effective breast area range in the original breast image is determined, and an image (namely a preprocessed image) in the effective breast area range in the original breast image is obtained.
For example, the raw breast image may first be downsampled to obtain a downsampled breast image, which is then further processed, e.g. with the maximum inter-class variance (Otsu) method, to determine the breast contour; the maximum extent of the effective breast area is determined from this contour and taken as the effective breast area range, and the image within that range in the raw breast image is extracted as the preprocessed image.
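A minimal sketch of this preprocessing step, assuming OpenCV is used; the function name, downsampling scale and bounding-box crop are illustrative choices, not the patent's actual implementation:

```python
import cv2

def preprocess_breast_image(path, scale=0.25):
    """Crop a mammogram to its effective breast area (illustrative sketch)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Downsample first so contour extraction is fast.
    small = cv2.resize(img, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    # Maximum inter-class variance (Otsu) separates breast from background.
    _, mask = cv2.threshold(small, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Take the largest contour as the breast and use its bounding box
    # as the maximum range of the effective breast area.
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    # Map the box back to full resolution and crop the original image.
    x, y, w, h = (int(v / scale) for v in (x, y, w, h))
    return img[y:y + h, x:x + w]
```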
In an alternative example, the image feature may include: at least one of color features, texture features, shape features, spatial relationship features.
For example, the image features of a breast image include, but are not limited to, one or more of: color features, texture features (such as the image's gray-level co-occurrence matrix), shape features (such as contour features and region features), and spatial relationship features (such as the spatial positions and relative directions of multiple targets). The image features may be a single feature, or a one- or multi-dimensional input array formed by extracting several features according to a set rule.
Therefore, the accuracy and reliability of determining the corresponding relation between the image characteristics and the breast lesion area can be improved through the image characteristics of the breast images in various forms.
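As an illustration of the texture-feature case above, the sketch below computes gray-level co-occurrence matrix (GLCM) descriptors with scikit-image; the offsets, angles and chosen properties are assumptions for demonstration, not values given by the patent:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(region):
    """Texture features of a (uint8) breast-image region via its GLCM."""
    glcm = graycomatrix(region, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the two offsets/angles.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```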
In an optional example, the correspondence relationship may include: and (4) functional relation.
Optionally, the image feature is an input parameter of the functional relationship, and the breast lesion region is an output parameter of the functional relationship.
Therefore, the flexibility and convenience of determining the current breast lesion area can be improved through the corresponding relation in various forms.
In an optional example, referring to fig. 3, the step of establishing the corresponding relation between the image features of the breast image and the breast lesion area may specifically include steps 301 to 303:
Step 301, acquiring sample data for establishing the corresponding relation between the image features and the breast lesion area.
In an optional example, referring to fig. 4, the step of acquiring sample data for establishing the corresponding relation between the image features and the breast lesion area may specifically include steps 401 to 403:
step 401, acquiring image characteristics of a plurality of different breast images and a breast lesion area.
In the embodiment of the present invention, the plurality of different breast images may be a plurality of different original breast images, a plurality of different preprocessed images, or a combination of the two; the way of acquiring the mammary gland image can be acquired from a patient information database system prestored in a hospital or from the internet, and the mammary gland image can be acquired according to actual requirements.
Acquiring breast images through multiple channels thus increases the amount of image-feature data, which improves the learning capability of the artificial neural network and hence the accuracy and reliability of the determined corresponding relation.
The method can acquire image characteristic parameters (such as color characteristics, texture characteristics, shape characteristics, spatial relationship characteristics and the like) and breast lesion region parameters (such as lesion degree, spatial position of lesion, size of lesion region and the like) of a plurality of different breast images, and determine sample data according to the acquired image characteristic parameters and the breast lesion region parameters.
Step 402, analyzing the image characteristics, and selecting data related to the breast lesion area as the image characteristics by combining with prestored lesion position information labeled by a doctor.
Step 403, using the data pair formed by the breast lesion area and the selected image features as sample data.
In the embodiment of the invention, after a breast image is collected, a doctor can judge from diagnostic experience whether a lesion is present in it. If so, the doctor further determines the lesion information (such as lesion type, degree, position, size and form) and stores it in advance, so that the lesion position information of the breast image can be looked up from the pre-stored lesion information. The image-feature parameters of the breast image are analyzed together with this position information, the image-feature parameters that relate to and influence the breast lesion area are selected as input parameters, and the breast lesion area is taken as the output parameter, yielding input/output parameter pairs (i.e. sample data).
For example, a sub-image may be extracted around each mass at a set ratio according to the doctor-marked mass position. A data-set image such as the raw breast image, the preprocessed image or the doctor-marked sub-image is selected and may be rotated (e.g. by 90, 180 or 270 degrees); the rotated data set serves as the input parameters and the breast lesion area as the output parameters, each input/output pair being one sample. The sample data may include training samples and test samples; fig. 7(a) shows an image from part of a training sample of the breast lesion detection method of the invention.
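A sketch of the rotation augmentation just described, using plain NumPy; the (x1, y1, x2, y2) box convention and the helper name are assumptions:

```python
import numpy as np

def augment_with_rotations(image, boxes):
    """Return the image plus its 90/180/270-degree rotations, with the
    doctor-marked mass boxes (x1, y1, x2, y2) rotated to match."""
    height, width = image.shape[:2]
    samples = [(image, boxes)]
    for k in (1, 2, 3):  # k * 90 degrees counter-clockwise
        rotated = np.rot90(image, k)
        new_boxes = []
        for x1, y1, x2, y2 in boxes:
            if k == 1:    # 90 deg: (x, y) -> (y, width - x)
                new_boxes.append((y1, width - x2, y2, width - x1))
            elif k == 2:  # 180 deg: (x, y) -> (width - x, height - y)
                new_boxes.append((width - x2, height - y2,
                                  width - x1, height - y1))
            else:         # 270 deg: (x, y) -> (height - y, x)
                new_boxes.append((height - y2, x1, height - y1, x2))
        samples.append((rotated, new_boxes))
    return samples
```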
In the embodiment of the present invention, after the data set is determined, its format may be made consistent with the VOC data-set format, which is the data-labeling format required for training and testing algorithms such as the Faster R-CNN convolutional neural network; it consists of an image folder (JPEGImages) and a label folder (Annotations). For example, the masses may be framed with rectangular boxes using the packaged OpenCV (Open Source Computer Vision Library) dynamic library; the breast image with the framed mass (fig. 7(b)) is stored in the JPEGImages folder, and a corresponding xml (Extensible Markup Language) format file (fig. 7(c)) is generated and stored in the Annotations folder.
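A sketch of writing one such Annotations entry with the Python standard library; the tag layout follows the standard Pascal VOC devkit, and the "mass" class name is an assumption:

```python
import xml.etree.ElementTree as ET

def write_voc_annotation(filename, width, height, boxes, out_path):
    """Write a Pascal-VOC-style xml label file for one mammogram."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "1"  # single-channel mammogram
    for xmin, ymin, xmax, ymax in boxes:     # doctor-framed mass boxes
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = "mass"
        bndbox = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                            (xmin, ymin, xmax, ymax)):
            ET.SubElement(bndbox, tag).text = str(int(val))
    ET.ElementTree(root).write(out_path)
```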
Step 302, analyzing the characteristics and the rules of the image characteristics, and determining the network structure and the network parameters of the artificial neural network according to the characteristics and the rules.
In the embodiment of the invention, the basic structure of the network, the input and output node numbers of the network, the number of hidden nodes, the initial weight of the network and the like can be preliminarily determined according to the data characteristics and the embedded rules of the data characteristics, such as different color characteristics, texture characteristics, shape characteristics, spatial relationship characteristics and the like, which influence the image characteristics of the mammary gland image.
In an alternative example, the network structure may include: at least one of a Back Propagation (BP) neural network, a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN) and a residual neural network.
Optionally, the CNN neural network may include: at least one of a Faster R-CNN neural network, a VGG (visual geometry Group) neural network.
In an optional example, the network parameter may include: at least one of the number of input nodes, the number of output nodes, the number of hidden layers, the number of hidden nodes, an initial weight and a bias value.
The artificial neural network used in the embodiment of the present invention is not limited to a certain network structure, and may be a combination of a plurality of network structures, and the specific scheme may be selected according to an actual application scenario.
For example, the network structure of the breast lesion detection method can be built on the Faster R-CNN framework, which may include a feature extraction layer, an RPN (Region Proposal Network), an ROI pooling (Region of Interest pooling) layer, and a classification and identification layer. The feature extraction layer extracts the feature map of the input breast image as the input of the classification and identification layer; the RPN extracts candidate boxes with a sliding window over the last feature map output by the feature extraction layer; the ROI pooling layer combines the feature map and the candidate boxes into candidate-box feature maps; and the classification and identification layer classifies the candidate-box feature maps and regresses the candidate boxes.
The feature extraction layer can use the VGG-16 model as its base network; the VGG-16 model here comprises 13 convolutional layers, 13 activation layers and 4 pooling layers. All convolutional layers may use 3 × 3 kernels with stride 1 and padding 1; all pooling layers may use 2 × 2 kernels with stride 2 and no padding. The feature map output by the feature extraction layer is then 1/16 the size of the input breast image.
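A sketch of such a feature-extraction layer built from a recent torchvision's pretrained VGG-16; the torchvision usage and input size are assumptions, and dropping the final pooling layer yields the 13-conv / 4-pool structure with total stride 16 described above:

```python
import torch
import torchvision

# VGG-16 conv layers: all 3x3 kernels, stride 1, padding 1;
# pooling layers: 2x2, stride 2. Dropping the last pool keeps 4 pools,
# so the feature map is 1/16 the input size.
vgg16 = torchvision.models.vgg16(weights="IMAGENET1K_V1")
backbone = torch.nn.Sequential(*list(vgg16.features.children())[:-1])

x = torch.randn(1, 3, 800, 800)  # a mammogram replicated to 3 channels
feat = backbone(x)               # shape (1, 512, 50, 50), i.e. 800 / 16
```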
The RPN obtains candidate boxes by placing a number of anchors of different sizes and aspect ratios at each pixel of the feature map generated by the base network and combining them with a first bounding-box regression. Because the sizes and shapes of the doctor-marked masses in the data set have been counted, very large anchors are unnecessary in practice; for example, the number of anchors may be set to 6, i.e. 6 anchors of different sizes and aspect ratios for each pixel of the feature map, which improves detection speed while preserving accuracy. The RPN may comprise one convolutional layer with 256 kernels (kernel size 3 × 3, stride 1) and two further convolutional layers with 12 and 24 kernels respectively (kernel size 1 × 1, stride 1). The 12-kernel layer outputs the foreground/background score of each candidate box, and the 24-kernel layer outputs the center coordinates and the width and height of each candidate box (i.e. X, Y, W and H) for the subsequent bounding-box regression.
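A sketch of an RPN head with exactly the layer sizes quoted above (6 anchors per location, so 6 × 2 = 12 objectness outputs and 6 × 4 = 24 regression outputs); this is an illustrative PyTorch module, not the patent's code:

```python
import torch.nn as nn

class RPNHead(nn.Module):
    def __init__(self, in_channels=512, num_anchors=6):
        super().__init__()
        # Shared 3x3 conv with 256 kernels, stride 1.
        self.shared = nn.Conv2d(in_channels, 256, 3, stride=1, padding=1)
        # 1x1 convs: 12 kernels for foreground/background scores,
        # 24 kernels for the box center/size offsets (X, Y, W, H).
        self.cls = nn.Conv2d(256, num_anchors * 2, 1, stride=1)
        self.reg = nn.Conv2d(256, num_anchors * 4, 1, stride=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, feature_map):
        h = self.relu(self.shared(feature_map))
        return self.cls(h), self.reg(h)
```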
The ROI pooling layer collects the input feature map and candidate boxes, integrates them, and sends the result to the subsequent classification and identification layer to judge the target category (such as mass or non-mass).
The classification and identification layer identifies the candidate-box feature maps and performs a more accurate bounding-box regression using each candidate box's center coordinates and width and height (i.e. X, Y, W and H), finally achieving target detection (such as mass detection) on the input breast image.
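For illustration, ROI pooling over the stride-16 feature map can be sketched with torchvision's RoIPool; the 7 × 7 output size is an assumption, as the patent does not specify it:

```python
import torch
from torchvision.ops import RoIPool

# spatial_scale = 1/16 maps image-coordinate boxes onto the feature map.
roi_pool = RoIPool(output_size=(7, 7), spatial_scale=1.0 / 16)

feat = torch.randn(1, 512, 50, 50)  # backbone feature map
# Each ROI row is (batch_index, x1, y1, x2, y2) in image coordinates.
rois = torch.tensor([[0.0, 64.0, 96.0, 256.0, 320.0]])
pooled = roi_pool(feat, rois)       # shape (1, 512, 7, 7)
```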
In the embodiment of the invention, the automatic tumor detection can be carried out by using the Faster R-CNN framework, and the parameters of the basic network model are shared, so that the model parameters needing to be trained are reduced, the degree of network overfitting is effectively relieved, and the detection precision is improved.
In the prior art, for example, "A method for automatically classifying breast molybdenum-target images based on deep learning" (applicant: Nanjing University of Information Science and Technology) trains an 8-layer convolutional neural network on training samples of various sizes and their labels, extracts features from the network's fully connected layer, and feeds them into an SVM (Support Vector Machine) classifier to obtain the predicted category of an input image block. Because that method must extract sample features with the convolutional network and then classify them with the SVM to obtain the final prediction, the CNN serves only as a feature extractor and still has to be combined with a separate classifier; feature extraction and classification remain independent stages, so system integration is low. By contrast, the embodiment of the invention can use the Faster R-CNN framework to integrate, in one convolutional neural network, both the candidate-box generation and selection that realize detection and the fully connected layers that realize identification, replacing the previous stage-by-stage process, simplifying the whole target detection and identification system, and raising the level of system integration.
Furthermore, the prior art generally extracts image features with conventional hand-designed descriptors, such as those derived from the HoG (Histogram of Oriented Gradients), SIFT (Scale-Invariant Feature Transform) and LBP (Local Binary Patterns) features designed for natural images; these features have low robustness and poor generalization and are severely limited when applied to medical images, and hand-designing features requires expert knowledge and is time- and labor-consuming. To overcome these defects, the embodiment of the invention fuses VGG-16 with the Faster R-CNN framework to realize an end-to-end automatic digital breast-image mass detection system. The Faster R-CNN framework accepts breast images of various sizes without normalizing the input image size, so more breast-image information is retained and detection accuracy is preserved. In addition, Faster R-CNN evolved from the R-CNN and Fast R-CNN networks; those two networks generate candidate boxes with a time-consuming selective-search strategy and therefore suffer a serious speed bottleneck, whereas Faster R-CNN achieves genuinely real-time detection, maintaining accuracy while increasing processing speed.
Step 303, training and testing the network structure and the network parameters by using the sample data, and determining the corresponding relation between the image characteristics and the breast lesion area.
In the embodiment of the invention, the training result and the test result can be obtained by training and testing the sample data. Selecting sample data of which the training result and the test result both meet set requirements, and determining the corresponding relation between the image characteristics and the breast lesion area according to the selected sample data.
In an optional example, referring to fig. 5, the step of training the network structure and the network parameters may specifically include steps 501 to 503:
step 501, selecting a part of data in the sample data as a training sample, inputting the image features in the training sample into the network structure, and performing training through an activation function of the network structure and the network parameters to obtain an actual training result.
Step 502, determining whether an actual training error between the actual training result and a corresponding breast lesion area in the training sample meets a set training error.
Step 503, when the actual training error meets the set training error, determining that the training of the network structure and the network parameters is completed.
In the embodiment of the invention, the input data (i.e. the training samples) can be imported and the actual output of the network computed from the activation function and the initialized weights and biases. The corresponding breast lesion area in the sample data serves as the expected training result. Whether the expected and actual outputs of the network meet the required output accuracy is then judged; if they do, training is complete. The set training error may be chosen according to the actual situation, which the embodiment of the invention does not limit.
Therefore, the training sample is used for training the selected network structure and network parameters, so that more reliable network structures and network parameters can be obtained, and the accuracy and reliability of determining the corresponding relation between the image characteristics and the breast lesion area are improved.
For example, the network model may be trained with the idea of transfer learning. Specifically, the randomly initialized parameters of a VGG-16 network model can be pre-trained on the large ImageNet data set, and the pre-trained model used as the common part of the detection model, i.e. as its feature extraction layer. On this basis, the training sample data can be used to retrain the RPN layer and the classification and identification layer of the Faster R-CNN framework. For example, the RPN part may be trained first, its parameters loaded from the pre-trained model by network-layer name; a classification network can then be trained separately on the candidate regions generated by the RPN; the RPN can then be trained again with the parameters of the common part of the network model fixed, updating only the RPN-specific part; and finally, still with the common part fixed, the output of the RPN can be used to update the parameters of the classification and identification layer.
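A sketch of this transfer-learning setup, letting the torchvision VGG-16 weights stand in for the ImageNet pre-training and reusing the RPNHead sketch above; the freezing scheme and hyperparameters are illustrative, and the full alternating training loop is omitted:

```python
import torch
import torchvision

# Pretrained VGG-16 conv layers become the shared feature-extraction layer.
vgg16 = torchvision.models.vgg16(weights="IMAGENET1K_V1")
backbone = torch.nn.Sequential(*list(vgg16.features.children())[:-1])

# Fix the common part while training the RPN-specific part.
for p in backbone.parameters():
    p.requires_grad = False

rpn_head = RPNHead()  # from the sketch above
optimizer = torch.optim.SGD(rpn_head.parameters(), lr=1e-3, momentum=0.9)
```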
In an optional example, referring to fig. 6, the step of testing the network structure and the network parameters may specifically include steps 601 to 603:
step 601, selecting another part of data in the sample data as a test sample, inputting the image features in the test sample into the trained network structure, and testing with the activation function and the trained network parameters to obtain an actual test result.
Step 602, determining whether an actual test error between the actual test result and the corresponding breast lesion area in the test sample meets a set test error.
Step 603, determining that the test on the network structure and the network parameters is completed when the actual test error meets the set test error.
In the embodiment of the invention, after network training is finished, the network is tested with the test samples. If the test error meets the requirement, the network training and testing are complete. The set test error may be chosen according to the actual situation, which the embodiment of the invention does not limit.
Therefore, the reliability of the network structure and the network parameters is further verified by using the test sample for testing the network structure and the network parameters obtained by training.
For example, a portion of the preprocessed images is selected from the sample data as test samples and input into the trained Faster R-CNN framework, which yields for each test image a detection result with a confidence score, completing mass detection and identification for the input breast images (detection results are shown in figs. 8(a) and 8(b)).
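A sketch of this test step for a torchvision-style detector, which takes a list of image tensors and returns per-image boxes with confidence scores; the 0.5 confidence threshold is an assumption:

```python
import torch

@torch.no_grad()
def detect_masses(model, image_tensor, score_threshold=0.5):
    """Run the trained detector on one preprocessed mammogram and keep
    the candidate boxes whose confidence clears the threshold."""
    model.eval()
    output = model([image_tensor])[0]  # dict with boxes / labels / scores
    keep = output["scores"] >= score_threshold
    return output["boxes"][keep], output["scores"][keep]
```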
After mass detection is completed, the number of masses contained in the data set and the number of detected masses are counted, and the detection rate and average false-positive rate of the mass detection algorithm are computed. A detected suspicious mass that matches a doctor-marked mass is determined to be a true-positive mass; conversely, a detected suspicious mass that does not coincide with any doctor-marked mass is considered a false-positive mass. The detection rate and average false-positive rate are defined as follows (statistical results are shown in table 1 below):
detection rate = number of detected true-positive masses / number of masses contained in the data set;
average false-positive rate = number of detected false-positive masses / number of breast images in the data set.
TABLE 1
                              Faster R-CNN
Detection rate                0.9275 (320/345)
Average false-positive rate   1.3115 (560/427)
As can be seen from Table 1, the network obtained by fusing the VGG16 and the Faster R-CNN framework can achieve higher breast mass detection accuracy.
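For completeness, a sketch of how the two statistics could be computed; matching a detection to a doctor-marked mass by IoU ≥ 0.5 is an assumption, since the patent only requires that they "match":

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_metrics(detections_per_image, masses_per_image, iou_thr=0.5):
    """Detection rate and average false positives per image, as defined above."""
    true_pos = false_pos = total_masses = 0
    for detections, masses in zip(detections_per_image, masses_per_image):
        total_masses += len(masses)
        for det in detections:
            if any(iou(det, gt) >= iou_thr for gt in masses):
                true_pos += 1   # matches a doctor-marked mass
            else:
                false_pos += 1  # no matching mass: false positive
    detection_rate = true_pos / total_masses
    avg_false_positive = false_pos / len(detections_per_image)
    return detection_rate, avg_false_positive
```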
Step 102, acquiring the current image features of the current breast image.
Step 103, determining, according to the corresponding relation, the current breast lesion area corresponding to the current image features.
In the embodiment of the invention, the current breast lesion (such as tumor) area of the current breast image can be identified in real time through the corresponding relation.
Therefore, based on the corresponding relation, the current breast lesion (such as a mass) area of the current breast image can be effectively identified from the current image features, providing an accurate identification basis for breast masses and yielding identification results of good accuracy.
Optionally, the step of determining the current breast lesion area corresponding to the current image features includes: taking, as the current breast lesion area, the breast lesion area corresponding, in the corresponding relation, to the image features that are the same as the current image features.
Optionally, the step of determining a current breast lesion region corresponding to the current image feature further includes: and when the corresponding relation comprises a functional relation, inputting the current image characteristics into the functional relation, and determining the output parameter of the functional relation as the current breast lesion area.
Therefore, the current breast tumor area is determined according to the current image characteristics based on the corresponding relation or the functional relation, the determination mode is simple and convenient, and the reliability of the determination result is high.
In the embodiment of the invention, the self-learning capability of an artificial neural network can be used to establish the corresponding relation between the image features of breast images and breast lesion areas; the current image features of a current breast image are then acquired, and the current breast lesion area corresponding to those features is determined according to the corresponding relation, where determining the current breast lesion area comprises taking, as the current breast lesion area, the breast lesion area corresponding, in the corresponding relation, to the image features that are the same as the current image features. Because the method can combine multiple image features of the breast image and detect the lesion area through the established corresponding relation between image features and lesion areas, it improves the accuracy of identifying breast lesions and improves the user experience.
In addition, the breast-image lesion-area detection method can be built on the Faster R-CNN framework: the breast image is preprocessed to obtain a preprocessed image; a data set in the VOC format is built from the doctor-marked lesion information; then, following the idea of transfer learning, the weights of a VGG-16 model learned on the ImageNet data set are transferred as the weights of the feature extraction layer of the Faster R-CNN framework, on whose basis the framework is retrained with the training samples; and the retrained Faster R-CNN framework detects and identifies the lesion area of an input current breast image. The embodiment of the invention thus uses the Faster R-CNN framework to integrate feature extraction, detection and identification, realizing an end-to-end automatic digital breast-image lesion detection and identification process; and because the parameters of the VGG-16 base network model are shared, the number of network parameters to be trained is reduced, network overfitting is effectively relieved, and detection precision is improved.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 9, a block diagram of an embodiment of a breast lesion detection apparatus of the present invention is shown, which may specifically include modules 901 to 903:
the relationship establishing module 901 is configured to establish a corresponding relationship between image features of the breast image and a breast lesion region by using a self-learning capability of the artificial neural network.
An obtaining module 902, configured to obtain a current image feature of a current breast image.
A determining module 903, configured to determine, according to the corresponding relationship, a current breast lesion area corresponding to the current image feature.
In an alternative example, the breast image includes: raw breast images and/or pre-processed images.
The preprocessing image is a breast image generated after the original breast image is preprocessed.
In an alternative example, the image feature includes: at least one of color features, texture features, shape features, spatial relationship features.
In an optional example, the correspondence includes: and (4) functional relation.
Optionally, the image feature is an input parameter of the functional relationship, and the breast lesion region is an output parameter of the functional relationship.
Optionally, the determining module 903 includes: a lesion determining submodule for taking, as the current breast lesion area, the breast lesion area corresponding, in the corresponding relation, to the image features that are the same as the current image features.
Optionally, the determining module 903 further includes: and the output determining submodule is used for inputting the current image characteristics into the functional relation when the corresponding relation comprises the functional relation, and determining the output parameter of the functional relation as the current breast lesion area.
In an optional example, the breast lesion detection apparatus further includes a preprocessing module, which may specifically include the following sub-modules:
and the original image acquisition sub-module is used for acquiring the original mammary gland image.
And the mammary gland determining submodule is used for determining the effective mammary gland area range of the original mammary gland image.
And the image generation submodule is used for generating a preprocessing image according to the effective mammary gland area range.
In an optional example, the relationship establishing module 901 may specifically include the following sub-modules:
and the sample acquisition submodule is used for acquiring sample data for establishing the corresponding relation between the image characteristics and the breast lesion area.
And the network determining submodule is used for analyzing the characteristics and the rules of the image characteristics and determining the network structure and the network parameters of the artificial neural network according to the characteristics and the rules.
And the relation determining submodule is used for training and testing the network structure and the network parameters by using the sample data, and determining the corresponding relation between the image characteristics and the breast lesion area.
In one optional example, the network architecture comprises: at least one of a BP neural network, a CNN neural network, an RNN neural network, and a residual neural network.
Optionally, the CNN neural network includes: at least one of a Faster R-CNN neural network and a VGG neural network.
In an optional example, the network parameter includes: at least one of the number of input nodes, the number of output nodes, the number of hidden layers, the number of hidden nodes, an initial weight and a bias value.
In an optional example, the sample obtaining sub-module may specifically include the following units:
an acquisition unit for acquiring image features and breast lesion regions of a plurality of different breast images.
And the selecting unit is used for analyzing the image characteristics and selecting data related to the breast lesion area as the image characteristics by combining prestored lesion position information labeled by a doctor.
And the sample determining unit is used for taking the data pair formed by the breast lesion area and the selected image characteristics as sample data.
In one optional example, the relationship determination submodule may include a training unit and a test unit.
In an optional example, the training unit may specifically include the following sub-units:
and the training result acquisition subunit is used for selecting a part of data in the sample data as a training sample, inputting the image characteristics in the training sample into the network structure, and training through an activation function of the network structure and the network parameters to obtain an actual training result.
And the training error determining subunit is used for determining whether the actual training error between the actual training result and the corresponding breast lesion area in the training sample meets a set training error.
A training completion determining subunit, configured to determine that the training of the network structure and the network parameters is completed when the actual training error satisfies the set training error.
In an optional example, the test unit may specifically include the following sub-units:
and the test result acquisition subunit is used for selecting another part of data in the sample data as a test sample, inputting the image characteristics in the test sample into the trained network structure, and testing by using the activation function and the trained network parameters to obtain an actual test result.
And the test error determining subunit is used for determining whether the actual test error between the actual test result and the corresponding breast lesion area in the test sample meets a set test error.
A test completion determining subunit, configured to determine that the test on the network structure and the network parameter is completed when the actual test error satisfies the set test error.
In an optional example, the training unit may further include the following sub-units:
and the parameter updating subunit is used for updating the network parameters through an error energy function of the network structure when the actual training error does not meet the set training error.
A first retraining subunit, configured to retrain through the activation function of the network structure and the updated network parameter until an actual training error after retraining meets the set training error.
In an optional example, the test unit may further include the following sub-units:
and the second retraining subunit retrains the network structure and the network parameters when the actual test error does not meet the set test error until the retrained actual test error meets the set test error.
In one optional example, the apparatus further comprises at least one of:
the identification module is used for determining whether the current breast lesion area reaches a set lesion degree; and when the current breast lesion area reaches the lesion degree, performing interval identification on the current breast lesion area in the current breast image corresponding to the current image characteristic.
And the display output module is used for displaying and/or outputting at least one of the current breast image, the current image characteristic, the current breast lesion area and the interval identification.
And the maintenance module is used for performing at least one maintenance operation of updating, correcting and relearning the corresponding relation when a verification result that the current breast lesion area is not in accordance with the actual breast lesion area and/or the corresponding relation does not have the image characteristics which are the same as the current image characteristics is received.
In the embodiment of the invention, the self-learning capability of an artificial neural network can be used to establish the corresponding relation between the image features of breast images and breast lesion areas; the current image features of a current breast image are then acquired, and the current breast lesion area corresponding to those features is determined according to the corresponding relation, where determining the current breast lesion area comprises taking, as the current breast lesion area, the breast lesion area corresponding, in the corresponding relation, to the image features that are the same as the current image features. Because the apparatus can combine multiple image features of the breast image and detect the lesion area through the established corresponding relation between image features and lesion areas, it improves the accuracy of identifying breast lesions and improves the user experience.
Since the device embodiments are basically similar to the method embodiments, they are described relatively briefly; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar the embodiments may be referred to one another.
Referring to FIG. 10, a schematic structural diagram of a computer device for implementing the breast lesion detection method of the present invention is shown; the computer device may specifically include the following components.
The computer device 12 is embodied in the form of a general-purpose computing device. The components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples the various system components (including the system memory 28) to the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (commonly referred to as a "hard drive"). Although not shown in FIG. 10, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. The memory may include at least one program product having a set (e.g., at least one) of program modules 42 configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. The program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Computer device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, a camera, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., a network card, a modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 22. Also, computer device 12 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be appreciated that although not shown in FIG. 10, other hardware and/or software modules may be used in conjunction with computer device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, etc.
The processing unit 16 performs various functional applications and data processing by executing programs stored in the system memory 28, for example implementing the breast lesion detection method provided by the embodiments of the present invention.
That is, when executing the program, the processing unit 16 implements: establishing a corresponding relation between the image characteristics of breast images and breast lesion areas by using the self-learning capability of the artificial neural network; acquiring the current image characteristics of a current breast image; and determining the current breast lesion area corresponding to the current image characteristics according to the corresponding relation, wherein the step of determining the current breast lesion area corresponding to the current image characteristics includes: determining the breast lesion area corresponding to the image characteristic in the corresponding relation that is identical to the current image characteristics as the current breast lesion area.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the breast lesion detection method provided in all embodiments of the present application.
That is, when executed by the processor, the program implements: establishing a corresponding relation between the image characteristics of breast images and breast lesion areas by using the self-learning capability of the artificial neural network; acquiring the current image characteristics of a current breast image; and determining the current breast lesion area corresponding to the current image characteristics according to the corresponding relation, wherein the step of determining the current breast lesion area corresponding to the current image characteristics includes: determining the breast lesion area corresponding to the image characteristic in the corresponding relation that is identical to the current image characteristics as the current breast lesion area.
Any combination of one or more computer-readable media may be employed. The computer-readable medium may be a computer-readable storage medium or a computer-readable signal medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or terminal that comprises the element.
The breast lesion detection method and device provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principle and implementation of the invention, and the description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A method for detecting breast lesions, comprising:
establishing a corresponding relation between the image characteristics of the mammary gland image and the mammary gland lesion area by utilizing the self-learning capability of the artificial neural network;
acquiring current image characteristics of a current breast image;
determining a current breast lesion area corresponding to the current image characteristic according to the corresponding relation;
the step of determining a current breast lesion region corresponding to the current image feature includes: and determining the breast lesion area corresponding to the image feature with the same corresponding relation as the current image feature as the current breast lesion area.
2. The method of claim 1, wherein the breast image comprises: raw breast images and/or pre-processed images;
the pre-processed image is a breast image generated by pre-processing the raw breast image; and/or,
the image features include: at least one of color features, texture features, shape features, and spatial relationship features; and/or,
the corresponding relation comprises: a functional relationship;
the image characteristics are input parameters of the functional relationship, and the breast lesion area is output parameters of the functional relationship;
the step of determining the current breast lesion region corresponding to the current image feature further includes:
and when the corresponding relation comprises a functional relation, inputting the current image characteristics into the functional relation, and determining the output parameter of the functional relation as the current breast lesion area.
3. The method according to claim 2, wherein the step of preprocessing the raw breast image comprises:
acquiring the original breast image;
determining the effective mammary gland area range of the original mammary gland image;
and generating a preprocessing image according to the effective mammary gland area range.
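For illustration only (not part of the claim): one plausible reading of the steps of claim 3, assuming OpenCV, an 8-bit grayscale or BGR image, and Otsu thresholding to determine the effective mammary gland area range; the claim itself does not specify how that range is determined.

```python
import numpy as np
import cv2

def preprocess(raw_breast_image):
    """Determine the effective breast area range of the raw image and
    crop to it, yielding the pre-processed image."""
    gray = (raw_breast_image if raw_breast_image.ndim == 2
            else cv2.cvtColor(raw_breast_image, cv2.COLOR_BGR2GRAY))
    # Otsu threshold separates tissue from background (assumes uint8 input).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return raw_breast_image  # nothing above background: keep the raw image
    x1, x2, y1, y2 = xs.min(), xs.max(), ys.min(), ys.max()
    return raw_breast_image[y1:y2 + 1, x1:x2 + 1]  # effective area crop
```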
4. The method according to any one of claims 1 to 3, wherein the step of establishing correspondence between image features of the breast image and the breast lesion region comprises:
acquiring sample data for establishing a corresponding relation between the image characteristics and the breast lesion area;
analyzing the characteristics and the rules of the image characteristics, and determining the network structure and the network parameters of the artificial neural network according to the characteristics and the rules;
training and testing the network structure and the network parameters by using the sample data, and determining the corresponding relation between the image characteristics and the breast lesion area.
5. The method of claim 4, wherein the step of obtaining sample data for establishing correspondence between the image features and the breast lesion region comprises:
acquiring image characteristics and breast lesion areas of a plurality of different breast images;
analyzing the image characteristics, and selecting data related to the breast lesion area as the image characteristics by combining prestored lesion position information labeled by a doctor;
and taking the data pair formed by the breast lesion area and the selected image characteristics as sample data.
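For illustration only (not part of the claim): a minimal sketch of forming the sample data pairs of claim 5, where `relevance_fn` is a hypothetical filter standing in for "selecting data related to the breast lesion area in combination with the pre-stored lesion positions labeled by a doctor".

```python
def build_sample_data(images_features, lesion_labels, relevance_fn):
    """Form (selected image features, breast lesion area) data pairs."""
    samples = []
    for features, lesion_area in zip(images_features, lesion_labels):
        # Keep only the features the relevance filter ties to the lesion area.
        selected = [f for f in features if relevance_fn(f, lesion_area)]
        if selected:
            samples.append((selected, lesion_area))
    return samples
```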
6. The method of claim 5, wherein the network structure comprises: at least one of a BP neural network, a CNN neural network, an RNN neural network, and a residual neural network; wherein the CNN neural network comprises: at least one of a Faster R-CNN neural network and a VGG neural network; and/or,
the network parameters comprise: at least one of the number of input nodes, the number of output nodes, the number of hidden layers, the number of hidden nodes, an initial weight and a bias value.
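For illustration only (not part of the claim): initializing a fully connected network from the network parameters listed in claim 6 (numbers of input and output nodes, hidden layers and hidden nodes, initial weight, and bias value). A minimal NumPy sketch with purely illustrative defaults:

```python
import numpy as np

def init_network(n_input, n_output, n_hidden_layers=1, n_hidden_nodes=16,
                 init_weight_scale=0.01, init_bias=0.0, seed=0):
    """Build weight matrices and bias vectors from the named parameters."""
    rng = np.random.default_rng(seed)
    sizes = [n_input] + [n_hidden_nodes] * n_hidden_layers + [n_output]
    weights = [rng.normal(0.0, init_weight_scale, (a, b))
               for a, b in zip(sizes[:-1], sizes[1:])]
    biases = [np.full(b, init_bias) for b in sizes[1:]]
    return weights, biases
```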
7. The method of claim 4, wherein the step of training the network structure and the network parameters comprises:
selecting a part of data in the sample data as a training sample, inputting the image characteristics in the training sample into the network structure, and training through an activation function of the network structure and the network parameters to obtain an actual training result;
determining whether an actual training error between the actual training result and a corresponding breast lesion area in the training sample meets a set training error;
determining that the training of the network structure and the network parameters is completed when the actual training error satisfies the set training error; and/or,
the step of testing the network structure and the network parameters includes:
selecting another part of data in the sample data as a test sample, inputting the image characteristics in the test sample into the trained network structure, and testing by using the activation function and the trained network parameters to obtain an actual test result;
determining whether an actual test error between the actual test result and a corresponding breast lesion area in the test sample satisfies a set test error;
and when the actual test error meets the set test error, determining that the test on the network structure and the network parameters is finished.
8. A breast lesion detection device, comprising:
the relationship establishing module is used for establishing the corresponding relationship between the image characteristics of the mammary gland image and the mammary gland lesion area by utilizing the self-learning capability of the artificial neural network;
the acquisition module is used for acquiring the current image characteristics of the current mammary gland image;
the determining module is used for determining the current breast lesion area corresponding to the current image characteristic according to the corresponding relation;
the determining module includes: and the lesion determining submodule is used for determining the breast lesion area corresponding to the image feature which is the same as the current image feature in the corresponding relation as the current breast lesion area.
9. A terminal, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the terminal to perform the breast lesion detection method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program for causing a processor to execute the breast lesion detection method according to any one of claims 1 to 7.
CN201911201347.1A 2019-11-29 2019-11-29 Method and device for detecting breast lesions Pending CN111127400A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911201347.1A CN111127400A (en) 2019-11-29 2019-11-29 Method and device for detecting breast lesions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911201347.1A CN111127400A (en) 2019-11-29 2019-11-29 Method and device for detecting breast lesions

Publications (1)

Publication Number Publication Date
CN111127400A true CN111127400A (en) 2020-05-08

Family

ID=70497189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911201347.1A Pending CN111127400A (en) 2019-11-29 2019-11-29 Method and device for detecting breast lesions

Country Status (1)

Country Link
CN (1) CN111127400A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190102878A1 (en) * 2017-09-30 2019-04-04 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for analyzing medical image
CN109635835A (en) * 2018-11-08 2019-04-16 深圳蓝韵医学影像有限公司 A kind of breast lesion method for detecting area based on deep learning and transfer learning
CN109949297A (en) * 2019-03-20 2019-06-28 天津工业大学 Pulmonary nodule detection method based on Reception and Faster R-CNN
CN110033042A (en) * 2019-04-15 2019-07-19 青岛大学 A kind of carcinoma of the rectum ring week incisxal edge MRI image automatic identifying method and system based on deep neural network
CN110321815A (en) * 2019-06-18 2019-10-11 中国计量大学 A kind of crack on road recognition methods based on deep learning
CN110210463A (en) * 2019-07-03 2019-09-06 中国人民解放军海军航空大学 Radar target image detecting method based on Precise ROI-Faster R-CNN
CN110400298A (en) * 2019-07-23 2019-11-01 中山大学 Detection method, device, equipment and the medium of heart clinical indices
CN110414607A (en) * 2019-07-31 2019-11-05 中山大学 Classification method, device, equipment and the medium of capsule endoscope image
CN110517249A (en) * 2019-08-27 2019-11-29 中山大学 Imaging method, device, equipment and the medium of ultrasonic elastic image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU HAIDONG; YANG XIAOYU; ZHU LINZHONG: "Suspicious region labeling of breast cancer pathological images based on generative adversarial networks", no. 06 *
LIN ZEHUI ET AL.: "Quality control of fetal head circumference ultrasound images based on Faster R-CNN", Chinese Journal of Biomedical Engineering, vol. 38, no. 4, 31 August 2019 (2019-08-31) *
GAO XIN ET AL.: "Vehicle detection in dense areas of remote sensing images based on variable convolutional neural networks", Journal of Electronics & Information Technology, vol. 40, no. 12, 31 December 2018 (2018-12-31) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640126A (en) * 2020-05-29 2020-09-08 成都金盘电子科大多媒体技术有限公司 Artificial intelligence diagnosis auxiliary method based on medical image
CN111640126B (en) * 2020-05-29 2023-08-22 成都金盘电子科大多媒体技术有限公司 Artificial intelligent diagnosis auxiliary method based on medical image
US11587231B2 (en) 2020-11-24 2023-02-21 Jiangsu University Comprehensive detection device and method for cancerous region
CN115836841A (en) * 2022-11-23 2023-03-24 深兰自动驾驶研究院(山东)有限公司 Mammary gland monitoring method, device and computer readable storage medium
CN118039087A (en) * 2024-04-15 2024-05-14 青岛山大齐鲁医院(山东大学齐鲁医院(青岛)) Breast cancer prognosis data processing method and system based on multidimensional information

Similar Documents

Publication Publication Date Title
CN107506761B (en) Brain image segmentation method and system based on significance learning convolutional neural network
US11055571B2 (en) Information processing device, recording medium recording information processing program, and information processing method
Tong et al. Salient object detection via bootstrap learning
CN107610087B (en) Tongue coating automatic segmentation method based on deep learning
Kisilev et al. Medical image description using multi-task-loss CNN
CN111127400A (en) Method and device for detecting breast lesions
Zhang et al. SHA-MTL: soft and hard attention multi-task learning for automated breast cancer ultrasound image segmentation and classification
CN107993221B (en) Automatic identification method for vulnerable plaque of cardiovascular Optical Coherence Tomography (OCT) image
WO2020087838A1 (en) Blood vessel wall plaque recognition device, system and method, and storage medium
CN108537751B (en) Thyroid ultrasound image automatic segmentation method based on radial basis function neural network
CN112085714B (en) Pulmonary nodule detection method, model training method, device, equipment and medium
CN111275686B (en) Method and device for generating medical image data for artificial neural network training
CN111967464B (en) Weak supervision target positioning method based on deep learning
CN114998220B (en) Tongue image detection and positioning method based on improved Tiny-YOLO v4 natural environment
CN112365497A (en) High-speed target detection method and system based on Trident Net and Cascade-RCNN structures
CN110246579B (en) Pathological diagnosis method and device
CN110705621A (en) Food image identification method and system based on DCNN and food calorie calculation method
CN112330624A (en) Medical image processing method and device
Zhang et al. Saliency detection via extreme learning machine
US20200175324A1 (en) Segmentation of target areas in images
CN110472673B (en) Parameter adjustment method, fundus image processing device, fundus image processing medium and fundus image processing apparatus
CN114638800A (en) Improved Faster-RCNN-based head shadow mark point positioning method
CN112991281B (en) Visual detection method, system, electronic equipment and medium
CN113177554B (en) Thyroid nodule identification and segmentation method, system, storage medium and equipment
CN111325282B (en) Mammary gland X-ray image identification method and device adapting to multiple models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No.103, baguang District Service Center, No.2 BaiShaWan Road, baguang community, Kuiyong street, Dapeng New District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Lanying Medical Technology Co.,Ltd.

Address before: 518000 1st floor, building B, jingchengda Industrial Park, Keji 4th Road, Langxin community, Shiyan street, Bao'an District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN LANYUN MEDICAL IMAGE CO.,LTD.