CN108665456B - Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence - Google Patents

Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence

Info

Publication number
CN108665456B
Authority
CN
China
Prior art keywords
focus
region
picture
area
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810463602.9A
Other languages
Chinese (zh)
Other versions
CN108665456A (en)
Inventor
周振忠
周龙灏
周龙瀚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shangyiwang Information Technology Co ltd
Original Assignee
Guangzhou Shangyiwang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Guangzhou Shangyiwang Information Technology Co ltd filed Critical Guangzhou Shangyiwang Information Technology Co ltd
Priority application: CN201810463602.9A
Publication of application: CN108665456A
Application granted; publication of grant: CN108665456B
Legal status: Active

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis > G06T7/0002 Inspection of images, e.g. flaw detection > G06T7/0012 Biomedical image inspection
    • G06T5/00 Image enhancement or restoration > G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/10 Image acquisition modality > G06T2207/10132 Ultrasound image
    • G06T2207/20 Special algorithmic details > G06T2207/20081 Training; Learning
    • G06T2207/20 Special algorithmic details > G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing > G06T2207/30004 Biomedical image processing > G06T2207/30068 Mammography; Breast

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

An embodiment of the invention provides a method for real-time labeling of breast ultrasound lesion regions based on artificial intelligence. The method comprises the following steps: dividing a breast ultrasound video into a picture set frame by frame, in timestamp order; detecting the picture set in sequence with a lesion region detection and grading model, determining the BI-RADS category level and, at the same time, the lesion regions; labeling the contour line of each lesion region in the picture, where the type of contour line is related to the BI-RADS category level; and recombining the frame pictures into a video in timestamp order. An embodiment of the invention also provides a method for building the lesion region detection and grading model used by the real-time labeling method, as well as a system for real-time labeling of breast ultrasound lesion regions based on artificial intelligence. The embodiments of the invention enhance lesion recognition capability, reduce the misdiagnosis rate, and assist doctors in giving more accurate recommendations.

Description

Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence
Technical Field
The invention relates to the fields of artificial intelligence and ultrasonic medical image processing, and in particular to a method and system for real-time labeling of breast ultrasound lesion regions based on artificial intelligence, and to a method and system for building a lesion region detection and grading model.
Background
Breast cancer is one of the most common malignant tumors in women, and in recent years its incidence in China has risen year by year while the age of onset has trended younger; the situation is severe. Clinically, breast cancer is usually diagnosed by imaging: both breast ultrasound and molybdenum-target (CR) mammography give good diagnostic results, and imaging is currently the main means of screening for breast cancer. Molybdenum-target imaging, however, is expensive, involves radiation, is poorly suited to the breast structure of Chinese women, and requires bulky equipment, so it is unsuitable for large-scale breast cancer screening in China. Ultrasound, by contrast, is non-invasive, painless, and radiation-free; the equipment is compact, portable, and inexpensive; and it is well suited to the breast structure of Chinese women. It has therefore become the preferred imaging modality for breast cancer screening.
Ultrasonic diagnosis of breast cancer relies mainly on manual interpretation by sonographers who have undergone years of rigorous training. In the process of implementing the invention, the inventors found that the related art has at least the following problems:
Because the number of qualified sonographers is extremely limited, and they must also diagnose many other diseases in daily practice, they cannot meet the demands of the enormous breast cancer screening workload. In addition, breast cancer ultrasound images are complex and variable, and the lesion region is often tiny and extremely difficult for the human eye to observe and find, so missed diagnoses and misdiagnoses easily occur.
Disclosure of Invention
The invention aims to solve the following problems in the prior art: breast cancer ultrasound images are complex and variable, lesion regions are often tiny and hard for the human eye to observe and find, missed diagnoses and misdiagnoses easily occur, and the extremely limited number of sonographers cannot meet the enormous demand for breast cancer screening.
In a first aspect, an embodiment of the present invention provides a method for building a lesion region detection and grading model, comprising:
receiving a training set of ultrasound pictures in which the real lesion regions are labeled with lesion marker symbols, and, for each ultrasound picture in the training set in turn, repairing the partial feature textures masked by the marker symbols according to the similarity of feature textures within the ultrasound picture;
eliminating interference information in each repaired ultrasound picture with a faster region-based convolutional neural network (Faster R-CNN), so as to extract the effective tissue region of each ultrasound picture;
training at least one lesion grading sub-network on the breast cancer BI-RADS grades based on each effective tissue region, and performing weighted optimization over the lesion grading sub-networks by Bayesian weighting to determine a lesion grading network;
building a classification neural network from the inter-class and/or intra-class features of each effective tissue region; when a lesion region extracted by the classification neural network does not match the real lesion region, adjusting the inter-class and/or intra-class feature distances through a loss function to optimize the classification neural network, until the lesion regions extracted by the classification neural network match the real lesion regions;
and building the lesion region detection and grading model from the lesion grading network and the classification neural network.
In a second aspect, an embodiment of the present invention provides a method for real-time labeling of breast ultrasound lesion regions based on artificial intelligence, comprising:
dividing a breast ultrasound video into a picture set frame by frame, in timestamp order;
detecting the picture set in sequence with a lesion region detection and grading model to determine the BI-RADS category level, while detecting the effective tissue region of each picture in the picture set and determining the lesion regions from the inter-class and/or intra-class features of the effective tissue regions;
labeling the contour line of each lesion region in the picture, where the type of contour line is related to the BI-RADS category level;
and recombining the frame pictures into a video in timestamp order.
In a third aspect, an embodiment of the present invention provides a system for building a lesion region detection and grading model, comprising:
a picture repair program module, configured to receive a training set of ultrasound pictures in which the real lesion regions are labeled with lesion marker symbols, and, for each ultrasound picture in the training set in turn, to repair the partial feature textures masked by the marker symbols according to the similarity of feature textures within the ultrasound picture;
an effective tissue region extraction program module, configured to eliminate interference information in each repaired ultrasound picture with a faster region-based convolutional neural network (Faster R-CNN), so as to extract the effective tissue region of each ultrasound picture;
a grading network training program module, configured to train at least one lesion grading sub-network on the breast cancer BI-RADS grades based on each effective tissue region, and to perform weighted optimization over the lesion grading sub-networks by Bayesian weighting to determine a lesion grading network;
a classification neural network training program module, configured to build a classification neural network from the inter-class and/or intra-class features of each effective tissue region and, when a lesion region extracted by the classification neural network does not match the real lesion region, to adjust the inter-class and/or intra-class feature distances through a loss function and optimize the classification neural network until the lesion regions extracted by the classification neural network match the real lesion regions;
and a lesion region detection and grading model building program module, configured to build the lesion region detection and grading model from the lesion grading network and the classification neural network.
In a fourth aspect, an embodiment of the present invention provides a system for real-time labeling of breast ultrasound lesion regions based on artificial intelligence, comprising:
a video segmentation program module, configured to divide a breast ultrasound video into a picture set frame by frame, in timestamp order;
a lesion region and category level determination program module, configured to detect the picture set in sequence with a lesion region detection and grading model to determine the BI-RADS category level, while detecting the effective tissue region of each picture in the picture set and determining the lesion regions from the inter-class and/or intra-class features of the effective tissue regions;
a contour labeling program module, configured to label the contour line of each lesion region in the picture, where the type of contour line is related to the BI-RADS category level;
and a video synthesis program module, configured to recombine the frame pictures into a video in timestamp order.
In a fifth aspect, an embodiment of the present invention provides an electronic device, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the method for building a lesion region detection and grading model according to any embodiment of the present invention.
In a sixth aspect, an embodiment of the present invention provides an electronic device, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the method for real-time labeling of breast ultrasound lesion regions based on artificial intelligence according to any embodiment of the present invention.
In a seventh aspect, an embodiment of the present invention provides a storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the method for building a lesion region detection and grading model according to any embodiment of the present invention.
In an eighth aspect, an embodiment of the present invention provides a storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the method for real-time labeling of breast ultrasound lesion regions based on artificial intelligence according to any embodiment of the present invention.
The embodiments of the invention achieve real-time labeling of breast ultrasound lesion regions based on artificial intelligence. By constructing the neural networks, repairing the image feature details masked by the marker symbols, and increasing the inter-class distance of the extracted high-level features, classification precision is improved, lesion recognition is strengthened, and the misdiagnosis rate is reduced. By then decomposing and reconstructing the breast ultrasound video and labeling the lesion regions in the video with the constructed neural networks, the doctor's diagnosis is assisted, the workload is greatly reduced, working efficiency and quality of service improve, and the doctor is helped to give patients more accurate diagnosis and treatment recommendations based on the contour lines of the lesion regions.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for building a lesion region detection and grading model according to an embodiment of the present invention;
Fig. 2 is an ultrasound image masked by marker symbols, referenced by the method for building a lesion region detection and grading model according to an embodiment of the present invention;
Fig. 3 is a flowchart of a method for real-time labeling of breast ultrasound lesion regions based on artificial intelligence according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a system for building a lesion region detection and grading model according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a system for real-time labeling of breast ultrasound lesion regions based on artificial intelligence according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a method for building a lesion region detection and grading model according to an embodiment of the present invention, comprising the following steps:
S11: receiving a training set of ultrasound pictures in which the real lesion regions are labeled with lesion marker symbols, and, for each ultrasound picture in the training set in turn, repairing the partial feature textures masked by the marker symbols according to the similarity of feature textures within the ultrasound picture;
S12: eliminating interference information in each repaired ultrasound picture with a faster region-based convolutional neural network (Faster R-CNN), so as to extract the effective tissue region of each ultrasound picture;
S13: training at least one lesion grading sub-network on the breast cancer BI-RADS grades based on each effective tissue region, and performing weighted optimization over the lesion grading sub-networks by Bayesian weighting to determine a lesion grading network;
S14: building a classification neural network from the inter-class and/or intra-class features of each effective tissue region; when a lesion region extracted by the classification neural network does not match the real lesion region, adjusting the inter-class and/or intra-class feature distances through a loss function to optimize the classification neural network, until the lesion regions extracted by the classification neural network match the real lesion regions;
S15: and building the lesion region detection and grading model from the lesion grading network and the classification neural network.
In this embodiment, because training the model requires a certain amount of data, a certain number of images with lesion markers must be prepared in advance. When only a small sample of images with lesion marker symbols is available, the data are augmented to reduce overfitting and strengthen the generalization ability of the trained model. On the assumption that breast cancer lesions are insensitive to rotation, each picture is rotated by 90°, 180°, and 270° and mirrored, and size scaling is applied; at the same time, the original data set is enlarged with color transforms such as PCA (principal component analysis) jittering and additive Gaussian noise. The augmentation may be performed after the region masked by the marker symbols in each image of the training set has been repaired.
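As a concrete illustration of this augmentation step, the sketch below applies the rotation, mirroring, scaling, and Gaussian-noise transforms named above with NumPy and OpenCV. The function name, scale factor, and noise level are illustrative assumptions rather than values given in the patent; PCA jittering is omitted for brevity.

```python
import numpy as np
import cv2

def augment_ultrasound(img):
    """Illustrative augmentation pass: rotations, mirroring, scaling,
    and additive Gaussian noise. Names and parameters are assumptions."""
    out = []
    for k in (1, 2, 3):                       # 90, 180, 270 degree rotations
        out.append(np.rot90(img, k))
    out.append(cv2.flip(img, 1))              # horizontal mirror
    h, w = img.shape[:2]
    out.append(cv2.resize(img, (int(w * 0.8), int(h * 0.8))))  # size scaling
    noisy = img.astype(np.float32) + np.random.normal(0, 8.0, img.shape)
    out.append(np.clip(noisy, 0, 255).astype(np.uint8))        # Gaussian noise
    return out
```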
For step S11, the training data are essentially pictures from a breast cancer data set in which the lesion regions have been labeled by professional sonographers. However, the images labeled by doctors carry labeling interference: feeding them into training directly would impair the extraction of effective features of the breast cancer lesion regions. Fig. 2 shows such a breast ultrasound image carrying marker symbols. The received training set with lesion marker symbols must therefore be repaired, so that the feature details hidden by the marker symbols in each image of the training set are restored in turn according to image texture similarity.
For step S12, the faster region-based convolutional neural network (Faster R-CNN) is used to eliminate the interference information in the images repaired in step S11 and to remove the large black borders contained in breast ultrasound images, so that interference signals are reduced and better small-target detection performance is obtained.
To this end, a Faster R-CNN network can be used to extract the effective tissue region of the ultrasound image. In the feature extraction layer, convolutional neural networks such as ZF (ZFNet), VGGNet, and ResNet are adopted to extract front-end features, to suit the efficiency and precision requirements of different ultrasound pictures; target candidate boxes are then extracted by an RPN (Region Proposal Network); finally, the candidate boxes for the effective region of the ultrasound picture are merged by a classification layer to output the final target box.
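The following minimal sketch shows this candidate-extraction stage with torchvision's off-the-shelf Faster R-CNN; the ResNet-50-FPN backbone stands in for the ZF/VGG/ResNet front ends named above, and the weights choice and score threshold are assumptions, not values from the patent.

```python
import torch
import torchvision

# Off-the-shelf Faster R-CNN: backbone features -> RPN proposals -> classification head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def extract_valid_region(image_tensor, score_thresh=0.5):
    """image_tensor: float tensor [3, H, W] in [0, 1]. Returns candidate
    boxes for the effective tissue region above the (assumed) threshold."""
    with torch.no_grad():
        pred = model([image_tensor])[0]
    keep = pred["scores"] > score_thresh
    return pred["boxes"][keep]
```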
When determining the effective region of the ultrasound picture, edge information can also be extracted from image gradients. An edge-enhancement operator first highlights local edges in the ultrasound picture; the edge strength of each pixel is then defined, and the point set of the edge is extracted by thresholding, thereby detecting the edge of the effective region of the breast ultrasound image.
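A minimal sketch of this gradient-based alternative, assuming a Sobel operator as the edge-enhancement operator and an illustrative threshold value:

```python
import cv2
import numpy as np

def gradient_edges(gray, thresh=40):
    """Highlight local edges with Sobel gradients, define per-pixel edge
    strength, and threshold it to extract the edge point set. The
    threshold value is an illustrative assumption."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    strength = cv2.magnitude(gx, gy)              # edge intensity per pixel
    return np.argwhere(strength > thresh)         # point set of the edge
```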
For step S13, because the number of samples in each class is small, the distance between the features extracted for different breast cancer classes is small, which makes BI-RADS grading of the full breast image difficult. How to increase the inter-class distance of the extracted high-level features, improve classification accuracy, and strengthen the generalization ability of the network while preserving efficiency is the main problem to be solved. Therefore, different deep learning network models are trained on multi-scale data sets, and the predicted outputs of these networks are combined by a weighted Bayesian network to obtain the final decision, thereby applying deep learning to assist breast cancer BI-RADS grading.
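The weighted fusion of sub-network outputs could look like the sketch below; using normalized per-network weights (for example, derived from validation accuracy) is a simplifying assumption standing in for the weighted Bayesian network described above.

```python
import numpy as np

def weighted_ensemble(probs_per_net, weights):
    """Fuse the BI-RADS class distributions emitted by several grading
    sub-networks into one decision via a normalized weighted average."""
    probs = np.asarray(probs_per_net)          # shape: [n_nets, n_classes]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalize the weights
    fused = (w[:, None] * probs).sum(axis=0)   # weighted average distribution
    return int(fused.argmax()), fused          # predicted class index, scores
```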
For step S14, breast cancer lesions vary in size, have small inter-class and large intra-class distances, and exhibit complex image features. Detecting and localizing them must not only guarantee accuracy but also meet real-time requirements. Target detection with traditional shallow models is inefficient, and the features it extracts are poorly abstracted, so it cannot capture the high-level features of breast cancer images. Therefore, exploiting the convolutional neural network's ability to extract deep features automatically, an SSD network is trained as the classification neural network for detecting lesion regions in breast cancer ultrasound pictures. The network consists of two parts. The first is the feature extraction layer, which uses a VGG-16 convolutional network to extract front-end features; to reduce the complexity of this layer, VGG-16 is first pre-trained on the large ImageNet data set and then fine-tuned on breast cancer ultrasound images. The second is the extra feature layers, which extract candidate boxes and output the final detected target box after merging in the last layer. To increase the translation and scale invariance of the extracted feature maps, target bounding boxes are generated from feature maps at different layers and matched against the real lesion regions labeled by the marker symbols, so that the classification neural network is continuously optimized and classification precision and efficiency improve.
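A hedged sketch of this detector setup using torchvision's SSD300/VGG-16 implementation; the two-class head (background and lesion) and the weights option are assumptions, and the patent's exact extra feature layers may differ from this stand-in.

```python
import torchvision

# SSD with a VGG-16 backbone: the backbone is ImageNet pre-trained, the
# detection head is re-initialized for two classes and fine-tuned on
# breast ultrasound images.
model = torchvision.models.detection.ssd300_vgg16(
    weights_backbone="DEFAULT",   # ImageNet pre-training of the VGG-16 front end
    num_classes=2,                # background + lesion
)
# Fine-tune as usual: in training mode, model(images, targets) returns the SSD losses.
```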
For step S15, the lesion region detection and grading model is built from the lesion grading network and the classification neural network determined in steps S13 and S14.
As this implementation shows, neural networks are constructed with artificial intelligence deep learning methods while the image feature details masked by the marker symbols are repaired; the inter-class distance of the extracted high-level features is increased, classification precision is improved, and lesion recognition is strengthened, thereby reducing the misdiagnosis rate.
As one implementation, in this embodiment, repairing the partial feature textures masked by the marker symbols in each ultrasound picture of the training set, in turn, according to the similarity of feature textures comprises:
dividing out pixel blocks of fixed size, each centered on a pixel on the edge of the region of partial feature texture masked by the marker symbols;
determining the repair priority of each pixel block according to the proportion of its area masked by the marker symbols, where a larger proportion means a lower priority;
and matching, within the ultrasound picture and by feature texture similarity, a repair block for the pixel block of highest priority, so as to repair the region masked by the marker symbols.
The process then continues, taking each pixel on the edge of the partially repaired masked region as a center, dividing out fixed-size pixel blocks, determining their repair priorities, and matching repair blocks by feature texture similarity, and stops only when the whole region masked by the marker symbols has been repaired.
In this embodiment, a block-based image texture completion technique can fill missing blocks of any size (the partial regions hidden by the marker symbols in fig. 2) while restoring picture detail. Using the Criminisi image inpainting algorithm to repair, in turn, the feature details masked by the marker symbols (the regions covered by the "+" signs in fig. 2) in each image of the training set comprises:
selecting pixels on the edge of the region masked by the marker symbols in turn, and constructing an n x n pixel block centered on each; determining the repair priority of each pixel block according to the proportion of its area masked by the marker symbols, where a higher proportion means a lower repair priority;
and searching the sound area of the effective region of the ultrasound picture (i.e., the area without marker symbols) for the sample block most similar, by image texture, to the pixel block to be repaired, using it to repair the highest-priority pixel block, and looping over these steps until the region masked by the marker symbols is completely repaired. Other image inpainting methods can also be used, for example mean-shift, PatchMatch, or pyramid-based repair.
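For illustration, the sketch below implements the two core operations of this loop: patch priority derived from the masked-area ratio, and exhaustive texture matching against sound regions. It makes simplifying assumptions (grayscale image, binary 0/1 mask, patches away from the image border) and keeps only the confidence-style term of the full Criminisi priority.

```python
import numpy as np

def patch_priority(mask, p, n=9):
    """Priority of the n-by-n patch centered at boundary pixel p: the
    larger the fraction masked by marker symbols, the lower the priority."""
    r = n // 2
    y, x = p
    patch = mask[y - r:y + r + 1, x - r:x + r + 1]   # 1 = masked pixel
    return 1.0 - patch.mean()                         # high ratio -> low priority

def best_match(img, mask, patch_yx, n=9):
    """Exhaustive sum-of-squared-differences search for the most similar
    fully sound patch; a real implementation would restrict and
    accelerate this search."""
    r = n // 2
    y, x = patch_yx
    target = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
    known = 1.0 - mask[y - r:y + r + 1, x - r:x + r + 1]  # compare known pixels only
    best, best_cost = None, np.inf
    h, w = img.shape[:2]
    for cy in range(r, h - r):
        for cx in range(r, w - r):
            if mask[cy - r:cy + r + 1, cx - r:cx + r + 1].any():
                continue                               # sample block must be sound
            cand = img[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(np.float32)
            cost = (known * (cand - target) ** 2).sum()
            if cost < best_cost:
                best, best_cost = (cy, cx), cost
    return best
```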
As this implementation shows, by combining an image inpainting algorithm with the lesion features, the image feature details masked by the marker symbols can be restored without damaging the original image detail.
Fig. 3 is a flowchart of a method for real-time labeling of breast ultrasound lesion regions based on artificial intelligence according to an embodiment of the present invention, comprising the following steps:
S21: dividing a breast ultrasound video into a picture set frame by frame, in timestamp order;
S22: detecting the picture set in sequence with the lesion region detection and grading model to determine the BI-RADS category level, while detecting the effective tissue region of each picture in the picture set and determining the lesion regions from the inter-class and/or intra-class features of the effective tissue regions;
S23: labeling the contour line of each lesion region in the picture, where the type of contour line is related to the BI-RADS category level;
S24: and recombining the frame pictures into a video in timestamp order.
In this embodiment, while the operator examines the subject with the ultrasound apparatus, the apparatus displays a breast ultrasound video of the examined region; this video is acquired and the breast cancer lesion regions in it are labeled in real time.
For step S21, the acquired breast ultrasound video is divided into a picture set frame by frame, in timestamp order.
For step S22, category prediction is performed on the whole breast ultrasound picture by the lesion region detection and grading model to determine the BI-RADS category level to which it belongs. BI-RADS (Breast Imaging Reporting and Data System) is the standardized breast imaging reporting and data system recommended by the American College of Radiology. Its categories mean the following. Category 0: recall; assessment must be completed with additional examinations, indicating that the information from this examination may be incomplete. Category 1: no abnormality found. Category 2: benign findings; routine follow-up (for example, once a year) is recommended. Category 3: probably benign, but a shorter follow-up interval (for example, every 3 to 6 months) is required; the proportion of malignancy at this category is under 2%. Category 4: suspicious abnormality; malignancy cannot be excluded, and biopsy is required for clarification. Category 4a: low likelihood of malignancy. Category 4b: moderate likelihood of malignancy. Category 4c: high likelihood of malignancy. Category 5: highly suggestive of malignancy (almost certainly malignant); surgical excision biopsy is required. Category 6: malignancy already confirmed by pathology. Giving the BI-RADS category of a breast ultrasound picture is therefore of positive guiding value in helping doctors direct a patient's treatment.
At the same time, even when the detected effective tissue region contains no lesion and the breast is healthy, the image features of breast cancer lesions, lymph nodes, and fat are similar, so fat and lymph nodes may be falsely detected as lesions. Moreover, breast cancer lesions vary in size, are irregular in shape, have small inter-class and large intra-class distances, and exhibit complex image features; detecting and localizing them must guarantee accuracy while meeting real-time requirements.
Therefore, the lesion region detection and grading model trained in the embodiment of fig. 1 is used; during training, the inter-class distance of the features extracted by the network is increased and the intra-class distance is reduced, so that the type of each suspicious target region in the ultrasound picture set can be determined. The types of target region include: fat region, lymph node region, and lesion region.
For step S23, the contour line of each lesion region is labeled in the picture, where the type of contour line is related to the BI-RADS category level; for example, contour lines of different levels may have different colors.
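One purely illustrative convention for such a mapping (the patent does not fix specific colors) is a lookup from category level to contour color:

```python
# Hypothetical BI-RADS category -> BGR contour color for cv2.drawContours.
BIRADS_COLORS = {
    "1": (0, 255, 0),     # green: no abnormality
    "2": (255, 255, 0),   # cyan: benign findings
    "3": (0, 255, 255),   # yellow: probably benign
    "4a": (0, 165, 255),  # orange: low suspicion
    "4b": (0, 100, 255),  # darker orange: moderate suspicion
    "4c": (0, 50, 255),   # orange-red: high suspicion
    "5": (0, 0, 255),     # red: highly suggestive of malignancy
}
```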
For step S24, the frame pictures are recombined into a video in timestamp order and output for display on the screen of the computer workstation.
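Steps S21 and S24 can be sketched together with OpenCV as below; annotate_frame is a hypothetical per-frame hook standing in for the detection and contour labeling of steps S22 and S23.

```python
import cv2

def label_video(in_path, out_path, annotate_frame):
    """Split the ultrasound video into frames in timestamp order, apply
    the per-frame annotator, and re-encode the frames into a video."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path,
                             cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()                # frames arrive in timestamp order
        if not ok:
            break
        writer.write(annotate_frame(frame))   # detection + contour labeling
    cap.release()
    writer.release()
```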
As this implementation shows, by decomposing and reconstructing the breast ultrasound video and labeling the lesion regions in it, the doctor's diagnosis is assisted, the workload is greatly reduced, working efficiency and quality of service improve, and more appropriate clinical treatment recommendations can be given to patients.
As one implementation, in this embodiment, labeling the contour line of each lesion region in the picture comprises:
denoising the grayscale-processed lesion region with guided filtering, and correcting the lesion region with a compensation function to remove part of the noise;
applying Gamma enhancement to the corrected lesion region, converting it into a binary image, and processing the binary image with morphological opening and closing operations;
and drawing the edge contour of the processed binary image as the contour line of the lesion region.
In this embodiment, the suspicious lesion region identified in step S22 initially uses a rectangular candidate box; but lesion shapes vary and lesion contours are blurred, so fine segmentation is necessary.
Guided filtering is an edge-preserving algorithm based on a local linear model. It guides the filtering process with a guide image: each pixel is defined in a linear relation with part of its neighboring pixels, filtering is performed locally, and all local filtering results are then accumulated into a global result, giving an output image whose structure resembles the input. Contour segmentation of the lesion exploits the guided filter's good edge preservation, introducing it to find the boundary of the lesion region.
First the lesion region is converted to grayscale; guided filtering is then applied to the grayscale picture to preserve its edge information while filtering out part of the noise; Gamma correction enhancement then compensates the picture's information; the picture is converted to a binary image with Otsu's method (maximum between-class variance); and opening and closing mathematical-morphology operations are applied to the binary image, so that the contour of the lesion region is segmented more accurately and its edge contour line can be extracted for labeling.
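The whole chain might be sketched as follows with OpenCV; guided filtering requires the opencv-contrib ximgproc module, and the filter radius, eps, gamma value, and kernel size are illustrative assumptions rather than values from the patent.

```python
import cv2
import numpy as np

def lesion_contour(roi_bgr, gamma=0.7):
    """Grayscale -> guided filtering (edge-preserving denoise) -> Gamma
    enhancement -> Otsu binarization -> morphological open/close ->
    contour extraction. Returns the largest external contour, if any."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    filtered = cv2.ximgproc.guidedFilter(gray, gray, 8, 100)
    norm = filtered.astype(np.float32) / 255.0
    enhanced = np.uint8(255 * np.power(norm, gamma))        # Gamma correction
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```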
As this implementation shows, accurately labeling the edge of the lesion region helps the doctor determine the lesion region more precisely and improves the doctor's working efficiency and quality.
Fig. 4 is a schematic structural diagram of a system for building a lesion region detection and grading model according to an embodiment of the present invention. The technical solution of this embodiment is applicable to methods for building a lesion region detection and grading model on a device; the system can execute the method for building a lesion region detection and grading model described in any of the above embodiments and is configured in a terminal.
The system for building a lesion region detection and grading model provided by this embodiment comprises: a picture repair program module 11, an effective tissue region extraction program module 12, a grading network training program module 13, a classification neural network training program module 14, and a lesion region detection and grading model building program module 15.
The picture repair program module 11 is configured to receive a training set of ultrasound pictures in which the real lesion regions are labeled with lesion marker symbols, and, for each ultrasound picture in the training set in turn, to repair the partial feature textures masked by the marker symbols according to the similarity of feature textures within the ultrasound picture. The effective tissue region extraction program module 12 is configured to eliminate interference information in each repaired ultrasound picture with a faster region-based convolutional neural network (Faster R-CNN), so as to extract the effective tissue region of each ultrasound picture. The grading network training program module 13 is configured to train at least one lesion grading sub-network on the breast cancer BI-RADS grades based on each effective tissue region, and to perform weighted optimization over the lesion grading sub-networks by Bayesian weighting to determine a lesion grading network. The classification neural network training program module 14 is configured to build a classification neural network from the inter-class and/or intra-class features of each effective tissue region and, when a lesion region extracted by the classification neural network does not match the real lesion region, to adjust the inter-class and/or intra-class feature distances through a loss function and optimize the classification neural network until the lesion regions extracted by the classification neural network match the real lesion regions. The lesion region detection and grading model building program module 15 is configured to build the lesion region detection and grading model from the lesion grading network and the classification neural network.
Further, the picture repair program module is configured to:
divide out pixel blocks of fixed size, each centered on a pixel on the edge of the region of partial feature texture masked by the marker symbols;
determine the repair priority of each pixel block according to the proportion of its area masked by the marker symbols, where a larger proportion means a lower priority;
and match, within the ultrasound picture and by feature texture similarity, a repair block for the pixel block of highest priority, so as to repair the region masked by the marker symbols.
Further, the system is also configured to:
continue taking each pixel on the edge of the partially repaired masked region as a center, dividing out fixed-size pixel blocks, determining their repair priorities, and matching repair blocks by feature texture similarity, stopping only when the whole region masked by the marker symbols has been repaired.
Fig. 5 is a schematic structural diagram of a system for real-time labeling of breast ultrasound lesion regions based on artificial intelligence according to an embodiment of the present invention. The technical solution of this embodiment is applicable to methods for real-time labeling of breast ultrasound lesion regions based on artificial intelligence; the system can execute the method described in any of the above embodiments and is configured in a terminal.
The system for real-time labeling of breast ultrasound lesion regions based on artificial intelligence provided by this embodiment comprises: a video segmentation program module 21, a lesion region and category level determination program module 22, a contour labeling program module 23, and a video synthesis program module 24.
The video segmentation program module 21 is configured to divide a breast ultrasound video into a picture set frame by frame, in timestamp order. The lesion region and category level determination program module 22 is configured to detect the picture set in sequence with a lesion region detection and grading model to determine the BI-RADS category level, while detecting the effective tissue region of each picture in the picture set and determining the lesion regions from the inter-class and/or intra-class features of the effective tissue regions. The contour labeling program module 23 is configured to label the contour line of each lesion region in the picture, where the type of contour line is related to the BI-RADS category level. The video synthesis program module 24 is configured to recombine the frame pictures into a video in timestamp order.
Further, the contour labeling program module is configured to:
denoise the grayscale-processed lesion region with guided filtering, and correct the lesion region with a compensation function to remove part of the noise;
apply Gamma enhancement to the corrected lesion region, convert it into a binary image, and process the binary image with morphological opening and closing operations;
and draw the edge contour of the processed binary image as the contour line of the lesion region.
An embodiment of the present invention also provides a non-volatile computer storage medium storing computer-executable instructions that can perform the method for building a lesion region detection and grading model of any of the above method embodiments.
As one embodiment, a non-volatile computer storage medium of the present invention stores computer-executable instructions configured to:
receive a training set of ultrasound pictures in which the real lesion regions are labeled with lesion marker symbols, and, for each ultrasound picture in the training set in turn, repair the partial feature textures masked by the marker symbols according to the similarity of feature textures within the ultrasound picture;
eliminate interference information in each repaired ultrasound picture with a faster region-based convolutional neural network (Faster R-CNN), so as to extract the effective tissue region of each ultrasound picture;
train at least one lesion grading sub-network on the breast cancer BI-RADS grades based on each effective tissue region, and perform weighted optimization over the lesion grading sub-networks by Bayesian weighting to determine a lesion grading network;
build a classification neural network from the inter-class and/or intra-class features of each effective tissue region and, when a lesion region extracted by the classification neural network does not match the real lesion region, adjust the inter-class and/or intra-class feature distances through a loss function to optimize the classification neural network, until the lesion regions extracted by the classification neural network match the real lesion regions;
and build the lesion region detection and grading model from the lesion grading network and the classification neural network.
An embodiment of the present invention also provides a non-volatile computer storage medium storing computer-executable instructions that can perform the method for real-time labeling of breast ultrasound lesion regions based on artificial intelligence of any of the above method embodiments.
As one embodiment, a non-volatile computer storage medium of the present invention stores computer-executable instructions configured to:
divide a breast ultrasound video into a picture set frame by frame, in timestamp order;
detect the picture set in sequence with the lesion region detection and grading model to determine the BI-RADS category level, while detecting the effective tissue region of each picture in the picture set and determining the lesion regions from the inter-class and/or intra-class features of the effective tissue regions;
label the contour line of each lesion region in the picture, where the type of contour line is related to the BI-RADS category level;
and recombine the frame pictures into a video in timestamp order.
As a non-volatile computer-readable storage medium, it may store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the methods in the software of the embodiments of the present invention. One or more program instructions are stored in the non-volatile computer-readable storage medium and, when executed by a processor, perform the method for building a lesion region detection and grading model and the method for real-time labeling of breast ultrasound lesion regions based on artificial intelligence of any of the above method embodiments.
The non-volatile computer-readable storage medium may include a program storage area and a data storage area: the program storage area may store the operating system and the applications required by at least one function, and the data storage area may store data created by the software of embodiments of the present invention, and the like. Further, the non-volatile computer-readable storage medium may include high-speed random access memory and non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the non-volatile computer-readable storage medium optionally includes memory located remotely from the processor and connected to the software of embodiments of the present invention over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
An embodiment of the present invention further provides an electronic device, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the method for building a lesion region detection and grading model and the method for real-time labeling of breast ultrasound lesion regions based on artificial intelligence according to any embodiment of the present invention.
The clients of the embodiments of the present application exist in various forms, including but not limited to:
(1) Mobile communication devices, characterized by mobile communication capability with voice and data communication as the primary goal; such terminals include smartphones (e.g., iPhones), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices, which belong to the category of personal computers, have computing and processing capability, and generally also have mobile internet access; such terminals include PDA, MID, and UMPC devices, e.g., iPads.
(3) Hospital imaging devices, including ultrasound machines and the like.
(4) Other electronic devices with processing capability.
In this document, relational terms such as first and second may be used solely to distinguish one entity or operation from another, without necessarily requiring or implying any actual such relationship or order between them. The terms "comprises", "comprising", and any variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of further identical elements in the process, method, article, or apparatus that comprises it.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for building a lesion region detection and grading model, comprising:
receiving a training set of ultrasound pictures in which the real lesion regions are labeled with lesion marker symbols, and, for each ultrasound picture in the training set in turn, repairing the partial feature textures masked by the marker symbols according to the similarity of feature textures within the ultrasound picture;
eliminating interference information in each repaired ultrasound picture with a faster region-based convolutional neural network (Faster R-CNN), so as to extract the effective tissue region of each ultrasound picture;
training at least one lesion grading sub-network on the breast cancer BI-RADS grades based on each effective tissue region, and performing weighted optimization over the lesion grading sub-networks by Bayesian weighting to determine a lesion grading network;
building a classification neural network from the inter-class and/or intra-class features of each effective tissue region, and, when a lesion region extracted by the classification neural network does not match the real lesion region, adjusting the inter-class and/or intra-class feature distances through a loss function to optimize the classification neural network until the lesion regions extracted by the classification neural network match the real lesion regions;
and building the lesion region detection and grading model from the lesion grading network and the classification neural network.
2. The method according to claim 1, wherein repairing, in turn and according to the similarity of feature textures, the partial feature textures masked by the marker symbols in each ultrasound picture of the training set comprises:
dividing out pixel blocks of fixed size, each centered on a pixel on the edge of the region of partial feature texture masked by the marker symbols;
determining the repair priority of each pixel block according to the proportion of its area masked by the marker symbols, where a larger proportion means a lower priority;
and matching, within the ultrasound picture and by feature texture similarity, a repair block for the pixel block of highest priority, so as to repair the region masked by the marker symbols.
3. The method of claim 2, further comprising:
continuing to take each pixel on the edge of the partially repaired masked region as a center, dividing out fixed-size pixel blocks, determining their repair priorities, and matching repair blocks by feature texture similarity, stopping only when the whole region masked by the marker symbols has been repaired.
4. A method for real-time labeling of breast ultrasound lesion regions based on artificial intelligence, comprising:
dividing a breast ultrasound video into a picture set frame by frame, in timestamp order;
detecting the picture set in sequence with the lesion region detection and grading model of any one of claims 1 to 3 to determine the BI-RADS category level, while detecting the effective tissue region of each picture in the picture set and determining the lesion regions from the inter-class and/or intra-class features of the effective tissue regions;
labeling the contour line of each lesion region in the picture, where the type of contour line is related to the BI-RADS category level;
and recombining the frame pictures into a video in timestamp order.
5. The method of claim 4, wherein labeling the contour line of each lesion region in the picture comprises:
denoising the grayscale-processed lesion region with guided filtering, and correcting the lesion region with a compensation function to remove part of the noise;
applying Gamma enhancement to the corrected lesion region, converting it into a binary image, and processing the binary image with morphological opening and closing operations;
and drawing the edge contour of the processed binary image as the contour line of the lesion region.
6. A system for establishing a lesion region detection and grading model, comprising:
a picture restoration program module, configured to receive an ultrasound picture training set in which real lesion regions are annotated with lesion marking symbols, and to restore in turn, according to the feature-texture similarity of each ultrasound picture, the partial feature textures covered by the marking symbol in each picture of the training set;
an effective tissue region extraction program module, configured to remove interference information from each restored ultrasound picture with a fast region-based convolutional neural network, so as to extract the effective tissue region of each ultrasound picture;
a grading network training program module, configured to train at least one lesion grading sub-network on the basis of each effective tissue region according to the breast cancer BI-RADS (Breast Imaging Reporting and Data System) classification, and to optimize the lesion grading sub-networks with Bayesian weighting to determine the lesion grading network;
a classification neural network training program module, configured to establish a classification neural network from the inter-class and/or intra-class features of the effective tissue regions, and, when a lesion region extracted by the classification neural network does not match the real lesion region, to adjust the inter-class and/or intra-class feature distances through a loss function, optimizing the classification neural network until the extracted lesion region matches the real lesion region;
and a lesion region detection and grading model establishing program module, configured to establish the lesion region detection and grading model based on the lesion grading network and the classification neural network.
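Claim 6's Bayesian weighting of the grading sub-networks is not spelled out in the patent; the sketch below assumes, purely for illustration, that each sub-network's BI-RADS probability vector is weighted by its normalized validation accuracy before fusion.

```python
import numpy as np

def fuse_grading_subnetworks(probs, val_acc):
    """Combine per-sub-network BI-RADS probability vectors into one grade.

    probs   : (n_subnets, n_grades) array of softmax outputs
    val_acc : (n_subnets,) validation accuracies used as reliability weights
    """
    w = np.asarray(val_acc, float)
    w /= w.sum()                          # normalize into posterior-like weights
    fused = w @ np.asarray(probs, float)  # weighted average per BI-RADS grade
    return int(fused.argmax())            # index of the predicted grade
```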
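Likewise, the loss that adjusts inter-class and intra-class feature distances can be illustrated with a generic margin loss, a stand-in rather than the patent's disclosed loss; it is written in plain NumPy for clarity, though in training it would be a differentiable loss in a deep-learning framework.

```python
import numpy as np

def inter_intra_class_loss(feats, labels, margin=1.0):
    """Pull same-class features together, push different-class features
    at least `margin` apart; assumes each class occurs at least twice."""
    feats = np.asarray(feats, float)
    labels = np.asarray(labels)
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)  # pairwise distances
    eye = np.eye(len(labels), dtype=bool)
    same = (labels[:, None] == labels[None, :]) & ~eye   # same class, not self
    diff = labels[:, None] != labels[None, :]            # different classes
    intra = d[same].mean()                               # shrink within-class spread
    inter = np.maximum(0.0, margin - d[diff]).mean()     # enforce between-class margin
    return intra + inter
```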
7. The system of claim 6, wherein the picture restoration program module is configured to:
divide pixel blocks of a fixed size, each centered on a pixel point at the edge of the region of partial feature texture covered by the marking symbol;
determine the repair priority of each pixel block from the proportion of the block covered by the marking symbol, a larger proportion giving a lower priority;
and match in the ultrasound picture, according to feature-texture similarity, a repair block for repairing the pixel block with the highest priority, so as to repair the region covered by the marking symbol.
8. The system of claim 7, wherein the system is further configured to:
after the repair, continue to take each pixel point at the edge of the remaining region covered by the marking symbol as a center, divide pixel blocks of a fixed size, determine the repair priority of each pixel block, and match in the ultrasound picture, according to feature-texture similarity, a repair block for repairing the pixel block with the highest priority, repeating until the entire region covered by the marking symbol has been repaired.
9. A system for real-time labeling of breast ultrasound lesion regions based on artificial intelligence, comprising:
a video segmentation program module, configured to split a breast ultrasound video into a picture set frame by frame, in timestamp order;
a lesion region and category level determination program module, configured to detect the picture set in sequence with the lesion region detection and grading model established according to any one of claims 1-3 to determine a BI-RADS category level, while detecting the effective tissue region of each picture in the picture set and determining the lesion region from the inter-class and/or intra-class features of the effective tissue region;
a contour line labeling program module, configured to label the contour line of each lesion region in the picture, wherein the line type of the contour corresponds to the BI-RADS category level;
and a video synthesis program module, configured to recombine the frame pictures into a video in timestamp order.
10. The system of claim 9, wherein the contour line labeling program module is configured to:
denoise the grayscale-converted lesion region with guided filtering, and correct the lesion region with a compensation function to remove residual noise;
apply Gamma enhancement to the corrected lesion region, convert it into a binary image, and process the binary image with morphological opening and closing operations;
and draw the edge contour of the processed binary image as the contour line of the lesion region.
CN201810463602.9A 2018-05-15 2018-05-15 Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence Active CN108665456B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810463602.9A CN108665456B (en) 2018-05-15 2018-05-15 Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810463602.9A CN108665456B (en) 2018-05-15 2018-05-15 Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN108665456A (en) 2018-10-16
CN108665456B (en) 2022-01-28

Family

ID=63778468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810463602.9A Active CN108665456B (en) 2018-05-15 2018-05-15 Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN108665456B (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109637629A (en) * 2018-10-31 2019-04-16 泰格麦迪(北京)医疗科技有限公司 A kind of BI-RADS hierarchy model method for building up
CN115345819A (en) * 2018-11-15 2022-11-15 首都医科大学附属北京友谊医院 Gastric cancer image recognition system, device and application thereof
CN109616195A (en) * 2018-11-28 2019-04-12 武汉大学人民医院(湖北省人民医院) The real-time assistant diagnosis system of mediastinum endoscopic ultrasonography image and method based on deep learning
CN109829889A (en) * 2018-12-27 2019-05-31 清影医疗科技(深圳)有限公司 A kind of ultrasound image processing method and its system, equipment, storage medium
CN110264443B (en) * 2019-05-20 2024-04-16 平安科技(深圳)有限公司 Fundus image lesion labeling method, device and medium based on feature visualization
CN110223287A (en) * 2019-06-13 2019-09-10 首都医科大学北京友谊医院 A method of early diagnosing mammary cancer rate can be improved
CN110223289A (en) * 2019-06-17 2019-09-10 上海联影医疗科技有限公司 A kind of image processing method and system
CN110232383B (en) * 2019-06-18 2021-07-02 湖南省华芯医疗器械有限公司 Focus image recognition method and focus image recognition system based on deep learning model
CN110349141A (en) * 2019-07-04 2019-10-18 复旦大学附属肿瘤医院 A kind of breast lesion localization method and system
CN110599476B (en) * 2019-09-12 2023-05-23 腾讯科技(深圳)有限公司 Disease grading method, device, equipment and medium based on machine learning
CN110613486B (en) * 2019-09-30 2022-04-22 深圳大学总医院 Method and device for detecting breast ultrasound image
CN111080580B (en) * 2019-11-29 2023-06-09 山东大学 Ultrasonic breast tumor rapid threshold segmentation method and system based on Zhongzhi set
CN111179227B (en) * 2019-12-16 2022-04-05 西北工业大学 Mammary gland ultrasonic image quality evaluation method based on auxiliary diagnosis and subjective aesthetics
CN111227864B (en) * 2020-01-12 2023-06-09 刘涛 Device for detecting focus by using ultrasonic image and computer vision
CN111383328B (en) * 2020-02-27 2022-05-20 西安交通大学 3D visualization method and system for breast cancer focus
CN111462049B (en) * 2020-03-09 2022-05-17 西南交通大学 Automatic lesion area form labeling method in mammary gland ultrasonic radiography video
CN111383236B (en) * 2020-04-24 2021-04-02 中国人民解放军总医院 Method, apparatus and computer-readable storage medium for labeling regions of interest
CN111724314A (en) * 2020-05-08 2020-09-29 天津大学 Method for detecting and removing special mark in medical image
CN111768366A (en) * 2020-05-20 2020-10-13 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging system, BI-RADS classification method and model training method
CN113781439B (en) * 2020-11-25 2022-07-29 北京医准智能科技有限公司 Ultrasonic video focus segmentation method and device
CN112634284B (en) * 2020-12-22 2022-03-25 上海体素信息科技有限公司 Weight map loss-based staged neural network CT organ segmentation method and system
CN112641466A (en) * 2020-12-31 2021-04-13 北京小白世纪网络科技有限公司 Ultrasonic artificial intelligence auxiliary diagnosis method and device
CN113344854A (en) * 2021-05-10 2021-09-03 深圳瀚维智能医疗科技有限公司 Breast ultrasound video-based focus detection method, device, equipment and medium
US11763428B2 (en) 2021-06-22 2023-09-19 Saudi Arabian Oil Company System and method for de-noising an ultrasonic scan image using a convolutional neural network
CN113793301B (en) * 2021-08-19 2023-07-21 首都医科大学附属北京同仁医院 Training method of fundus image analysis model based on dense convolution network model
CN113662573B (en) * 2021-09-10 2023-06-30 上海联影医疗科技股份有限公司 Mammary gland focus positioning method, device, computer equipment and storage medium
CN113855079A (en) * 2021-09-17 2021-12-31 上海仰和华健人工智能科技有限公司 Real-time detection and breast disease auxiliary analysis method based on breast ultrasonic image
CN114305503B (en) * 2021-12-09 2024-05-14 上海杏脉信息科技有限公司 Mammary gland disease follow-up system, medium and electronic equipment
CN114376615B (en) * 2022-03-04 2023-09-15 厦门大学附属中山医院 Mammary gland ultrasonic screening system and screening method based on artificial intelligence
CN115578394B (en) * 2022-12-09 2023-04-07 湖南省中医药研究院 Pneumonia image processing method based on asymmetric network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102117481B (en) * 2011-03-17 2012-11-28 西安交通大学 Automatic digital repair method of damaged images
CN104252570A (en) * 2013-06-28 2014-12-31 上海联影医疗科技有限公司 Mass medical image data mining system and realization method thereof
KR102307356B1 (en) * 2014-12-11 2021-09-30 삼성전자주식회사 Apparatus and method for computer aided diagnosis
CN106339591B (en) * 2016-08-25 2019-04-02 汤一平 A kind of self-service healthy cloud service system of prevention breast cancer based on depth convolutional neural networks
CN107507139B (en) * 2017-07-28 2019-11-22 北京航空航天大学 The dual sparse image repair method of sample based on Facet directional derivative feature

Also Published As

Publication number Publication date
CN108665456A (en) 2018-10-16

Similar Documents

Publication Publication Date Title
CN108665456B (en) Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence
CN110033456B (en) Medical image processing method, device, equipment and system
CN111428709B (en) Image processing method, device, computer equipment and storage medium
CN108464840B (en) Automatic detection method and system for breast lumps
CN111325739B (en) Method and device for detecting lung focus and training method of image detection model
CN111862044B (en) Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium
CN108416360B (en) Cancer diagnosis system and method based on breast molybdenum target calcification features
Bai et al. Automatic segmentation of cervical region in colposcopic images using K-means
WO2021136368A1 (en) Method and apparatus for automatically detecting pectoralis major region in molybdenum target image
WO2019184851A1 (en) Image processing method and apparatus, and training method for neural network model
Rajathi et al. Varicose ulcer (C6) wound image tissue classification using multidimensional convolutional neural networks
CN111476794B (en) Cervical pathological tissue segmentation method based on UNET
CN112330613B (en) Evaluation method and system for cytopathology digital image quality
CN111062953A (en) Method for identifying parathyroid hyperplasia in ultrasonic image
Sornapudi et al. Automated cervical digitized histology whole-slide image analysis toolbox
CN116758336A (en) Medical image intelligent analysis system based on artificial intelligence
WO2021139447A1 (en) Abnormal cervical cell detection apparatus and method
Yang et al. Endoscopic artefact detection and segmentation with deep convolutional neural network
CN111062909A (en) Method and equipment for judging benign and malignant breast tumor
Parraga et al. A review of image-based deep learning algorithms for cervical cancer screening
CN113793316B (en) Ultrasonic scanning area extraction method, device, equipment and storage medium
CN111275719B (en) Calcification false positive recognition method, device, terminal and medium and model training method and device
CN113940704A (en) Thyroid-based muscle and fascia detection device
Vijayalakshmi et al. Liver tumor detection using CNN
Mouzai et al. Xray-Net: Self-supervised pixel stretching approach to improve low-contrast medical imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant