CN113313685B - Renal tubular atrophy region identification method and system based on deep learning - Google Patents


Info

Publication number: CN113313685B (application number CN202110590551.8A)
Authority: CN (China)
Prior art keywords: tubular atrophy, area, network, image, ROI
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN113313685A
Inventors: 李明, 赖叶鑫, 王晨, 郝芳, 李心宇, 刘雪宇
Original and current assignee: Taiyuan University of Technology
Application filed by Taiyuan University of Technology; priority to CN202110590551.8A; publication of CN113313685A; application granted; publication of CN113313685B

Classifications

All under G (Physics) › G06 (Computing; calculating or counting) › G06T (Image data processing or generation, in general):

    • G06T 7/0012: Image analysis; inspection of images; biomedical image inspection
    • G06T 7/13: Segmentation; edge detection
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 7/187: Segmentation; edge detection involving region growing, region merging, or connected-component labelling
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/20081: Special algorithmic details: training; learning
    • G06T 2207/20104: Interactive definition of region of interest [ROI]
    • G06T 2207/30084: Biomedical image processing: kidney; renal

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of artificial-intelligence-assisted medicine, and specifically relates to a deep-learning-based method and system for identifying renal tubular atrophy regions, comprising the following steps: S1, acquiring images of kidney pathological sections on which the renal tubular atrophy regions have been labeled; S2, training an instance segmentation network on the images and the corresponding renal tubular atrophy labels, where the network is an improved Mask R-CNN: a cascade structure chains the detection models together, successively higher IOU thresholds select the training samples for each stage, and the output of each detection model serves as the input of the next, so the IOU of the retained samples rises stage by stage; S3, feeding the image under examination into the trained instance segmentation network to obtain the bounding-box position of each renal tubular atrophy region; S4, calculating the area and proportion of each renal tubular atrophy region. The invention detects renal tubular atrophy regions with high precision and a low missed-detection rate.

Description

Renal tubular atrophy region identification method and system based on deep learning
Technical Field
The invention belongs to the technical field of artificial-intelligence-assisted medicine, and specifically relates to a deep-learning-based renal tubular atrophy region identification method and system.
Background
Chronic kidney disease (CKD) is regarded by the World Health Organization (WHO) as one of the major public-health challenges, and kidney biopsy plays a key role in its diagnosis and treatment. Pathological diagnosis of renal biopsies has become an important reference for the diagnosis, treatment, and prognosis of patients with kidney disease.
Pathology plays a vital role in medicine. In the diagnosis of chronic kidney disease, the diagnosis obtained from pathological sections of kidney tissue taken by core-needle biopsy, then clamped, cut, and trimmed, is the most accurate and reliable, and is the "gold standard" for diagnosing chronic kidney disease. Many kidney diseases can be diagnosed only by combining pathological and clinical information, or even by pathological information alone, so pathological sections are important and the accuracy requirements are high. Tubular atrophy is a principal manifestation of chronic kidney disease: to determine whether a patient has tubular atrophy, a pathologist must examine the patient's kidney pathological section, and an authoritative pathologist is needed to judge the area proportion of the atrophied regions. The mismatch between medical demand and medical resources means that traditional practice cannot meet patients' needs. The pathology departments of some top-tier (Grade III-A) hospitals produce thousands of kidney pathological sections every day; although most may show no positive findings, doctors must still scrutinize every section under the microscope to judge whether the renal tubules contain atrophied regions, which consumes a great deal of effort.
With the continuing development of artificial intelligence and deep learning in recent years, deep learning is increasingly applied in medicine. To help pathologists screen pathological sections and improve their efficiency, a new identification method is needed that automatically recognizes renal tubular atrophy regions.
Disclosure of Invention
The invention overcomes the shortcomings of the prior art and solves the following technical problem: providing a deep-learning-based method and system for identifying renal tubular atrophy regions, so as to recognize renal tubular atrophy regions automatically and improve the efficiency with which pathologists screen pathological sections.
To solve this problem, the technical scheme adopted by the invention is as follows. A deep-learning-based renal tubular atrophy region identification method comprises the following steps:
s1, acquiring an image of a kidney pathological section subjected to renal tubular atrophy region labeling;
s2, training an example segmentation network based on the image and the corresponding renal tubular atrophy region label; the training process comprises:
s201, inputting the image into a ResNet101-FPN backbone network for feature extraction to obtain a feature map;
s202, inputting the feature map into an RPN network to obtain an ROI (region of interest);
s203, carrying out size processing on the ROI through an ROI Align layer to obtain a feature map with fixed ROI area size, and carrying out frame regression calculation through a network output module;
s204, setting an IOU threshold value, inputting the feature graph after frame regression with the IOU value larger than the IOU threshold value into the ROI Align layer again for size processing to obtain a feature graph with a fixed ROI size, and performing frame regression calculation through a network output module;
s205, continuously increasing the IOU threshold value, repeating the step S204 until the IOU values of all the characteristic graphs output by the network output module are smaller than the IOU threshold value, and finishing training;
s3, obtaining an image to be detected, inputting the image to be detected into the trained example segmentation network, and obtaining the target frame position of each renal tubular atrophy area in the image to be detected;
and S4, calculating the area and the proportion of each renal tubular atrophy region.
The specific steps by which the RPN obtains the ROI areas are as follows:
the feature map is input into the RPN to obtain candidate boxes; the candidate boxes are mapped onto the corresponding feature map to obtain a number of candidate ROI areas; the candidate ROI areas are sent through the RPN for binary (foreground/background) classification and bounding-box regression; and part of the candidate ROI areas are filtered out, leaving the final ROI areas.
The network output module comprises an FCN network and a deconvolution network, and is also used for mask calculation and classification.
The step S4 specifically includes the following steps:
converting the image into a gray-scale image with the cv2.cvtColor function of the OpenCV-python interface;
binarizing the image with the cv2.threshold function;
finding the contours of the identified tubular atrophy regions with the cv2.findContours function;
calculating the contour areas with the cv2.contourArea function to obtain the area of the tubular atrophy regions, and likewise calculating the total area of the kidney tissue in the image;
dividing the area of the tubular atrophy regions by the total area of the kidney tissue to obtain the proportion of the tubular atrophy regions.
The ROI Align layer resizes the ROI areas so that each ROI area is fixed at 7×7.
The invention also provides a renal tubular atrophy region identification system based on deep learning, which comprises:
an acquisition unit: the method comprises the steps of obtaining an image of a kidney pathological section marked in a renal tubular atrophy area;
a training unit: for training an instance segmentation network on the images and the corresponding tubular atrophy labels; the training process comprises:
inputting the images into a ResNet101-FPN backbone network for feature extraction to obtain feature maps;
inputting the feature maps into an RPN (region proposal network) to obtain ROI (region of interest) areas;
setting an IOU threshold, resizing the ROI areas through an ROI Align layer to obtain feature maps of fixed ROI size, and performing bounding-box regression through a network output module;
feeding the regressed feature maps whose IOU exceeds the threshold back into the ROI Align layer for resizing into fixed-size ROI feature maps, and performing bounding-box regression through the network output module again;
continually raising the IOU threshold and repeating the ROI resizing and bounding-box regression until the IOU values of all feature maps output by the network output module fall below the threshold, at which point training is complete;
a detection unit: the system is used for acquiring an image to be detected, inputting the image to be detected into a trained example segmentation network, and obtaining the position of a target frame of each renal tubular atrophy area in the image to be detected;
a calculation unit: for calculating the area and proportion of each tubular atrophy zone.
The network output module comprises an FCN network and a deconvolution network, and is also used for mask calculation and classification.
The renal tubular atrophy region identification system based on deep learning further comprises:
an output display module: for displaying the calculated area and proportion of each renal tubular atrophy region.
In the renal tubular atrophy region identification system based on deep learning, the specific method for calculating the area and the proportion of each renal tubular atrophy region by the calculation unit is as follows:
converting the image into a gray-scale image with the cv2.cvtColor function of the OpenCV-python interface;
binarizing the image with the cv2.threshold function;
finding the contours of the identified tubular atrophy regions with the cv2.findContours function;
calculating the contour areas with the cv2.contourArea function to obtain the area of the tubular atrophy regions, and likewise calculating the total area of the kidney tissue in the image;
dividing the area of the tubular atrophy regions by the total area of the kidney tissue to obtain the proportion of the tubular atrophy regions.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention provides a deep-learning-based method and system for identifying renal tubular atrophy regions. A Mask R-CNN network is used for training; to overcome missed detections, a cascade structure is added to the network to chain several detection models, successively higher IOU thresholds select the training samples at each stage, and each detection model's output serves as the next detection model's input. The network is thereby trained continuously, the detection precision is high, and missed detections are effectively avoided.
2. After an image to be examined has been recognized by the trained model, the image is converted into a gray-scale image with the cv2.cvtColor() function of the OpenCV-python interface, binarized with the cv2.threshold() function, the contours of the identified tubular atrophy regions are found with the cv2.findContours() function, and finally the contour areas are calculated with the cv2.contourArea() function, giving the area of the tubular atrophy regions. The total area of the kidney tissue in the image is obtained in the same way, and dividing the atrophy area by the tissue area gives the proportion of the tubular atrophy regions. The displayed result is intuitive and convenient, letting doctors observe tubular atrophy lesions directly; it saves medical workers a great deal of time and effort, helps improve pathologists' efficiency, and is of real significance for medicine.
Drawings
Fig. 1 is a schematic flow chart of a method for identifying a renal tubular atrophy area based on deep learning according to an embodiment of the present invention;
FIG. 2 is a simplified structural diagram of the instance segmentation network in an embodiment of the present invention;
FIG. 3 is a structural diagram of the instance segmentation network in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a scanned picture being cut and named;
FIG. 5 is a schematic diagram of the artificial labeling of renal tubular atrophy using labelme in an embodiment of the present invention;
FIG. 6 is a diagram illustrating test results using a test set in an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a renal tubular atrophy area identification system based on deep learning according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a result displayed by the output display module in the embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below. The described embodiments are some, but not all, embodiments of the invention; all other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the present invention.
Example one
As shown in figs. 1 to 3, an embodiment of the present invention provides a deep-learning-based renal tubular atrophy region identification method, comprising the following steps:
s1, acquiring an image of a kidney pathological section marked in a renal tubular atrophy area.
Specifically, an operator fixes a kidney pathological slide on a fully automatic digital pathology scanner, which digitizes the section. Because each digitized picture occupies too much memory and would make data loading too slow when the code runs, every picture is cut in order into several small pictures of equal size, and each cut picture is named according to the rule "patient number_serial number", as shown in fig. 4. The digitized pictures are then passed through an image-sharpness evaluation algorithm for screening. On the pictures that pass screening, the renal tubular atrophy regions are labeled manually with the labelme annotation software, with the label set to "RTA", as shown in fig. 5.
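The cutting-and-naming rule can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the function name, the row-major tile ordering, and the dropping of edge remainders are our choices, and the sharpness-screening step is not shown.

```python
import numpy as np

def cut_slide(scan, tile_size, patient_id):
    """Cut a digitized whole-slide array into equal-size square tiles,
    naming each tile by the 'patient number_serial number' rule from
    the text. Edge remainders smaller than a full tile are dropped."""
    h, w = scan.shape[:2]
    tiles = {}
    index = 0
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tiles[f"{patient_id}_{index}"] = scan[y:y + tile_size, x:x + tile_size]
            index += 1
    return tiles
```

In practice a whole-slide image is read with a dedicated library rather than held as one array; the sketch only illustrates the naming scheme.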
S2, training an instance segmentation network on the images and the corresponding renal tubular atrophy labels.
Specifically, the training process comprises:
S201, inputting the images into a ResNet101-FPN backbone network for feature extraction to obtain feature maps;
S202, inputting the feature maps into an RPN (region proposal network) to obtain ROI (region of interest) areas;
S203, setting an IOU threshold, resizing the ROI areas through an ROI Align layer to obtain feature maps of fixed ROI size, and performing bounding-box regression through a network output module;
S204, feeding the regressed feature maps whose IOU exceeds the threshold back into the ROI Align layer for resizing into fixed-size ROI feature maps, and performing bounding-box regression through the network output module again;
S205, continually raising the IOU threshold and repeating step S204 until the IOU values of all feature maps output by the network output module fall below the threshold, at which point training is complete.
Figs. 2 and 3 are structural diagrams of the instance segmentation network in an embodiment of the present invention. In this embodiment, a cascade structure is added to chain the detection models: by setting an IOU threshold, the feature maps whose bounding-box regression results have an IOU above the threshold are fed into the next detection model, so the detection models are trained continuously, and the IOU threshold of each cascade stage is raised in turn. This improves the training effect and avoids missed or false detections of renal tubular atrophy regions.
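The cascade's sample-selection rule can be sketched in a few lines. The concrete thresholds (0.5, 0.6, 0.7) are typical cascade values we assume for illustration; the patent does not state them, and the bounding-box refinement that happens between stages is omitted.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def cascade_select(boxes, gt_box, thresholds=(0.5, 0.6, 0.7)):
    """At each cascade stage, keep only the boxes whose IOU with the
    ground truth exceeds that stage's (increasing) threshold; the
    survivors feed the next detection head."""
    per_stage = []
    survivors = list(boxes)
    for t in thresholds:
        survivors = [b for b in survivors if iou(b, gt_box) > t]
        per_stage.append(list(survivors))
    return per_stage
```

Because each stage sees only samples that already passed a lower threshold, the IOU of the retained samples rises stage by stage, which is the behavior the text describes.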
Specifically, in this embodiment ResNet101-FPN is used as the backbone network to extract features; ResNet's cross-layer (residual) connections make training easier. The feature map is input into the RPN to obtain candidate boxes (proposals); the candidate boxes are mapped onto the corresponding feature map to obtain several candidate ROI areas of different sizes; the candidate ROI areas are sent through the RPN (region proposal network) for binary classification (foreground or background) and bounding-box regression, filtering out part of the candidate ROI areas; the remaining ROI areas are fixed to 7×7 by the ROI Align layer; finally, the network output module classifies each ROI area, performs bounding-box regression, and computes the mask. The resulting bounding-box regressions are then fed into the ROI Align layer again for resizing, and the network output module again performs classification, bounding-box regression, and mask computation. This improves the training effect, effectively reduces missed detections, and raises accuracy.
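The "fixed to 7×7" step can be illustrated with a toy pooling function. Real ROI Align samples with bilinear interpolation and avoids quantization; `pool_to_fixed` is only our illustrative stand-in showing why every ROI, whatever its size, comes out as a 7×7 grid.

```python
import numpy as np

def pool_to_fixed(roi_feature, bins=7):
    """Average-pool an arbitrary H x W ROI feature map onto a fixed
    bins x bins grid, so downstream heads always see the same shape."""
    h, w = roi_feature.shape
    ys = np.linspace(0, h, bins + 1)
    xs = np.linspace(0, w, bins + 1)
    out = np.empty((bins, bins))
    for i in range(bins):
        for j in range(bins):
            y0, y1 = int(ys[i]), max(int(ys[i + 1]), int(ys[i]) + 1)
            x0, x1 = int(xs[j]), max(int(xs[j + 1]), int(xs[j]) + 1)
            out[i, j] = roi_feature[y0:y1, x0:x1].mean()
    return out
```

The fixed output shape is what lets the classification, bounding-box, and mask heads be ordinary fixed-size layers regardless of how large each detected region is.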
Specifically, as shown in fig. 3, in this embodiment, the network output module includes an FCN network and a deconvolution network, and the network output module is further configured to perform mask calculation and classification.
Specifically, in this embodiment the labeled data set is further divided into a training set and a test set; the training set is fed into the improved instance segmentation network for training to obtain a trained model, and the test set is then fed into the trained model for testing. The test results show an accuracy of approximately 85% for this embodiment.
S3, acquiring an image to be examined and inputting it into the trained instance segmentation network to obtain the bounding-box position of each renal tubular atrophy region in the image. Fig. 6 shows the bounding boxes obtained when the trained instance segmentation network is applied to a test-set image.
And S4, calculating the area and the proportion of each renal tubular atrophy region.
Specifically, the step S4 specifically includes the following steps:
converting the image into a gray-scale image with the cv2.cvtColor function of the OpenCV-python interface;
binarizing the image with the cv2.threshold function;
finding the contours of the identified tubular atrophy regions with the cv2.findContours function;
calculating the contour areas with the cv2.contourArea function to obtain the area of the tubular atrophy regions, and likewise calculating the total area of the kidney tissue in the image;
dividing the area of the tubular atrophy regions by the total area of the kidney tissue to obtain the proportion of the tubular atrophy regions.
Example two
As shown in fig. 7, a second embodiment of the present invention provides a renal tubular atrophy region identification system based on deep learning, including: the device comprises an acquisition unit, a training unit, a detection unit and a calculation unit.
The acquisition unit is used for acquiring images of kidney pathological sections on which the renal tubular atrophy regions have been labeled; the training unit is used for training the instance segmentation network on the images and the corresponding labels. The detection unit is used for acquiring an image to be examined, inputting it into the trained instance segmentation network, and obtaining the bounding-box position of each renal tubular atrophy region in the image; the calculation unit is used for calculating the area and proportion of each renal tubular atrophy region.
Specifically, the training process comprises:
inputting the images into a ResNet101-FPN backbone network for feature extraction to obtain feature maps;
inputting the feature maps into an RPN (region proposal network) to obtain ROI (region of interest) areas;
setting an IOU threshold, resizing the ROI areas through an ROI Align layer to obtain feature maps of fixed ROI size, and performing bounding-box regression through a network output module;
feeding the regressed feature maps whose IOU exceeds the threshold back into the ROI Align layer for resizing into fixed-size ROI feature maps, and performing bounding-box regression through the network output module again;
continually raising the IOU threshold and repeating the ROI resizing and bounding-box regression until the IOU values of all feature maps output by the network output module fall below the threshold, at which point training is complete.
Specifically, the network output module comprises an FCN network and a deconvolution network, and is further used for performing mask calculation and classification.
Further, in this embodiment, the specific method for calculating the area and the ratio of each tubular atrophy region by the calculation unit is as follows:
converting the image into a gray-scale image with the cv2.cvtColor function of the OpenCV-python interface;
binarizing the image with the cv2.threshold function;
finding the contours of the identified tubular atrophy regions with the cv2.findContours function;
calculating the contour areas with the cv2.contourArea function to obtain the area of the tubular atrophy regions, and likewise calculating the total area of the kidney tissue in the image;
dividing the area of the tubular atrophy regions by the total area of the kidney tissue to obtain the proportion of the tubular atrophy regions.
Further, the deep-learning-based renal tubular atrophy region identification system also comprises an output display module for displaying the calculated area and proportion of each renal tubular atrophy region. Fig. 8 shows a detection result displayed by the output display module: each tubular atrophy region is circled in the image, and its area and proportion are displayed.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A renal tubular atrophy region identification method based on deep learning is characterized by comprising the following steps:
s1, acquiring an image of a kidney pathological section subjected to renal tubular atrophy region labeling;
s2, training an example segmentation network based on the image and the corresponding renal tubular atrophy region label; the training process comprises:
s201, inputting the image into a ResNet101-FPN backbone network for feature extraction to obtain a feature map;
s202, inputting the feature map into an RPN network to obtain an ROI (region of interest);
s203, carrying out size processing on the ROI through an ROI Align layer to obtain a feature map with fixed ROI area size, and carrying out frame regression calculation through a network output module;
s204, setting an IOU threshold value, inputting the feature graph after frame regression with the IOU value larger than the IOU threshold value into the ROI Align layer again for size processing to obtain a feature graph with a fixed ROI size, and performing frame regression calculation through a network output module;
s205, continuously increasing the IOU threshold value, repeating the step S204 until the IOU values of all the characteristic graphs output by the network output module are smaller than the IOU threshold value, and finishing training;
s3, obtaining an image to be detected, inputting the image to be detected into the trained example segmentation network, and obtaining the target frame position of each renal tubular atrophy area in the image to be detected;
and S4, calculating the area and the proportion of each renal tubular atrophy region.
2. The deep-learning-based renal tubular atrophy region identification method of claim 1, wherein the specific steps by which the RPN obtains the ROI areas are as follows:
inputting the feature map into the RPN to obtain candidate boxes; mapping the candidate boxes onto the corresponding feature map to obtain a number of candidate ROI (region of interest) areas; sending the candidate ROI areas through the RPN for binary classification and bounding-box regression; and filtering out part of the candidate ROI areas to obtain the final ROI areas.
3. The renal tubular atrophy region identification method based on deep learning of claim 1, wherein the network output module comprises an FCN network and a deconvolution network, and the network output module is further used for mask calculation and classification.
4. The method for identifying an area of renal tubular atrophy based on deep learning of claim 1, wherein the step S4 specifically comprises the steps of:
converting the image into a gray-scale image with the cv2.cvtColor function of the OpenCV-python interface;
binarizing the image with the cv2.threshold function;
finding the contours of the identified tubular atrophy regions with the cv2.findContours function;
calculating the contour areas with the cv2.contourArea function to obtain the area of the tubular atrophy regions, and likewise calculating the total area of the kidney tissue in the image;
dividing the area of the tubular atrophy regions by the total area of the kidney tissue to obtain the proportion of the tubular atrophy regions.
5. The renal tubular atrophy region identification method based on deep learning of claim 1, wherein the ROI Align layer resizes the ROI areas so that each ROI area is fixed at 7×7.
6. A renal tubular atrophy region identification system based on deep learning, comprising:
an acquisition unit: the system is used for acquiring an image of a kidney pathological section subjected to renal tubular atrophy area labeling;
a training unit: training an example segmentation network based on the images and corresponding tubular atrophy region labeling; the training process comprises:
inputting the image into a ResNet101-FPN backbone network for feature extraction to obtain a feature map;
then inputting the feature map into an RPN network to obtain an ROI (region of interest);
setting an IOU threshold, carrying out size processing on the ROIs through the ROI Align layer to obtain feature maps with a fixed ROI size, and carrying out bounding-box regression calculation through the network output module;
inputting the feature maps whose IOU values after bounding-box regression are larger than the IOU threshold into the ROI Align layer again for size processing to obtain feature maps with a fixed ROI size, and performing bounding-box regression calculation through the network output module;
continuously increasing the IOU threshold and repeating the ROI size processing and bounding-box regression calculation until the IOU values of all feature maps output by the network output module are smaller than the IOU threshold, at which point training is finished;
a detection unit: configured to acquire an image to be detected and input it into the trained instance segmentation network, obtaining the target-box position of each renal tubular atrophy region in the image to be detected;
a calculation unit: configured to calculate the area and proportion of each renal tubular atrophy region.
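The training cascade above raises the IOU threshold round by round and compares each regressed box against it. The patent gives no formula for the IOU value; the standard intersection-over-union, shown below as a minimal helper, is an assumption:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# The cascade raises the threshold each round (e.g. 0.5 -> 0.6 -> 0.7),
# re-pooling and re-regressing only boxes whose IOU still exceeds it.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # overlap 50 of union 150 -> 1/3
```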
7. The renal tubular atrophy region identification system based on deep learning of claim 6, wherein the network output module comprises an FCN network and a deconvolution network, and the network output module is further configured to perform mask calculation and classification.
8. The renal tubular atrophy region identification system based on deep learning of claim 6, further comprising:
an output display module: configured to display the calculated area and proportion of each renal tubular atrophy region.
9. The renal tubular atrophy region identification system based on deep learning of claim 6, wherein the specific method for calculating the area and the proportion of each renal tubular atrophy region by the calculation unit is as follows:
converting the image into a gray-scale image through the cv2.cvtColor function in the OpenCV-Python interface;
carrying out binarization of the image using the cv2.threshold function;
finding the contours of the identified tubular atrophy regions using the cv2.findContours function;
calculating the contour area using the cv2.contourArea function to obtain the area of each tubular atrophy region, and calculating the total area of the kidney tissue in the image;
obtaining the proportion of the tubular atrophy region by dividing the area of the tubular atrophy region by the total area of the kidney tissue.
CN202110590551.8A 2021-05-28 2021-05-28 Renal tubular atrophy region identification method and system based on deep learning Active CN113313685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110590551.8A CN113313685B (en) 2021-05-28 2021-05-28 Renal tubular atrophy region identification method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN113313685A CN113313685A (en) 2021-08-27
CN113313685B true CN113313685B (en) 2022-11-29

Family

ID=77375808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110590551.8A Active CN113313685B (en) 2021-05-28 2021-05-28 Renal tubular atrophy region identification method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN113313685B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798416A (en) * 2019-06-20 2020-10-20 太原理工大学 Intelligent glomerulus detection method and system based on pathological image and deep learning
CN112508854A (en) * 2020-11-13 2021-03-16 杭州医派智能科技有限公司 Renal tubule detection and segmentation method based on UNET
CN112712522A (en) * 2020-10-30 2021-04-27 陕西师范大学 Automatic segmentation method for oral cancer epithelial tissue region of pathological image

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US20210104321A1 (en) * 2018-11-15 2021-04-08 Ampel Biosolutions, Llc Machine learning disease prediction and treatment prioritization
CN110473167B (en) * 2019-07-09 2022-06-17 哈尔滨工程大学 Deep learning-based urinary sediment image recognition system and method
US11645753B2 (en) * 2019-11-27 2023-05-09 Case Western Reserve University Deep learning-based multi-site, multi-primitive segmentation for nephropathology using renal biopsy whole slide images

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN111798416A (en) * 2019-06-20 2020-10-20 太原理工大学 Intelligent glomerulus detection method and system based on pathological image and deep learning
CN112712522A (en) * 2020-10-30 2021-04-27 陕西师范大学 Automatic segmentation method for oral cancer epithelial tissue region of pathological image
CN112508854A (en) * 2020-11-13 2021-03-16 杭州医派智能科技有限公司 Renal tubule detection and segmentation method based on UNET

Non-Patent Citations (2)

Title
"Object Detection from Scratch with Deep Supervision"; Zhiqiang Shen et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 20201231; 396-412 *
"Application of Artificial Intelligence in Renal Pathology Diagnosis"; Zhuo Li et al.; Chinese Journal of Kidney Disease Investigation (Electronic Edition); 20200630; Vol. 9, No. 3; 135-137 *


Similar Documents

Publication Publication Date Title
CN108665456B (en) Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence
CN110245657B (en) Pathological image similarity detection method and detection device
CN108133476B (en) Method and system for automatically detecting pulmonary nodules
Cruz et al. Determination of blood components (WBCs, RBCs, and Platelets) count in microscopic images using image processing and analysis
CN111028206A (en) Prostate cancer automatic detection and classification system based on deep learning
US11645753B2 (en) Deep learning-based multi-site, multi-primitive segmentation for nephropathology using renal biopsy whole slide images
CN112380900A (en) Deep learning-based cervical fluid-based cell digital image classification method and system
CN111488921A (en) Panoramic digital pathological image intelligent analysis system and method
CN110796661B (en) Fungal microscopic image segmentation detection method and system based on convolutional neural network
CN108257129A (en) The recognition methods of cervical biopsy region aids and device based on multi-modal detection network
CN115909006B (en) Mammary tissue image classification method and system based on convolution transducer
CN115526834A (en) Immunofluorescence image detection method and device, equipment and storage medium
CN110807754B (en) Fungus microscopic image segmentation detection method and system based on deep semantic segmentation
CN113160175B (en) Tumor lymphatic vessel infiltration detection method based on cascade network
CN115206495A (en) Renal cancer pathological image analysis method and system based on CoAtNet deep learning and intelligent microscopic device
CN111401102A (en) Deep learning model training method and device, electronic equipment and storage medium
CN112801940A (en) Model evaluation method, device, equipment and medium
CN113313685B (en) Renal tubular atrophy region identification method and system based on deep learning
CN112002407A (en) Breast cancer diagnosis device and method based on ultrasonic video
CN115393314A (en) Deep learning-based oral medical image identification method and system
CN114742803A (en) Platelet aggregation detection method combining deep learning and digital image processing algorithm
CN113869124A (en) Deep learning-based blood cell morphology classification method and system
CN116524315A (en) Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method
CN111612755A (en) Lung focus analysis method, device, electronic equipment and storage medium
CN113723441B (en) Intelligent analysis system and method for lip gland pathology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant