CN112819819A - Pneumoconiosis grading method, device, medium and equipment based on deep learning - Google Patents

Pneumoconiosis grading method, device, medium and equipment based on deep learning

Info

Publication number
CN112819819A
CN112819819A (application CN202110218806.8A)
Authority
CN
China
Prior art keywords
lung
feature map
pneumoconiosis
feature
image
Legal status
Pending
Application number
CN202110218806.8A
Other languages
Chinese (zh)
Inventor
景万里
刘兴旺
殷东雷
Current Assignee
Taikang Insurance Group Co Ltd
Original Assignee
Taikang Insurance Group Co Ltd
Application filed by Taikang Insurance Group Co Ltd
Priority to CN202110218806.8A
Publication of CN112819819A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20092 - Interactive image processing based on input by user
    • G06T2207/20104 - Interactive definition of region of interest [ROI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30061 - Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure provide a pneumoconiosis grading method, device, medium and equipment based on deep learning, relating to the technical field of machine learning. The method comprises the following steps: performing feature extraction on a lung image of a patient to generate a corresponding first lung feature map, where the first lung feature map comprises regions of interest (ROIs); performing mask prediction segmentation on the ROIs of the first lung feature map to obtain a second lung feature map; fusing the first lung feature map with the second lung feature map; and determining the pneumoconiosis grade corresponding to the lung image based on the fusion result. The technical solution of the embodiments can improve the accuracy of pneumoconiosis grading, reduce labor cost, and avoid the subjective errors caused by manual detection.

Description

Pneumoconiosis grading method, device, medium and equipment based on deep learning
Technical Field
The present disclosure relates to the field of machine learning technologies, and in particular, to a pneumoconiosis classification method based on deep learning, a pneumoconiosis classification device based on deep learning, a computer-readable medium, and an electronic device.
Background
Pneumoconiosis is a systemic disease characterized mainly by diffuse fibrosis of lung tissue caused by long-term inhalation of productive dust. It is an occupational disease with high incidence and serious harm, so the differential diagnosis of chest radiographs of pneumoconiosis patients has received particular attention.
In one technical solution, a computer-aided detection system is used to calculate and analyze pneumoconiosis chest radiographs, detecting small round or irregular shadows on the radiograph, after which the radiograph is manually compared against International Labour Organization (ILO) standard pneumoconiosis films and graded as a normal or pneumoconiosis chest radiograph.
However, owing to the diversity and complexity of the shadows in different pneumoconiosis chest radiographs, manual comparison is not only inefficient but also makes it difficult to ensure the accuracy of the detection results.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a pneumoconiosis classification method based on deep learning, a pneumoconiosis classification device based on deep learning, a computer-readable medium, and an electronic device, so as to improve the detection efficiency of a chest radiograph image of a pneumoconiosis at least to a certain extent, avoid subjective errors caused by manual detection, and improve the accuracy of a detection result.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the embodiments of the present disclosure, there is provided a pneumoconiosis classification method based on deep learning, including: performing feature extraction on a lung image of a patient to generate a corresponding first lung feature map, wherein the first lung feature map comprises a plurality of regions of interest (ROIs); performing mask prediction segmentation on the ROIs of the first lung feature map to obtain a second lung feature map; fusing the first lung feature map with the second lung feature map; and determining the pneumoconiosis grade corresponding to the lung image based on the fusion result.
In some example embodiments of the present disclosure, the mask prediction segmentation of the ROIs of the first lung feature map comprises: performing mask prediction segmentation on each ROI in the first lung feature map through a fully convolutional network to obtain a plurality of mask regions; determining whether each pixel in the mask regions is greater than or equal to a predetermined threshold; and if a pixel is greater than or equal to the predetermined threshold, determining that the pixel belongs to a lung region.
In some example embodiments of the present disclosure, the fusing of the first lung feature map with the second lung feature map comprises: determining weights for the respective mask regions of the second lung feature map; and, based on the weights, weight-fusing the features of the first lung feature map with the corresponding features of the second lung feature map.
In some example embodiments of the present disclosure, the determining a pneumoconiosis grade corresponding to the lung image based on the fusion result includes: obtaining a fusion lung feature map of the lung image based on the fusion result; and inputting the fused lung feature map into a classification network, and determining the pneumoconiosis grade of each lung region of the lung image through the classification network.
In some example embodiments of the present disclosure, the method further comprises: dividing the ROI of the first lung feature map into a plurality of feature regions; and carrying out normalization processing on each feature region to obtain the normalized first lung feature map.
In some example embodiments of the present disclosure, the performing feature extraction on the lung image of the patient to generate a corresponding first lung feature map includes: obtaining a feature pyramid of the lung image of the patient, the feature pyramid comprising a plurality of first feature layers; performing upsampling processing on each first feature layer of the feature pyramid to generate a corresponding second feature layer; and performing fusion processing on the first feature layers and the second feature layers to generate the corresponding first lung feature map.
In some example embodiments of the present disclosure, the method further comprises: preprocessing an image of the patient's lungs, the preprocessing including rib tissue removal processing and image enhancement processing.
According to a second aspect of the embodiments of the present disclosure, there is provided a pneumoconiosis classification device based on deep learning, including: a feature extraction module for performing feature extraction on a lung image of a patient to generate a corresponding first lung feature map, wherein the first lung feature map comprises a plurality of regions of interest (ROIs); a mask segmentation module for performing mask prediction segmentation on the ROIs of the first lung feature map to obtain a second lung feature map; a feature fusion module for fusing the first lung feature map with the second lung feature map; and a classification module for determining the pneumoconiosis grade corresponding to the lung image based on the fusion result.
In some example embodiments of the present disclosure, the mask segmentation module is further configured to: perform mask prediction segmentation on each ROI in the first lung feature map through a fully convolutional network to obtain a plurality of mask regions; determine whether each pixel in the mask regions is greater than or equal to a predetermined threshold; and if a pixel is greater than or equal to the predetermined threshold, determine that the pixel belongs to a lung region.
In some example embodiments of the present disclosure, the feature fusion module is further configured to: determine weights for the respective mask regions of the second lung feature map; and, based on the weights, weight-fuse the features of the first lung feature map with the corresponding features of the second lung feature map.
In some example embodiments of the present disclosure, the classification module is further to: obtaining a fusion lung feature map of the lung image based on the fusion result; and inputting the fused lung feature map into a classification network, and determining the pneumoconiosis grade of each lung region of the lung image through the classification network.
In some example embodiments of the present disclosure, the apparatus further comprises: a dividing module to divide the ROI of the first lung feature map into a plurality of feature regions; and the normalization module is used for performing normalization processing on each feature region to obtain the normalized first lung feature map.
In some example embodiments of the present disclosure, the feature extraction module is further configured to: obtain a feature pyramid of the lung image of the patient, the feature pyramid comprising a plurality of first feature layers; perform upsampling processing on each first feature layer of the feature pyramid to generate a corresponding second feature layer; and perform fusion processing on the first feature layers and the second feature layers to generate the corresponding first lung feature map.
In some example embodiments of the present disclosure, the apparatus further comprises: and the preprocessing module is used for preprocessing the lung image of the patient, and the preprocessing comprises rib tissue removal processing and image enhancement processing.
According to a third aspect of embodiments of the present disclosure, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the deep learning based pneumoconiosis classification method as described in the first aspect of the embodiments above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the deep learning-based pneumoconiosis classification method as described in the first aspect of the embodiments above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
In the technical solutions of some embodiments of the present disclosure: first, mask prediction segmentation is performed on the ROIs of the first lung feature map to generate a second lung feature map containing a plurality of mask regions; because a mask branch is added to perform the mask prediction segmentation, the accuracy of lung region segmentation is improved. Second, the first lung feature map and the second lung feature map are fused, and the pneumoconiosis grade corresponding to the lung image is determined based on the fusion result; because the mask branch and the classification branch are fused, the accuracy of pneumoconiosis grading is improved. Third, the pneumoconiosis grade of each pneumoconiosis region can be determined automatically, which improves the detection efficiency of pneumoconiosis chest images, reduces labor cost, avoids the subjective errors caused by manual detection, and further improves the accuracy of pneumoconiosis grading.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 shows a schematic diagram of an application scenario of a pneumoconiosis classification method based on deep learning in an exemplary embodiment of the present disclosure;
fig. 2 illustrates a flow diagram of a pneumoconiosis classification method based on deep learning, according to some example embodiments of the present disclosure;
fig. 3 shows a structural schematic of a deep learning network, according to some example embodiments of the present disclosure;
FIG. 4 shows a schematic diagram of a feature extraction network, according to some example embodiments of the present disclosure;
fig. 5 shows a schematic flow diagram of a pneumoconiosis classification method according to further exemplary embodiments of the present disclosure;
FIG. 6 shows a resulting schematic of mask segmentation of a pneumoconiosis image, according to some example embodiments of the present disclosure;
fig. 7 shows a schematic structural diagram of a pneumoconiosis classification device based on deep learning according to an embodiment of the present disclosure;
fig. 8 shows a schematic structural diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In order to clearly explain technical solutions in the embodiments of the present disclosure, before specifically developing and explaining the embodiments of the present disclosure, some terms applied in the embodiments are first described.
PACS (Picture Archiving and Communication Systems): a system that stores, through various interfaces and in digital form, the medical images generated in daily practice (including images from equipment such as MRI, CT, ultrasound, X-ray machines, infrared instruments and microscopy instruments), allows them to be quickly recalled for use under appropriate authorization when needed, and adds auxiliary diagnosis management functions.
ROI (Region Of Interest): in machine vision and image processing, a region to be processed is outlined in the form of a box, a circle, an ellipse, an irregular polygon, or the like from a processed image, and is called a region of interest.
Mask prediction segmentation: a MASK branch predicts and segments the ROIs in a feature map, segmenting out the region where the target is located, i.e., the MASK region.
Hereinafter, a pneumoconiosis classification method based on deep learning in an exemplary embodiment of the present disclosure will be described in detail with reference to the drawings.
Fig. 1 illustrates a schematic diagram of an application scenario provided in accordance with some embodiments of the present disclosure. The application scenario 100 may include: terminal device 110, and server 120. The terminal device 110 may be a Personal Computer (PC), a tablet Computer, a PACS system, or the like.
The terminal device 110 is used to obtain or store a pneumoconiosis chest radiograph of a patient, such as an X-ray pneumoconiosis chest radiograph; in the exemplary embodiment, the terminal device 110 is a PACS system.
Server 120 is configured to provide background services for clients of applications in terminal device 110. For example, the server 120 may be a backend server for the application described above. The server 120 may be a server, a server cluster composed of a plurality of servers, or a cloud computing service center.
Terminal device 110 and server 120 may communicate with each other via network 130. The Network 130 may be a wired Network or a wireless Network, and the Network 130 may be a PSTN (Public Switched Telephone Network) or the internet.
The terminal device 110 sends the pneumoconiosis chest radiograph of the patient to the server 120 through the network 130, the server 120 determines the pneumoconiosis grade of each pneumoconiosis region of the pneumoconiosis chest radiograph by using the deep learning-based pneumoconiosis grading method according to the exemplary embodiment of the present application, returns the obtained grading result to the terminal device 110 through the network 130, and displays the grading result on the terminal device 110.
Fig. 2 illustrates a flow diagram of a pneumoconiosis classification method based on deep learning, according to some example embodiments of the present disclosure. The execution subject of the deep learning-based pneumoconiosis classification method provided by the embodiments of the present disclosure may be a computing device with computing and processing capability, such as the server 120 of fig. 1. The method includes steps S210 to S240, which are described in detail below with reference to the accompanying drawings.
Referring to fig. 2, in step S210, a lung image of a patient is subjected to feature extraction to generate a corresponding first lung feature map, where the first lung feature map includes a plurality of ROIs.
In an example embodiment, a lung image of a patient is acquired, and feature extraction is performed on the lung image of the patient through a feature extraction network to generate a corresponding first lung feature map. Further, multiple ROIs may be set on the first lung feature map, for example, in the lung region of a lung image of a patient.
For example, a lung image of a patient, such as an X-ray pneumoconiosis chest radiograph, is acquired from an imaging device or a PACS system, and feature extraction is performed on the lung image through a residual network (ResNet) and a feature pyramid network (FPN) to generate a corresponding first lung feature map. A plurality of ROIs are extracted in the lung region of the first lung feature map by an RPN (Region Proposal Network), and the extracted ROIs are mapped onto the first lung feature map.
In step S220, a mask prediction segmentation is performed on the ROI of the first lung feature map to obtain a second lung feature map.
In an exemplary embodiment, the MASK prediction segmentation is performed on each ROI of the first lung feature map by a MASK prediction network, resulting in a second lung feature map comprising a plurality of MASK regions.
For example, assuming the MASK prediction network is an FCN (Fully Convolutional Network), each ROI of the first lung feature map is predictively segmented by the FCN, and an m × m MASK region is predicted from each ROI to generate the second lung feature map.
Further, in an example embodiment, it is determined whether each pixel of the MASK region is greater than or equal to a predetermined threshold; if so, the pixel is determined to belong to the lung region, and if it is less than the predetermined threshold, the pixel is determined to belong to a non-lung region. For example, assuming the predetermined threshold is 0.5, whether each pixel of the MASK region is greater than or equal to 0.5 is determined through the sigmoid function; if the pixel is greater than or equal to 0.5, the pixel belongs to the lung region, and if it is less than 0.5, the pixel belongs to a non-lung region.
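As a concrete illustration of this thresholding step, the following PyTorch-style sketch (a minimal, hypothetical example; the patent publishes no code, and the function name and tensor shapes are assumptions) applies the sigmoid function to the mask logits produced by the FCN branch and binarizes at the 0.5 threshold:

```python
import torch

def binarize_mask(mask_logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Map raw m x m mask logits to a binary lung / non-lung mask.

    mask_logits: tensor of shape (num_rois, m, m) from the FCN mask branch.
    Returns a float tensor of the same shape, 1.0 for lung pixels.
    """
    probs = torch.sigmoid(mask_logits)      # per-pixel probability in [0, 1]
    return (probs >= threshold).float()     # >= 0.5 -> lung, < 0.5 -> non-lung
```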
In step S230, the first lung feature map and the second lung feature map are fused.
In an exemplary embodiment, features in the first lung feature map are fused with features in the second lung feature map. For example, the fusion may be performed by concatenation (concat), e.g., concatenating an ROI in the first lung feature map with a mask region of the second lung feature map.
It should be noted that although the concat mode is taken as an example here, those skilled in the art will understand that the fusion may also be performed in other suitable modes, such as the add mode, which likewise fall within the scope of the present disclosure.
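The two fusion modes mentioned above can be illustrated with the following hypothetical sketch; the function name and the assumption that both feature maps share the same (N, C, H, W) shape are illustrative, not fixed by the patent:

```python
import torch

def fuse_features(f1: torch.Tensor, f2: torch.Tensor, mode: str = "concat") -> torch.Tensor:
    """Fuse first-feature-map features (f1) with mask-branch features (f2)."""
    if mode == "concat":
        return torch.cat([f1, f2], dim=1)   # channel-wise concat -> (N, 2C, H, W)
    if mode == "add":
        return f1 + f2                      # element-wise sum -> (N, C, H, W)
    raise ValueError(f"unknown fusion mode: {mode}")
```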
In step S240, a pneumoconiosis grade corresponding to the lung image is determined based on the fusion result.
In an exemplary embodiment, a fused lung feature map of the lung image is obtained based on the fusion result, the fused lung feature map comprising a fused feature of the first lung feature map and the second lung feature map; and inputting the fused lung feature map into a classification network, and determining the pneumoconiosis grade of each lung region of the lung image through the classification network.
For example, in an exemplary embodiment, the fused lung feature map is input into fully connected layers, the features in the fused lung feature map are weighted by the fully connected layers, the result of the weighting operation is input into the softmax function, the class to which each lung region belongs, e.g., stage 0, stage 1, stage 2 or stage 3 pneumoconiosis, is determined by the softmax function, and the classification probability vector cls_prob is output.
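A minimal sketch of such a classification branch is shown below, assuming a PyTorch implementation; the hidden width of 1024 and the module name are assumptions for illustration, since the patent does not fix them:

```python
import torch
import torch.nn as nn

class GradingHead(nn.Module):
    """Fully connected layers followed by softmax over pneumoconiosis stages 0-3."""

    def __init__(self, in_features: int, num_classes: int = 4):
        super().__init__()
        self.fc1 = nn.Linear(in_features, 1024)   # hidden width is an assumption
        self.fc2 = nn.Linear(1024, num_classes)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.fc1(fused.flatten(start_dim=1)))
        return torch.softmax(self.fc2(x), dim=1)  # cls_prob: one probability per stage
```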
According to the technical solution in the exemplary embodiment of fig. 2: first, mask prediction segmentation is performed on the ROIs of the first lung feature map to generate a second lung feature map containing a plurality of mask regions; because a mask branch is added to perform the mask prediction segmentation, the accuracy of lung region segmentation is improved. Second, the first lung feature map and the second lung feature map are fused, and the pneumoconiosis grade corresponding to the lung image is determined based on the fusion result; because the mask branch and the classification branch are fused, the accuracy of pneumoconiosis grading is improved. Third, the pneumoconiosis grade of each pneumoconiosis region can be determined automatically, which improves the detection efficiency of pneumoconiosis chest images, reduces labor cost, avoids the subjective errors caused by manual detection, and further improves the accuracy of pneumoconiosis grading.
Further, in order to make the pneumoconiosis classification result more accurate and more consistent with the diagnostic logic of doctors, weights for the respective lung regions of the pneumoconiosis image are set in advance. Thus, in an exemplary embodiment, fusing the first lung feature map with the second lung feature map comprises: determining weights for the mask regions of the second lung feature map; and, based on the weights, weight-fusing the features of the first lung feature map with the corresponding features of the second lung feature map. For example, the weighted fusion, such as a weighted fusion of an ROI in the first lung feature map with the mask regions of the second lung feature map, may be performed by concatenation (concat), as shown in the sketch below.
Assigning different weight coefficients to the different sub-regions (mask regions) of the pneumoconiosis image makes the result better match the diagnostic logic of doctors, further improving the accuracy of pneumoconiosis grading.
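Under the same illustrative assumptions as above (one preset scalar weight per mask region, concat fusion), the weighted fusion might be sketched as follows; it is a hypothetical rendering, not the patent's code:

```python
import torch

def weighted_fuse(roi_feats: torch.Tensor,
                  mask_feats: torch.Tensor,
                  region_weights: torch.Tensor) -> torch.Tensor:
    """Scale each mask region's features by its preset weight, then concat.

    roi_feats, mask_feats: (N, C, H, W); region_weights: (N,), one preset
    weight per lung/mask region, reflecting the doctor's diagnostic emphasis.
    """
    w = region_weights.view(-1, 1, 1, 1)                 # broadcast over C, H, W
    return torch.cat([roi_feats, w * mask_feats], dim=1)
```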
Fig. 3 illustrates a structural schematic of a deep learning network, according to some example embodiments of the present disclosure.
Referring to fig. 3, the feature extraction network 310 extracts multi-layer features from the pneumoconiosis image of the patient, for example the four feature layers (P2, P3, P4, P5), and fuses the feature layers by upsampling to generate a first pneumoconiosis feature map.
The convolution layer 315 convolves the first pneumoconiosis feature map, for example applying a 3 × 3 convolution kernel to the fused first pneumoconiosis feature map. This convolution eliminates the aliasing effect introduced by upsampling and merging.
The normalization layer 320 normalizes the ROI regions of the first pneumoconiosis feature map. For example, each ROI region of the first pneumoconiosis feature map is divided into a 3 × 3 grid, and max pooling is applied to each cell of the grid to produce a fixed-length output.
The MASK prediction network 325 is configured to perform MASK prediction segmentation on each ROI of the normalized first pneumoconiosis feature map to obtain a second pneumoconiosis feature map, where the second pneumoconiosis feature map includes a plurality of MASK regions 330. For example, the MASK prediction network 325 performs MASK prediction segmentation on each ROI of the normalized first pneumoconiosis feature map through the full convolution network FCN, and predicts an m × m size MASK region 330 from each ROI.
The classification network 340 performs fusion processing on the features of the first pneumoconiosis feature map and the second pneumoconiosis feature map, and classifies each pneumoconiosis region of the pneumoconiosis image based on the fused features. The classification network 340 includes a fully connected layer 342, a fully connected layer 344, and a softmax network 346, for example, the classification network 340 fuses the features of the first pneumoconiosis feature map and the second pneumoconiosis feature map, performs a weighting operation on the fused features through the fully connected layer 342 and the fully connected layer 344, inputs the weighted feature vectors into the softmax network 346, and determines the grades, such as 0, 1, 2, and 3, of each pneumoconiosis region of the pneumoconiosis image through the softmax network 346.
According to the technical solution in the exemplary embodiment of fig. 3: first, segmentation and classification of pneumoconiosis images are achieved by a deep network structure, solving the problems of local convergence and false-positive segmentation encountered when segmenting lung tissue with conventional methods; second, adding a mask segmentation branch and fusing the mask-branch features with the classification-branch features improves the accuracy of pneumoconiosis classification and reduces the number of feature extraction layers; third, the regression branch for target detection boxes is removed, which greatly improves the speed of classification and segmentation.
Further, in an example embodiment, the deep network structure of fig. 3 is trained with the Focal Loss function. Focal Loss reduces the loss contribution of easily classified samples and increases that of hard samples, so that the model focuses on learning the hard samples, which alleviates the imbalanced distribution of pneumoconiosis data.
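For reference, a standard multi-class Focal Loss of the kind invoked here can be sketched as below; the gamma and alpha defaults follow the commonly used values from the original Focal Loss paper and are not specified by this patent:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    """Focal loss: down-weights easy samples, emphasizes hard ones.

    logits: (N, num_classes); targets: (N,) integer stage labels.
    """
    log_probs = F.log_softmax(logits, dim=1)
    pt = log_probs.exp().gather(1, targets.unsqueeze(1)).squeeze(1)   # prob of true class
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()
```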
Fig. 4 shows a schematic diagram of a feature extraction network, according to some example embodiments of the present disclosure.
Referring to fig. 4, the feature extraction network ResNet-FPN on the left comprises three parts: a bottom-up connection part, a top-down connection part, and a lateral connection part. These parts are described below with reference to the drawing.
(1) Bottom-up connection
Referring to fig. 4, the bottom-up connection part extracts features from the pneumoconiosis image through convolution layers 1 to 5, from bottom to top. That is, with the residual network ResNet as the backbone, the bottom-up feature extraction process is divided into five stages according to feature map size. The features conv2, conv3, conv4 and conv5 output by the last layer of stage 2, stage 3, stage 4 and stage 5 are defined as C2, C3, C4 and C5, respectively, and the strides of the feature layers C2, C3, C4 and C5 relative to the original pneumoconiosis image are {4, 8, 16, 32}. Note that in some embodiments the stage 1 feature conv1 is not used, for memory reasons.
(2) Top-down connection and lateral connection
The top-down part upsamples the feature map starting from the highest level. Nearest-neighbor upsampling is used directly: on the one hand, it is simple to compute; on the other hand, it adds no training parameters.
The lateral connection part fuses the upsampled result with the bottom-up feature map of the same size. That is, each of the feature layers C2, C3, C4 and C5 undergoes a 1 × 1 convolution (the 1 × 1 convolution is used to reduce the number of channels), the output channels are all set to the same 256 channels, and the result is then summed with the corresponding upsampled feature map to generate the feature layers M2, M3, M4 and M5.
Further, after the merging, the merged feature layers M2, M3, M4 and M5 are convolved with a 3 × 3 convolution kernel to generate the feature layers P2, P3, P4 and P5; the purpose of the 3 × 3 convolution is to eliminate the aliasing effect of upsampling.
Furthermore, in an example embodiment, the FPN network further includes 3 × 3 and 1 × 1 convolution layers, through which the category features and position features of the ROIs in the lung image are extracted; a category feature may be a thickness or shape feature of a pneumoconiosis image shadow, and a position feature may be the position of the shadow.
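Putting parts (1) and (2) together, a minimal FPN sketch might look as follows; the channel counts of C2 to C5 (256 to 2048, typical of a ResNet backbone) are assumptions for illustration, and the sketch is not the patent's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    """Top-down pathway with lateral 1x1 convs and 3x3 smoothing, as described above."""

    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels: int = 256):
        super().__init__()
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_channels, 1) for c in in_channels])
        self.smooth = nn.ModuleList([nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                     for _ in in_channels])

    def forward(self, c2, c3, c4, c5):
        laterals = [lat(c) for lat, c in zip(self.lateral, (c2, c3, c4, c5))]
        m5 = laterals[3]
        m4 = laterals[2] + F.interpolate(m5, scale_factor=2, mode="nearest")
        m3 = laterals[1] + F.interpolate(m4, scale_factor=2, mode="nearest")
        m2 = laterals[0] + F.interpolate(m3, scale_factor=2, mode="nearest")
        # 3x3 convolutions remove the aliasing introduced by upsampling
        return [s(m) for s, m in zip(self.smooth, (m2, m3, m4, m5))]  # P2..P5
```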
Fig. 5 shows a flow diagram of a pneumoconiosis classification method according to further example embodiments of the present disclosure.
Referring to fig. 5, in step S510, an image of a pneumoconiosis of a patient is acquired.
In an exemplary embodiment, an X-ray pneumoconiosis chest radiograph of the patient is acquired, for example, through an imaging device or a PACS system.
In step S520, the pneumoconiosis image of the patient is pre-processed.
In an exemplary embodiment, the preprocessing includes rib tissue removal and image enhancement; that is, rib tissue removal and image enhancement are performed on the X-ray pneumoconiosis chest radiograph of the patient.
In step S530, feature extraction is performed on the preprocessed pneumoconiosis image, and a first pneumoconiosis feature map is generated.
In an example embodiment, feature extraction is performed on the preprocessed pneumoconiosis image through a ResNet-FPN network to obtain a plurality of feature layers, and fusion processing is performed on each feature layer to generate a first pneumoconiosis feature map.
For example, feature extraction is performed on the preprocessed pneumoconiosis image through a ResNet-FPN network to obtain a plurality of first feature layers, such as C2, C3, C4 and C5, upsampling is performed on each first feature layer to obtain corresponding second feature layers M2, M3, M4 and M5, the first feature layers and the second feature layers are subjected to fusion processing to obtain four feature layers of P2, P3, P4 and P5, wherein the channel number of each feature layer is 256.
Further, the four feature layers (P2, P3, P4, P5) are fused by concat to obtain the first pneumoconiosis feature map F, where F = C(P2, P3, P4, P5) = P2 || Up×2(P3) || Up×4(P4) || Up×8(P5), in which "||" denotes concat and Up×2, Up×4 and Up×8 denote 2-fold, 4-fold and 8-fold upsampling, respectively. The first pneumoconiosis feature map F is then fed into a convolution block, e.g., a Conv(3,3)-BN-ReLU layer, and the number of channels of the first pneumoconiosis feature map is changed to 256.
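A hypothetical sketch of this multi-scale concat and channel reduction is given below; the function name and the Conv(3,3)-BN-ReLU block are assumptions consistent with the description above, not the patent's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Conv(3,3)-BN-ReLU block reducing the concatenated 4 x 256 channels back to 256
conv_bn_relu = nn.Sequential(
    nn.Conv2d(1024, 256, kernel_size=3, padding=1),
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True),
)

def build_first_feature_map(p2, p3, p4, p5):
    """F = P2 || Up×2(P3) || Up×4(P4) || Up×8(P5), then Conv(3,3)-BN-ReLU."""
    f = torch.cat([
        p2,
        F.interpolate(p3, scale_factor=2, mode="nearest"),
        F.interpolate(p4, scale_factor=4, mode="nearest"),
        F.interpolate(p5, scale_factor=8, mode="nearest"),
    ], dim=1)
    return conv_bn_relu(f)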
In step S540, the ROI in the first pneumoconiosis feature map is normalized.
In an exemplary embodiment, since the lungs of pneumoconiosis patients vary somewhat from patient to patient, normalized Align processing is required for the ROIs in the lung image of the patient. For example, the ROI of the lung image of the pneumoconiosis patient, i.e., the feature map region of the lung region, is divided into a 3 × 3 grid, and max pooling is performed on each cell of the grid to realize a fixed-length output.
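In PyTorch terms, this fixed-length 3 × 3 grid pooling can be sketched with adaptive max pooling; this is a simplification offered as an illustration under stated assumptions, not the exact ROI Align of the patent:

```python
import torch
import torch.nn.functional as F

def normalize_roi(roi_feats: torch.Tensor, grid: int = 3) -> torch.Tensor:
    """Divide ROI features into a grid x grid mesh and max-pool each cell.

    roi_feats: (N, C, H, W) cropped ROI features of varying H, W.
    Returns (N, C, grid, grid): a fixed-length output regardless of ROI size.
    """
    return F.adaptive_max_pool2d(roi_feats, output_size=(grid, grid))
```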
In step S550, the normalized first pneumoconiosis feature map is subjected to mask prediction segmentation to generate a second pneumoconiosis feature map.
In an example embodiment, in the MASK prediction branch, each ROI of the first pneumoconiosis feature map is MASK-prediction-segmented using an FCN network that predicts an m × m MASK region from each ROI, so that each layer in the MASK branch maintains an m × m spatial layout.
Further, it is determined whether each pixel of the MASK region is greater than or equal to a predetermined threshold; if so, the pixel is determined to belong to the lung region, and if it is less than the predetermined threshold, the pixel is determined to belong to a non-lung region. For example, assuming the predetermined threshold is 0.5, whether each pixel of the MASK region is greater than or equal to 0.5 is determined through the sigmoid function; if the pixel is greater than or equal to 0.5, it belongs to the lung region, and if it is less than 0.5, it belongs to a non-lung region.
In step S560, the first pneumoconiosis feature map and the second pneumoconiosis feature map are fused, and the pneumoconiosis images are classified based on the fusion result.
In an exemplary embodiment, the obtained first pneumoconiosis feature map and the second pneumoconiosis feature map of the MASK prediction branch are fused, the fused feature map is input into a fully connected layer, the specific category (such as stage 0, stage 1, stage 2, stage 3, etc.) to which each lung region belongs is determined through the fully connected layer and a softmax layer, and the softmax layer outputs the cls_prob probability vectors corresponding to the pneumoconiosis grades of the lung regions of the pneumoconiosis image.
Further, in an example embodiment, after the classification result of the pneumoconiosis image is obtained, it is displayed on the terminal device 110, such as a PACS system, to assist the doctor in diagnosis. If a doctor at a secondary medical institution encounters a difficult case, the classification result of the pneumoconiosis image of that case is uploaded to the server 120, such as a multi-modal image AI auxiliary diagnosis cloud platform, so that a doctor at a tertiary medical institution can review it, and the review result is fed back to the doctor at the secondary medical institution through the cloud platform.
According to the technical solution in the exemplary embodiment of fig. 5: first, mask prediction segmentation is performed on the ROIs of the first lung feature map to generate a second lung feature map containing a plurality of mask regions, and adding the mask branch network improves the accuracy of lung region segmentation. Second, the first lung feature map and the second lung feature map are fused, and the pneumoconiosis grade corresponding to the mask regions of the lung image is determined based on the fusion result; fusing the mask-branch and classification-branch features improves the accuracy of pneumoconiosis grading. Third, the pneumoconiosis grade of each pneumoconiosis region can be determined automatically, which reduces labor cost, avoids the subjective errors caused by manual detection, and further improves the accuracy of pneumoconiosis grading.
In an example embodiment, the segmentation result of the MASK prediction segmentation of an embodiment of the present disclosure may be evaluated using the mean intersection-over-union (mean IoU), calculated as shown in the following formula (1):

$$\mathrm{mIoU} = \frac{1}{k+1}\sum_{i=0}^{k}\frac{n_{ii}}{\sum_{j=0}^{k} n_{ij} + \sum_{j=0}^{k} n_{ji} - n_{ii}} \tag{1}$$

where i denotes a label value, j denotes a predicted value, and n_{ji} denotes the number of pixels whose label value is i and whose predicted value is j. Formula (1) is the ratio of the intersection to the union of the label values and the predicted values; equivalently, the per-class ratio can be written in terms of true positives, false positives and false negatives as TP / (TP + FP + FN).
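A direct NumPy rendering of formula (1), assuming integer class maps for label and prediction (a sketch for illustration, not the patent's evaluation code):

```python
import numpy as np

def mean_iou(label: np.ndarray, pred: np.ndarray, num_classes: int = 2) -> float:
    """Mean intersection-over-union per formula (1): IoU = TP / (TP + FP + FN)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(label == c, pred == c).sum()
        union = np.logical_or(label == c, pred == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```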
Fig. 6 shows a resulting schematic of mask segmentation of a pneumoconiosis image, according to some example embodiments of the present disclosure.
Referring to fig. 6, the left image in fig. 6 is the original pneumoconiosis image, the middle image is a schematic of the segmentation result of the U-Net network, and the right image is a schematic of the segmentation result of the MASK prediction segmentation network according to the technical solution in the example embodiments of the present disclosure. It can be seen that the right image shows a better segmentation result than the middle image.
Table 1 below shows a comparison table of the pneumoconiosis four classification results for the solution of the examples of the present disclosure with several prior art solutions.
TABLE 1 Comparison of pneumoconiosis four-classification results

Method                                                 F1 score
InceptionV2 network                                    70.4%
ResNet-34 network                                      71.2%
Deep learning network of the disclosed embodiments     72.1%
Referring to table 1 above, the deep learning network according to the embodiments of the present disclosure achieves an F1 score of 72.1% on the four-class task, higher than the F1 scores of the InceptionV2 and ResNet-34 networks, indicating that the deep learning network of the disclosed embodiments performs better.
It is noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present disclosure and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Embodiments of the apparatus of the present disclosure are described below, which may be used to perform the above-described deep learning based pneumoconiosis classification method of the present disclosure.
Fig. 7 shows a schematic structural diagram of a pneumoconiosis classification device based on deep learning according to an embodiment of the present disclosure.
Referring to fig. 7, the apparatus 700 for classifying pneumoconiosis based on deep learning includes: a feature extraction module 710 for performing feature extraction on a lung image of a patient to generate a corresponding first lung feature map, wherein the first lung feature map comprises a plurality of regions of interest (ROIs); a mask segmentation module 720 for performing mask prediction segmentation on the ROIs of the first lung feature map to obtain a second lung feature map; a feature fusion module 730 for fusing the first lung feature map with the second lung feature map; and a classification module 740 for determining the pneumoconiosis grade corresponding to the lung image based on the fusion result.
In some example embodiments of the present disclosure, the mask segmentation module 720 is further configured to: perform mask prediction segmentation on each ROI in the first lung feature map through a fully convolutional network to obtain a plurality of mask regions; determine whether each pixel in the mask regions is greater than or equal to a predetermined threshold; and if a pixel is greater than or equal to the predetermined threshold, determine that the pixel belongs to a lung region.
In some example embodiments of the present disclosure, the feature fusion module 730 is further configured to: determine weights for the respective mask regions of the second lung feature map; and, based on the weights, weight-fuse the features of the first lung feature map with the corresponding features of the second lung feature map.
In some example embodiments of the present disclosure, the classification module 740 is further configured to: obtaining a fusion lung feature map of the lung image based on the fusion result; and inputting the fused lung feature map into a classification network, and determining the pneumoconiosis grade of each lung region of the lung image through the classification network.
In some example embodiments of the present disclosure, the apparatus 700 further comprises: a dividing module to divide the ROI of the first lung feature map into a plurality of feature regions; and the normalization module is used for performing normalization processing on each feature region to obtain the normalized first lung feature map.
In some example embodiments of the present disclosure, the feature extraction module 710 is further configured to: obtain a feature pyramid of the lung image of the patient, the feature pyramid comprising a plurality of first feature layers; perform upsampling processing on each first feature layer of the feature pyramid to generate a corresponding second feature layer; and perform fusion processing on the first feature layers and the second feature layers to generate the corresponding first lung feature map.
In some example embodiments of the present disclosure, the apparatus 700 further comprises: and the preprocessing module is used for preprocessing the lung image of the patient, and the preprocessing comprises rib tissue removal processing and image enhancement processing.
For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the above-mentioned pneumoconiosis classifying method based on deep learning of the present disclosure for details not disclosed in the embodiments of the apparatus of the present disclosure.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer storage medium capable of implementing the above method. On which a program product capable of implementing the above-described method of the present specification is stored. In some possible embodiments, various aspects of the present disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary embodiments of the present disclosure described in the "exemplary methods" section above of this specification when the program product is run on the terminal device.
The program product may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product described above may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
An electronic device 800 according to this embodiment of the disclosure is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 8, electronic device 800 is in the form of a general purpose computing device. The components of the electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one memory unit 820, and a bus 830 that couples the various system components including the memory unit 820 and the processing unit 810.
Wherein the storage unit stores program code that can be executed by the processing unit 810, so that the processing unit 810 performs the steps according to various exemplary embodiments of the present disclosure described in the "exemplary method" section above of this specification. For example, the processing unit 810 may perform the steps shown in fig. 2: step S210, performing feature extraction on a lung image of a patient to generate a corresponding first lung feature map, where the first lung feature map comprises a plurality of ROIs; step S220, performing mask prediction segmentation on the ROIs of the first lung feature map to obtain a second lung feature map; step S230, fusing the first lung feature map with the second lung feature map; and step S240, determining the pneumoconiosis grade corresponding to the lung image based on the fusion result.
For example, the processing unit 810 may further perform the pneumoconiosis classification method based on deep learning in the embodiment of the above manner.
The storage unit 820 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 8201 and/or a cache memory unit 8202, and may further include a read-only memory unit (ROM) 8203.
The storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 830 may be any of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, and a processing unit or local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 890 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through the input/output (I/O) interface 850, which may be coupled to the display unit 840. Also, the electronic device 800 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 860. As shown, the network adapter 860 communicates with the other modules of the electronic device 800 via the bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A pneumoconiosis classification method based on deep learning is characterized by comprising the following steps:
performing feature extraction on a lung image of a patient to generate a corresponding first lung feature map, wherein the first lung feature map comprises a plurality of regions of interest (ROIs);
performing mask prediction segmentation on the ROI of the first lung feature map to obtain a second lung feature map;
fusing the first lung feature map with the second lung feature map;
and determining the pneumoconiosis grade corresponding to the lung image based on the fusion result.
2. The method of claim 1, wherein performing mask prediction segmentation on the ROIs of the first lung feature map comprises:
performing mask prediction segmentation on each ROI in the first lung feature map through a full convolution network to obtain a plurality of mask regions;
determining whether the value of each pixel in the mask regions is greater than or equal to a predetermined threshold; and
if the pixel value is greater than or equal to the predetermined threshold, determining that the pixel belongs to a lung region.
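By way of illustration, claim 2 can be read as a small fully convolutional mask head followed by per-pixel thresholding. The sketch below is an assumption-laden illustration, not the claimed network: the channel count (256), the ROI crop size (14x14), and the default threshold of 0.5 are all hypothetical.

import torch
import torch.nn as nn

class MaskHead(nn.Module):
    # Hypothetical full convolution mask predictor applied to each ROI feature crop.
    def __init__(self, in_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, 1, 1),   # single lung-foreground channel
        )

    def forward(self, roi_feats, threshold=0.5):
        probs = torch.sigmoid(self.net(roi_feats))   # per-pixel lung probability
        return (probs >= threshold).float()          # pixels at or above the threshold belong to the lung region

# Three ROI feature crops of assumed shape (256, 14, 14) -> three binary mask regions.
masks = MaskHead()(torch.randn(3, 256, 14, 14))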
3. The method of claim 2, wherein fusing the first lung feature map with the second lung feature map comprises:
determining weights for the respective mask regions of the second lung feature map;
and performing, based on the weights, weighted fusion of the features of the first lung feature map with the corresponding features of the second lung feature map.
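As an illustration of claim 3, one simple reading is a scalar weight per mask region, broadcast over the corresponding features. The weighting scheme below is a hypothetical sketch; the claim does not fix how the weights are determined.

import torch

def weighted_fuse(first_map, second_map, weights):
    # first_map, second_map: (N, C, H, W) per-ROI features; weights: (N,), one weight per mask region.
    w = weights.view(-1, 1, 1, 1)          # broadcast each region's weight over channels and pixels
    return first_map + w * second_map      # weighted fusion of corresponding features

fused = weighted_fuse(torch.randn(3, 256, 14, 14),
                      torch.randn(3, 256, 14, 14),
                      torch.tensor([0.9, 0.5, 0.2]))   # illustrative weights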
4. The method of claim 1, wherein determining the pneumoconiosis grade corresponding to the lung image based on the fusion result comprises:
obtaining a fused lung feature map of the lung image based on the fusion result;
and inputting the fused lung feature map into a classification network, and determining the pneumoconiosis grade of each lung region of the lung image through the classification network.
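For claim 4, a minimal stand-in for the classification network is a pooling head over the fused lung feature map. The grade count of four (e.g., no pneumoconiosis plus stages I-III) and the channel count are assumptions of this illustration; the claim itself does not fix them.

import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),   # pool each lung region's fused feature map
    nn.Flatten(),
    nn.Linear(256, 4),         # assumed: 4 grade classes
)
fused_feats = torch.randn(3, 256, 14, 14)        # 3 lung regions (shapes assumed)
grades = classifier(fused_feats).argmax(dim=1)   # predicted grade index per lung region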
5. The method of claim 1, further comprising:
dividing the ROI of the first lung feature map into a plurality of feature regions;
and carrying out normalization processing on each feature region to obtain the normalized first lung feature map.
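Claim 5 can be illustrated by splitting an ROI's feature crop into a grid of sub-regions and normalizing each independently. Zero-mean, unit-variance normalization and a 2x2 grid are assumptions of this sketch; the claim does not name a particular normalization.

import torch

def normalize_regions(roi_feat, grid=2, eps=1e-5):
    # roi_feat: (C, H, W) with H and W divisible by `grid`.
    c, h, w = roi_feat.shape
    out = roi_feat.clone()
    gh, gw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            region = out[:, i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            region.sub_(region.mean()).div_(region.std() + eps)   # normalize this feature region in place
    return out

normed = normalize_regions(torch.randn(256, 14, 14), grid=2)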
6. The method according to any one of claims 1 to 5, wherein performing feature extraction on the lung image of the patient to generate a corresponding first lung feature map comprises:
obtaining a feature pyramid of the lung image of the patient, the feature pyramid comprising a plurality of first feature layers;
performing upsampling processing on each first feature layer of the feature pyramid to generate a corresponding second feature layer;
and carrying out fusion processing on the first characteristic layer and the second characteristic layer to generate a corresponding first lung characteristic map.
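Claim 6 reads like a feature-pyramid (FPN-style) top-down pass: each coarser fused map is upsampled into a second feature layer and added to the next lateral first feature layer. The sketch below assumes equal channel counts across levels and nearest-neighbour upsampling; both choices are illustrative.

import torch
import torch.nn.functional as F

def fuse_pyramid(first_layers):
    # first_layers: coarsest-to-finest feature maps with a shared channel count.
    fused = [first_layers[0]]
    for lateral in first_layers[1:]:
        upsampled = F.interpolate(fused[-1], size=lateral.shape[-2:], mode="nearest")  # second feature layer
        fused.append(lateral + upsampled)   # fuse first and second feature layers
    return fused

pyramid = [torch.randn(1, 256, s, s) for s in (8, 16, 32)]  # coarse -> fine (sizes assumed)
maps = fuse_pyramid(pyramid)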
7. The method of claim 1, further comprising:
preprocessing the lung image of the patient, the preprocessing comprising rib tissue removal processing and image enhancement processing.
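For claim 7's preprocessing, contrast-limited adaptive histogram equalization (CLAHE) is one common choice of image enhancement, while rib tissue removal is typically model-based, so the sketch leaves it as a caller-supplied hook. Both choices are assumptions of this illustration, not the disclosure's stated method.

import cv2
import numpy as np

def preprocess(lung_image, rib_suppressor=None):
    # lung_image: 8-bit grayscale chest radiograph.
    if rib_suppressor is not None:               # rib tissue removal (model supplied by caller)
        lung_image = rib_suppressor(lung_image)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(lung_image)               # image enhancement via CLAHE

enhanced = preprocess(np.random.randint(0, 255, (512, 512), dtype=np.uint8))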
8. A pneumoconiosis classification device based on deep learning, comprising:
the feature extraction module is used for performing feature extraction on the lung image of the patient to generate a corresponding first lung feature map, wherein the first lung feature map comprises a plurality of regions of interest (ROIs);
the mask segmentation module is used for performing mask prediction segmentation on the ROI of the first lung feature map to obtain a second lung feature map;
the feature fusion module is used for fusing the first lung feature map with the second lung feature map;
and the classification module is used for determining the pneumoconiosis grade corresponding to the lung image based on the fusion result.
9. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the deep learning based pneumoconiosis classification method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
a storage device to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the deep learning based pneumoconiosis classification method of any one of claims 1 to 7.
CN202110218806.8A 2021-02-26 2021-02-26 Pneumoconiosis grading method, device, medium and equipment based on deep learning Pending CN112819819A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110218806.8A CN112819819A (en) 2021-02-26 2021-02-26 Pneumoconiosis grading method, device, medium and equipment based on deep learning

Publications (1)

Publication Number Publication Date
CN112819819A true CN112819819A (en) 2021-05-18

Family

ID=75864145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110218806.8A Pending CN112819819A (en) 2021-02-26 2021-02-26 Pneumoconiosis grading method, device, medium and equipment based on deep learning

Country Status (1)

Country Link
CN (1) CN112819819A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180240235A1 (en) * 2017-02-23 2018-08-23 Zebra Medical Vision Ltd. Convolutional neural network for segmentation of medical anatomical images
CN110310281A (en) * 2019-07-10 2019-10-08 重庆邮电大学 Lung neoplasm detection and dividing method in a kind of Virtual Medical based on Mask-RCNN deep learning
CN110599448A (en) * 2019-07-31 2019-12-20 浙江工业大学 Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network
CN111798437A (en) * 2020-07-09 2020-10-20 兴义民族师范学院 Novel coronavirus pneumonia AI rapid diagnosis method based on CT image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Yunji et al. (eds.): "AI Computing Systems" (智能计算系统), China Machine Press (机械工业出版社), pages 74-79 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409306A (en) * 2021-07-15 2021-09-17 推想医疗科技股份有限公司 Detection device, training method, training device, equipment and medium
CN113610785A (en) * 2021-07-26 2021-11-05 安徽理工大学 Pneumoconiosis early warning method and device based on intelligent image and storage medium
CN114463278A (en) * 2022-01-10 2022-05-10 东莞理工学院 Lung area pneumoconiosis staging system based on deep learning and digital image combination
CN114463278B (en) * 2022-01-10 2024-05-28 东莞理工学院 Deep learning and digital image combination-based pneumoconiosis stage system for lung region
CN114219807A (en) * 2022-02-22 2022-03-22 成都爱迦飞诗特科技有限公司 Mammary gland ultrasonic examination image grading method, device, equipment and storage medium
CN114219807B (en) * 2022-02-22 2022-07-12 成都爱迦飞诗特科技有限公司 Mammary gland ultrasonic examination image grading method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112819819A (en) Pneumoconiosis grading method, device, medium and equipment based on deep learning
CN108319605B (en) Structured processing method and system for medical examination data
CN112906502A (en) Training method, device and equipment of target detection model and storage medium
CN107274402A (en) A kind of Lung neoplasm automatic testing method and system based on chest CT image
CN109934229B (en) Image processing method, device, medium and computing equipment
CN109191451B (en) Abnormality detection method, apparatus, device, and medium
EP4161391A1 (en) Systems and methods for automated analysis of medical images
CN114219807B (en) Mammary gland ultrasonic examination image grading method, device, equipment and storage medium
US20230133218A1 (en) Image segmentation method, device and medium
CN115984622B (en) Multi-mode and multi-example learning classification method, prediction method and related device
CN110827236A (en) Neural network-based brain tissue layering method and device, and computer equipment
CN113989293A (en) Image segmentation method and training method, device and equipment of related model
WO2021139351A1 (en) Image segmentation method, apparatus, medium, and electronic device
US11574140B2 (en) Systems and methods to process electronic images to determine salient information in digital pathology
CN113705595A (en) Method, device and storage medium for predicting degree of abnormal cell metastasis
CN113724185A (en) Model processing method and device for image classification and storage medium
US20230334698A1 (en) Methods and systems for positioning in an medical procedure
CN111415333A (en) Training method and device for breast X-ray image antisymmetric generation analysis model
CN113888566B (en) Target contour curve determination method and device, electronic equipment and storage medium
CN113537148B (en) Human body action recognition method and device, readable storage medium and electronic equipment
CN115631152A (en) Ultrasonic image interception method and device, electronic equipment and storage medium
Fuhrman et al. Detection and classification of coronary artery calcifications in low dose thoracic CT using deep learning
CN113822846A (en) Method, apparatus, device and medium for determining region of interest in medical image
CN112131400A (en) Construction method of medical knowledge map for assisting outpatient assistant
Kovalev et al. Automatic detection of pathological changes in chest X-ray screening images using deep learning methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination