CN112241961A: Chest X-ray film auxiliary diagnosis method and system based on deep convolutional neural network (Google Patents)


Info

Publication number: CN112241961A
Application number: CN202011000526.1A
Authority: CN (China)
Prior art keywords: chest; image; neural network; convolutional neural; deep convolutional
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 陈浩 (Chen Hao), 肖永杰 (Xiao Yongjie), 胡福岗 (Hu Fugang), 王春永 (Wang Chunyong)
Current and original assignee: Shenzhen Imsight Medical Technology Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Priority and filing date: 2020-09-22
Publication date: 2021-01-19


Classifications

    All classifications fall under G (Physics) > G06 (Computing; calculating or counting):
    • G06T 7/0012: Biomedical image inspection (G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting (G06F 18/00 Pattern recognition; G06F 18/21 Design or setup of recognition systems or techniques)
    • G06N 3/045: Combinations of networks (G06N 3/00 Computing arrangements based on biological models; G06N 3/02 Neural networks; G06N 3/04 Architecture)
    • G06N 3/084: Backpropagation, e.g. using gradient descent (G06N 3/08 Learning methods)
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction (G06T 5/00 Image enhancement or restoration)
    • G06T 7/11: Region-based segmentation (G06T 7/10 Segmentation; edge detection)
    • G06T 7/13: Edge detection (G06T 7/10 Segmentation; edge detection)
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition (G06V 10/00 Arrangements for image or video recognition or understanding; G06V 10/20 Image preprocessing)
    • G06T 2207/10116: X-ray image (G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/10 Image acquisition modality)
    • G06T 2207/30204: Marker (G06T 2207/30 Subject of image; context of image processing)

Abstract

The invention relates to the technical field of medical treatment, and in particular to a chest X-ray film auxiliary diagnosis method and system based on a deep convolutional neural network. The method comprises the following steps: preprocessing a chest X-ray film to obtain an initial X-ray image meeting the format requirements; screening the initial X-ray image to detect whether it is a frontal chest image; inputting the frontal chest image into a binary classification model of a deep convolutional neural network for negative/positive classification; inputting images classified as positive into a detection model of a deep convolutional neural network to detect the disease type and label the contours of lesion areas in the image; and displaying the disease type and lesion areas corresponding to the image. The chest X-ray film auxiliary diagnosis method based on the deep convolutional neural network provided by the embodiments of the invention can screen X-ray chest films as negative or positive, localize lesion areas, and label the disease type or sign of each lesion area, thereby providing doctors with more interpretable reference opinions.

Description

Chest X-ray film auxiliary diagnosis method and system based on deep convolutional neural network
Technical Field
The invention relates to the technical field of medical treatment, in particular to a chest X-ray film auxiliary diagnosis method and system based on a deep convolutional neural network.
Background
Chest X-rays are a common tool for examining and diagnosing chest diseases. In large Grade-A tertiary hospitals, the number of X-ray films generated every day is very large, so on the one hand a doctor gradually accumulates fatigue during long reading sessions, which may lead to diagnostic errors; on the other hand, in small town hospitals, although the daily radiography volume is small, the reading doctors may lack experience, and diagnostic errors may likewise occur. There is therefore a need for an auxiliary diagnostic tool that helps physicians reduce misdiagnosis and provides diagnostic information.
With the development of deep learning, more and more deep convolutional neural network algorithms are being applied to medical images, and AI intelligent auxiliary diagnosis systems have emerged accordingly.
Most existing AI intelligent auxiliary diagnosis methods use internationally published data sets (such as ChestX-ray14) or report data collected from hospitals. They obtain film-level disease labels from the reports through natural language processing, train a multi-label classification network model or several single-disease classification models to predict on chest X-ray films, and convert the disease feature information learned by the deep convolutional neural network model into a heat map using class activation mapping (or gradient-weighted class activation mapping), so that the approximate lesion area of a disease can be seen on the heat map, thereby further assisting doctors' diagnosis.
This approach has several problems. Firstly, film quality in the international public data sets is uneven, and poor-quality films exist, such as overexposed, mispositioned, or badly imaged films. In addition, for internationally published data sets and data collected by hospitals, the labels are mostly extracted from diagnosis reports, and the extraction method cannot guarantee that the labels are 100% correct; a deep convolutional neural network trained with such data can deviate significantly from actual physician diagnoses.
Secondly, the prediction result of the above methods is either a whole-film-level result; or it can only mark a suspicious lesion area without the disease type or sign corresponding to that area; or it can only prompt the suspected disease without giving the corresponding lesion area. That is, the prior art is essentially based on whole-film disease classification and does not specifically localize the position of the disease. Even when class activation mapping is adopted to obtain a weakly supervised semantic segmentation capability, the resulting lesion area information is often not accurate enough, and the false-positive rate is very high.
Since the clinician needs to write the specific location of the disease accurately in the diagnosis report, the prior-art methods fit poorly with clinical diagnosis and can hardly provide effective help in assisting the clinician's diagnosis.
Disclosure of Invention
In view of the above technical problems, embodiments of the present invention provide a chest X-ray film aided diagnosis method and system based on a deep convolutional neural network, so as to solve the technical problem that AI intelligent auxiliary diagnosis methods based on conventional algorithms cannot provide the disease type corresponding to a lesion area and lack interpretability.
In a first aspect, an embodiment of the present invention provides a chest X-ray film auxiliary diagnosis method based on a deep convolutional neural network, comprising the following steps: preprocessing a chest X-ray film to obtain an initial X-ray image meeting the format requirements; screening the initial X-ray image to detect whether it is a frontal chest image; inputting the frontal chest image into a binary classification model of a deep convolutional neural network for negative/positive classification; inputting the frontal chest image with a positive result into a detection model of a deep convolutional neural network to detect its disease type and label the contours of lesion areas in the image; and displaying the disease type and lesion areas corresponding to the frontal chest image.
Optionally, preprocessing the chest X-ray film to obtain an initial X-ray image meeting the format requirements includes: mapping all pixel values of the chest X-ray film onto a normal distribution to obtain a window width and window level; and removing noise pixels outside the window-width interval, then mapping the remaining pixels to the range 0-255 to obtain the initial X-ray image.
Optionally, screening the initial X-ray image specifically comprises inputting it into a frontal chest screening model, wherein the frontal chest screening model comprises a ResNet-34 feature extraction network and two fully connected neural networks. The ResNet-34 feature extraction network performs chest feature extraction on the initial X-ray image; the first fully connected neural network judges whether the chest features correspond to a frontal chest view; the second fully connected neural network determines the photometric interpretation of the chest features.
Optionally, before inputting the frontal chest image into the binary classification model of the deep convolutional neural network, the method further comprises: if the photometric interpretation of the chest features indicates a grayscale in which rising pixel values run from bright to dark (MONOCHROME1), processing the pixels of the initial X-ray image so that the photometric interpretation becomes a grayscale in which rising pixel values run from dark to bright (MONOCHROME2).
Optionally, the binary classification model of the deep convolutional neural network performs chest feature extraction on the initial X-ray image and classifies the extracted chest features as negative or positive.
Optionally, the detection model of the deep convolutional neural network comprises: a feature extraction network, a feature fusion network, a region generation network, a qualifier, a locator, and a segmenter. The output of the feature extraction network is the input of the feature fusion network; the output of the feature fusion network is the input of the region generation network; the output of the region generation network is the input of the qualifier, which detects the disease type of the frontal chest image; the output of the region generation network is also the input of the locator, which localizes the lesion area; and the output of the locator is the input of the segmenter, which labels the contour of the lesion area.
Optionally, the method further comprises: when the classification result of the binary classification model of the deep convolutional neural network is positive but all disease confidences output by the detection model of the deep convolutional neural network are below a set threshold, the detection model forcibly outputs the lesion contour and the disease type corresponding to the maximum confidence.
Optionally, the method further comprises: visually displaying the lesion area and its corresponding disease type in a chest disease report.
Optionally, the training sets of the binary classification model and the detection model of the deep convolutional neural network both come from a picture archiving and communication system (PACS).
In a second aspect, an embodiment of the present invention provides a chest X-ray film auxiliary diagnosis system based on a deep convolutional neural network, comprising: a preprocessing module for preprocessing a chest X-ray film to obtain an initial X-ray image meeting the format requirements; a chest disease detection module comprising a frontal chest screening unit, a negative/positive classification unit, and a lesion area localization and qualification unit, wherein the frontal chest screening unit screens the initial X-ray image and detects whether it is a frontal chest image, the negative/positive classification unit inputs the frontal chest image into a binary classification model of a deep convolutional neural network for negative/positive classification, and the lesion area localization and qualification unit inputs the frontal chest image with a positive result into a detection model of a deep convolutional neural network to detect its disease type and label the contours of lesion areas in the image; and a display module for displaying the disease type and the visualized lesion areas corresponding to the frontal chest image.
The chest X-ray film auxiliary diagnosis method and system based on the deep convolutional neural network provided by the embodiments of the invention can screen X-ray chest films as negative or positive, localize lesion areas, and label the disease type or sign of each lesion area, thereby providing doctors with more interpretable reference opinions, improving doctors' report-writing efficiency, and reducing their workload.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals designate like elements; the figures are not to scale unless otherwise specified.
FIG. 1 is a schematic flowchart of a chest X-ray film aided diagnosis method based on a deep convolutional neural network according to an embodiment of the present invention;
FIG. 2 is a structural framework diagram of the frontal chest screening model provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of the deep convolutional neural network-based frontal chest screening process provided by an embodiment of the present invention;
FIG. 4 is a structural framework diagram of the binary classification model of the deep convolutional neural network provided by an embodiment of the present invention;
FIG. 5 is a structural framework diagram of the detection model of the deep convolutional neural network provided by an embodiment of the present invention;
FIG. 6 is a structural block diagram of a chest X-ray film aided diagnosis system based on a deep convolutional neural network according to an embodiment of the present invention.
Detailed Description
In order to facilitate an understanding of the invention, the invention is described in more detail below with reference to the accompanying drawings and specific examples. It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may be present. As used in this specification, the terms "upper," "lower," "inner," "outer," "bottom," and the like are used in the orientation or positional relationship indicated in the drawings for convenience in describing the invention and simplicity in description, and do not indicate or imply that the referenced device or element must have a particular orientation, be constructed and operated in a particular orientation, and are not to be considered limiting of the invention. Furthermore, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The invention provides an intelligent auxiliary diagnosis method and system that can quickly screen chest X-ray films as negative or positive and accurately localize lesion areas in the chest. The main workflow is as follows: first, a binary classification model of a deep convolutional neural network quickly screens out positive films, achieving fast exclusion of negatives; then, a detection model of another deep convolutional neural network accurately localizes the lesion areas in each positive film and predicts which disease each lesion area belongs to. Interpretable reference opinions are thus provided to doctors more effectively, which can greatly improve doctors' working efficiency and reduce their workload.
The following first details the specific implementation process of the method, and then the training process of the binary classification model and the detection model of the deep convolutional neural network.
Referring to FIG. 1, FIG. 1 shows a chest X-ray film aided diagnosis method based on a deep convolutional neural network according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
101. Preprocessing the chest X-ray film to obtain an initial X-ray image meeting the format requirements.
This step converts the chest X-ray film into the format required by the subsequent models.
Specifically, the method first maps all pixel values of the X-ray film onto a normal distribution, taking the width of the pixel interval (μ − 3σ, μ + 3σ) as the window width (where σ is the standard deviation and μ is the mean) and the center of the interval as the window level. It then removes the roughly 0.3% of noise pixels falling outside the window-width interval and maps the remaining pixels to the range 0 to 255. Finally, the X-ray film is scaled to the size required by each model's input.
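For illustration, the windowing and rescaling described above can be sketched in a few lines of Python. This is a reconstruction from the description, not code from the patent; the function name and NumPy usage are our own choices:

```python
import numpy as np

def preprocess_xray(pixels: np.ndarray) -> np.ndarray:
    """Window a raw chest X-ray and map it to the 0-255 range, per the description above."""
    mu, sigma = float(pixels.mean()), float(pixels.std())
    lo, hi = mu - 3.0 * sigma, mu + 3.0 * sigma   # window width = 6*sigma, window level = mu

    # Under an approximately normal pixel distribution, roughly 0.3% of pixels fall
    # outside (mu - 3s, mu + 3s); clipping to the window removes these noise pixels.
    clipped = np.clip(pixels, lo, hi)

    # Linearly map the windowed values onto the 0-255 grayscale range.
    img = (clipped - lo) / max(hi - lo, 1e-6) * 255.0
    return img.astype(np.uint8)
```

The result would then be resized to each model's required input size (256 × 256 for the two classification models below, and the larger size used by the detection model).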
In the present invention, the input size required by the frontal chest screening model described below is 256 × 256, and the input size required by the binary classification model of the deep convolutional neural network used for negative/positive chest screening is likewise 256 × 256.
102. Screening the initial X-ray image to detect whether it is a frontal chest image.
Screening the initial X-ray image specifically means inputting it into the frontal chest screening model. In the embodiment of the present invention, the main function of the frontal chest screening model is to determine whether the input initial X-ray image is a frontal chest film, and whether its photometric interpretation is a grayscale in which rising pixel values run from bright to dark (MONOCHROME1) or from dark to bright (MONOCHROME2). The structural framework of the model is shown in FIG. 2.
Since the method of the present invention is only applicable to frontal chest films, films must first be screened for the frontal view. In practice, operation errors by doctors or radiographers can cause the photometric interpretation field to be entered incorrectly, making the image appear completely inverted. A doctor can manually switch MONOCHROME1 to MONOCHROME2 while reading to compensate for such a mis-entry, but the method of the present invention cannot trust the photometric interpretation attribute recorded in the data header. The frontal chest screening model therefore adds a photometric interpretation output to automatically judge whether this field was entered incorrectly.
Specifically, the frontal chest screening model comprises a ResNet-34 feature extraction network and two fully connected neural networks. The ResNet-34 feature extraction network performs chest feature extraction on the initial X-ray image; the first fully connected neural network judges whether the chest features correspond to a frontal view; the second fully connected neural network determines the photometric interpretation of the chest features.
Step 101 yields picture data at the 256 × 256 size required for frontal chest screening. The 256 × 256 picture data is processed by the frontal chest screening model as follows:
The 256 × 256 picture data is first duplicated twice and stacked into a 3 × 256 × 256 array, which is then reshaped into a 1 × 3 × 256 × 256 batch.
This 1 × 3 × 256 × 256 picture data is fed into the ResNet-34 feature extraction network, yielding a 1 × 1024 × 8 × 8 feature map.
The 1 × 1024 × 8 × 8 feature map is processed by an average pooling layer into a 1 × 1024 × 1 × 1 vector, which is reshaped to 1 × 1024 and fed separately into the first and second fully connected neural networks, each with a single output channel; the two resulting values are finally passed through a sigmoid activation layer.
This yields two probability values in the range 0 to 1. The probability value from the first fully connected neural network represents the probability that the input picture is a frontal chest film. By default, a probability of 0.5 or greater indicates a frontal chest film; otherwise the film is not a frontal chest film. The screening result is shown in FIG. 3.
The probability value from the second fully connected neural network represents the probability that the photometric interpretation of the input picture is MONOCHROME2. By default, a value of 0.5 or greater indicates MONOCHROME2; otherwise it is MONOCHROME1.
When the photometric interpretation is MONOCHROME1, the input picture data is converted by replacing each pixel value v with 255 − v, thereby adjusting MONOCHROME1 to MONOCHROME2.
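The two-headed screening model and the MONOCHROME1 correction might be sketched as follows in PyTorch. Note one assumption: a stock torchvision ResNet-34 backbone yields a 512-channel feature map for a 256 × 256 input, whereas the description above quotes 1 × 1024 × 8 × 8; the sketch follows the stock backbone, and all class and variable names are illustrative:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class FrontalScreeningNet(nn.Module):
    """ResNet-34 backbone with two sigmoid heads: frontal view and MONOCHROME2."""
    def __init__(self):
        super().__init__()
        base = resnet34(weights=None)
        self.backbone = nn.Sequential(*list(base.children())[:-2])  # N x 512 x 8 x 8
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.frontal_head = nn.Linear(512, 1)  # P(input is a frontal chest film)
        self.mono2_head = nn.Linear(512, 1)    # P(photometric interpretation is MONOCHROME2)

    def forward(self, x):
        f = self.pool(self.backbone(x)).flatten(1)
        return torch.sigmoid(self.frontal_head(f)), torch.sigmoid(self.mono2_head(f))

# A single 256 x 256 grayscale image is duplicated to 3 channels, as described above.
img = torch.randint(0, 256, (256, 256)).float()
x = img.expand(3, -1, -1).unsqueeze(0)          # 1 x 3 x 256 x 256
p_frontal, p_mono2 = FrontalScreeningNet()(x)
if p_mono2.item() < 0.5:                        # MONOCHROME1 detected
    x = 255.0 - x                               # invert so rising values run dark to bright
```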
It should be noted that the frontal chest screening model, the binary classification model of the deep convolutional neural network used below for negative/positive chest screening, and the detection model of the deep convolutional neural network used below for lesion area localization and qualification are all trained on data that matches the real clinical environment.
The training data is produced by the following steps:
1. Collect chest X-ray images and corresponding diagnosis reports from hospitals;
2. Desensitize and quality-control the collected chest X-ray images;
3. Pre-analyze the images based on the diagnosis reports;
4. Have doctors examine and label the chest X-ray images;
5. Have algorithm engineers review the labels.
To collect chest X-ray images and corresponding diagnosis reports from hospitals, the invention acquires chest X-ray image data from the hospitals' PACS (Picture Archiving and Communication System) or from DR and CR equipment via the DICOM protocol, and collects the films and their diagnosis reports according to strict retrieval conditions (see Table 1).
TABLE 1
[Table 1 appears as an image in the original publication; its contents (the retrieval conditions) are not reproduced here.]
During desensitization and quality control of the collected chest X-ray images, sensitive information such as patient names, institution names, and doctor names is removed. Frontal chest films are then preliminarily screened out of the collected chest X-ray images, removing a large number of lateral films and non-chest X-ray films. Finally, through manual quality-control review, any remaining non-frontal films and other poor-quality films, such as overexposed films, badly imaged films, bedside films, infant films, and other image data unsuitable for training, are further removed.
When pre-analyzing the X-ray images based on the diagnosis reports, the volume of data collected from hospitals is so large, and most of it negative (even when positive films are deliberately selected), that not all of it can be handed to doctors for labeling. The invention therefore extracts keywords from the diagnosis reports when producing training data: when a disease keyword and a "treated/resolved" keyword occur together, the patient is considered to have no disease. Positive data can thus be selected rapidly, and a certain amount is then chosen per disease category for labeling according to specific needs, which greatly reduces the labeling cost.
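As a sketch of this report pre-screening heuristic (the actual keyword lists are not given in the patent; the ones below are invented placeholders):

```python
# Hypothetical keyword lists; the patent does not enumerate the real ones.
DISEASE_KEYWORDS = ["气胸", "胸腔积液", "肺炎", "结节"]     # pneumothorax, pleural effusion, pneumonia, nodule
RESOLVED_KEYWORDS = ["治愈", "已吸收", "术后", "未见异常"]   # cured, absorbed, post-operative, no abnormality

def report_suggests_positive(report_text: str) -> bool:
    """Select likely-positive films from diagnosis reports for labeling.

    A film is treated as negative when a disease keyword co-occurs with a
    'treated/resolved' keyword, per the rule described above.
    """
    has_disease = any(k in report_text for k in DISEASE_KEYWORDS)
    has_resolved = any(k in report_text for k in RESOLVED_KEYWORDS)
    return has_disease and not has_resolved
```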
During labeling and review, doctors label the X-ray films by outlining each lesion area as a polygon. Senior doctors then review the labels produced by junior doctors, checking mainly for wrong or missing labels; missing labels are supplemented and wrong labels are corrected.
The review by algorithm engineers is performed because they have professional experience in training deep convolutional neural networks, and only rigorous, accurate labels can train an accurate network model. The algorithm engineers therefore re-examine the data labeled by the doctors, focusing on the rigor of the labeled polygons; for example, some polygons for pleural effusion are too large and cover background areas outside the lungs, and need to be corrected.
Training data produced this way is close to the real clinical environment, and its quality is high.
103. Inputting the frontal chest image into the binary classification model of a deep convolutional neural network for negative/positive classification.
In the embodiment of the invention, the main function of the binary classification model of the deep convolutional neural network is to rapidly determine whether a chest X-ray film is negative or positive. The structural framework of the model is shown in FIG. 4.
Step 101 yields picture data at the required 256 × 256 size. The 256 × 256 picture data is processed by the binary classification model of the deep convolutional neural network as follows:
The 256 × 256 picture data is first duplicated twice and stacked into a 3 × 256 × 256 array, which is then reshaped into a 1 × 3 × 256 × 256 batch.
This 1 × 3 × 256 × 256 picture data is fed into a DenseNet-121 feature extraction network, yielding a 1 × 1024 × 8 × 8 feature map.
The 1 × 1024 × 8 × 8 feature map is processed by an average pooling layer into a 1 × 1024 × 1 × 1 vector, which is reshaped to 1 × 1024 and fed into a fully connected neural network with a single output channel; the resulting value is finally passed through a sigmoid activation function.
A probability value in the range 0 to 1 is finally obtained, indicating the probability that the input picture is a positive film. By default, a probability of 0.5 or greater indicates a positive film; otherwise the film is negative.
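A minimal PyTorch sketch of this negative/positive classifier, assuming the stock torchvision DenseNet-121 (whose feature extractor does produce a 1024-channel, 8 × 8 map for a 256 × 256 input, matching the description above); names are illustrative:

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class NegPosNet(nn.Module):
    """DenseNet-121 features + average pooling + single-logit sigmoid head."""
    def __init__(self):
        super().__init__()
        self.features = densenet121(weights=None).features  # 1 x 1024 x 8 x 8 for 256 x 256 input
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(1024, 1)

    def forward(self, x):
        f = self.pool(self.features(x)).flatten(1)          # 1 x 1024
        return torch.sigmoid(self.fc(f))                    # P(positive film)

x = torch.rand(1, 3, 256, 256)          # grayscale image duplicated to 3 channels
is_positive = NegPosNet()(x).item() >= 0.5
```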
104. Inputting the frontal chest image with a positive result into the detection model of a deep convolutional neural network to detect its disease type and label the contours of lesion areas in the image.
In the embodiment of the invention, the main functions of the detection model of the deep convolutional neural network are to localize the specific lesion areas in a positive film and to qualify the specific disease type of each localized area. The disease types of chest lesion areas comprise the following 17 categories: atelectasis, enlarged cardiac silhouette, pleural effusion, infiltration, mass, nodule, pneumonia, pneumothorax, lung consolidation, pulmonary edema, emphysema, pulmonary fibrosis, pleural thickening, diaphragmatic hernia, tuberculosis, rib fracture, and aortic calcification.
The structural framework of the detection model of the deep convolutional neural network is shown in FIG. 5. The model mainly comprises: a feature extraction network, a feature fusion network, a region generation network, a qualifier, a locator, and a segmenter.
The feature extraction network adopts an EfficientNet-B2 structure, and the feature fusion network adopts a bidirectional feature pyramid fusion structure, so that more representative features can be extracted effectively and shallow-to-deep semantic features can be fused better, providing a foundation for the subsequent localization and qualification.
The region generation network is composed of one convolutional layer followed by two independent convolutional layers. The first convolutional layer preliminarily analyzes the feature information of the input picture data and buffers the direct impact of back-propagated gradients on the backbone during training. Of the two independent convolutional layers, one predicts candidate boxes for lesion areas and the other predicts the probability that each candidate box is a lesion area.
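The region generation network as described (one shared convolution feeding two independent convolutions) might be sketched as follows; the channel count and anchor count are assumptions, since the patent does not state them:

```python
import torch
import torch.nn as nn

class RegionGenerationNetwork(nn.Module):
    """One shared conv feeding two independent convs, per the description above."""
    def __init__(self, in_ch: int = 256, num_anchors: int = 9):  # both values assumed
        super().__init__()
        # Shared conv: preliminarily analyzes the fused features and buffers the
        # back-propagated gradients from the two branches before they hit the backbone.
        self.stem = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1)
        self.box_branch = nn.Conv2d(in_ch, num_anchors * 4, kernel_size=1)  # candidate boxes
        self.score_branch = nn.Conv2d(in_ch, num_anchors, kernel_size=1)    # lesion probability

    def forward(self, feats):
        h = torch.relu(self.stem(feats))
        return self.box_branch(h), torch.sigmoid(self.score_branch(h))
```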
The qualifier performs qualitative analysis on the candidate boxes output by the region generation network, judging whether each candidate box contains a lesion and, if so, which of the 17 diseases or signs it belongs to. The locator further refines the candidate boxes output by the region generation network, producing more accurate bounding boxes. The segmenter segments the lesion area refined by the locator, finding the precise contour of the lesion area.
Step 101 yields picture data at the detection model's required size of 640 × 640. The picture data is processed by the detection model of the deep convolutional neural network as follows:
A picture with a 1 × 3 × 768 × 768 structure is taken as input data; after it passes through the feature extraction network and the feature fusion network, the region generation network generates 1000 lesion-area candidate boxes. The 1000 candidate boxes and their corresponding feature maps are then combined and scaled by bilinear interpolation into a 1000 × 256 × 7 × 7 feature map, which is fed into the qualifier and the locator. For each candidate box, confidences over 18 categories (17 disease/sign categories plus 1 no-disease category) and the corresponding per-category refined boxes (18 in total) are output, and the category with the highest confidence and its refined box are selected as the output. Finally, the segmenter segments the contour of the lesion area (the contour coordinates are ultimately scaled back to the original 768 × 768 image size).
After the above detection pipeline finishes, the output information of the frontal chest screening model, the binary classification model of the deep convolutional neural network, and the detection model of the deep convolutional neural network must be organized and output. In some embodiments, if the classification result of the binary classification model is positive but all disease confidences output by the detection model are below the set threshold, the detection model is forced, in the post-processing stage, to output the disease type corresponding to the maximum confidence together with the labeled lesion contour.
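The selection and forced-output rule can be summarized in a short sketch; the array shapes below (per-candidate 18-way confidences and per-class refined boxes) are assumed for illustration:

```python
import numpy as np

def select_outputs(cls_conf: np.ndarray, boxes: np.ndarray,
                   binary_positive: bool, threshold: float = 0.5):
    """Pick final detections, forcing one output when the binary model says positive.

    cls_conf: (N, 18) per-candidate confidences: 17 diseases/signs + 1 'no disease'.
    boxes:    (N, 18, 4) per-class refined boxes for each of the N candidates.
    """
    disease_conf = cls_conf[:, :17]                 # drop the 'no disease' column
    keep = np.argwhere(disease_conf >= threshold)   # (candidate, class) pairs over threshold

    if binary_positive and keep.size == 0:
        # Binary model says positive but nothing clears the threshold: force-output
        # the single highest-confidence disease and its refined box.
        i, c = np.unravel_index(disease_conf.argmax(), disease_conf.shape)
        keep = np.array([[i, c]])

    return [(int(c), float(disease_conf[i, c]), boxes[i, c]) for i, c in keep]
```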
105. Displaying the disease type and the visualized lesion areas corresponding to the frontal chest image.
The contours of the lesion areas are delineated in the frontal chest image and labeled with the corresponding disease names or signs; a heat map can also be produced from the suspicion probability of each lesion area for the doctor's diagnostic reference.
In some embodiments, the lesion areas and their corresponding disease types may also be visualized in a chest disease report.
The chest X-ray film auxiliary diagnosis method based on the deep convolutional neural network provided by the embodiments of the invention can screen X-ray chest films as negative or positive, localize lesion areas, and label the disease type or sign of each lesion area, thereby providing doctors with more interpretable reference opinions, improving doctors' report-writing efficiency, and reducing their workload.
An embodiment of the present invention further provides a chest X-ray film auxiliary diagnosis system 60 based on a deep convolutional neural network, as shown in FIG. 6, comprising: a preprocessing module 61, a chest disease detection module 62, and a display module 63.
The preprocessing module 61 preprocesses the chest X-ray film to obtain an initial X-ray image meeting the format requirements.
The chest disease detection module 62 includes: a frontal chest screening unit 621, a negative/positive classification unit 622, and a lesion area localization and qualification unit 623.
The frontal chest screening unit 621 screens the initial X-ray image and detects whether it is a frontal chest image; the negative/positive classification unit 622 inputs the frontal chest image into the binary classification model of the deep convolutional neural network for negative/positive classification; and the lesion area localization and qualification unit 623 inputs the frontal chest image with a positive result into the detection model of the deep convolutional neural network to detect its disease type and label the contours of lesion areas in the image.
The display module 63 displays the disease type and the visualized lesion areas corresponding to the frontal chest image.
In some embodiments, the chest X-ray aided diagnosis system 60 based on the deep convolutional neural network further comprises a post-processing module 64. If the classification result of the binary classification model of the deep convolutional neural network is positive but all disease confidences output by the detection model are below the set threshold, the post-processing module 64 forces the detection model to output the disease type corresponding to the maximum confidence together with the labeled lesion contour.
The specific functions of the preprocessing module 61, the frontal chest screening unit 621, the negative/positive classification unit 622, the lesion area localization and qualification unit 623, and the post-processing module 64 are the same as in the chest X-ray film aided diagnosis method based on the deep convolutional neural network of the above embodiments, and are not repeated here.
The chest X-ray film auxiliary diagnosis system based on the deep convolutional neural network provided by the embodiments of the invention can screen X-ray chest films as negative or positive, localize lesion areas, and label the disease type or sign of each lesion area, thereby providing doctors with more interpretable reference opinions, improving doctors' report-writing efficiency, and reducing their workload.
It will be further appreciated by those of skill in the art that the various steps of the exemplary methods described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or combinations of both; to illustrate this interchangeability of hardware and software clearly, the various exemplary components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation.
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. The computer software may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.
Finally, it should be noted that the above examples are only intended to illustrate the technical solution of the present invention, not to limit it. Within the idea of the invention, technical features of the above embodiments or of different embodiments may be combined, steps may be implemented in any order, and many other variations of the different aspects of the invention exist that are not described in detail for the sake of brevity. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, without making the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A chest X-ray film auxiliary diagnosis method based on a deep convolutional neural network, characterized by comprising the following steps:
preprocessing a chest X-ray film to obtain an initial X-ray image meeting the format requirements;
screening the initial X-ray image to detect whether it is a frontal chest image;
inputting the frontal chest image into a binary classification model of a deep convolutional neural network for negative/positive classification;
inputting the frontal chest image with a positive result into a detection model of a deep convolutional neural network to detect its disease type and label the contours of lesion areas in the image; and
displaying the disease type and lesion areas corresponding to the frontal chest image.
2. The method of claim 1, wherein preprocessing the chest X-ray film to obtain an initial X-ray image meeting the format requirements comprises:
mapping all pixel values of the chest X-ray film onto a normal distribution to obtain a window width and window level; and
removing noise pixels outside the window-width interval, and mapping the remaining pixels to the range 0-255 to obtain the initial X-ray image.
3. The method of claim 2, wherein screening the initial X-ray image specifically comprises: inputting the initial X-ray image into a frontal chest screening model for screening, the frontal chest screening model comprising a ResNet-34 feature extraction network and two fully connected neural networks, wherein
the ResNet-34 feature extraction network performs chest feature extraction on the initial X-ray image;
the first fully connected neural network judges whether the chest features correspond to a frontal chest view; and
the second fully connected neural network determines the photometric interpretation of the chest features.
4. The method of claim 3, wherein, before inputting the frontal chest image into the binary classification model of the deep convolutional neural network, the method further comprises:
if the photometric interpretation of the chest features indicates a grayscale in which rising pixel values run from bright to dark (MONOCHROME1), processing the pixels of the initial X-ray image so that the photometric interpretation becomes a grayscale in which rising pixel values run from dark to bright (MONOCHROME2).
5. The method of claim 4, wherein the binary classification model of the deep convolutional neural network performs chest feature extraction on the initial X-ray image and classifies the extracted chest features as negative or positive.
6. The method of claim 5, wherein the detection model of the deep convolutional neural network comprises: a feature extraction network, a feature fusion network, a region generation network, a qualifier, a locator, and a segmenter, wherein
the output of the feature extraction network is the input of the feature fusion network;
the output of the feature fusion network is the input of the region generation network;
the output of the region generation network is the input of the qualifier, which detects the disease type of the frontal chest image;
the output of the region generation network is also the input of the locator, which localizes the lesion area; and
the output of the locator is the input of the segmenter, which labels the contour of the lesion area.
7. The method of claim 6, further comprising:
when the classification result of the binary classification model of the deep convolutional neural network is positive but all disease confidences output by the detection model of the deep convolutional neural network are below a set threshold,
forcing the detection model of the deep convolutional neural network to output the lesion contour and the disease type corresponding to the maximum confidence.
8. The method of claim 7, further comprising:
visually displaying the lesion area and its corresponding disease type in a chest disease report.
9. The method of claim 1, wherein the training sets of the binary classification model and the detection model of the deep convolutional neural network both come from a picture archiving and communication system (PACS).
10. A chest X-ray film auxiliary diagnosis system based on a deep convolutional neural network, characterized by comprising:
a preprocessing module for preprocessing a chest X-ray film to obtain an initial X-ray image meeting the format requirements;
a chest disease detection module comprising: a frontal chest screening unit, a negative/positive classification unit, and a lesion area localization and qualification unit, wherein
the frontal chest screening unit screens the initial X-ray image and detects whether it is a frontal chest image,
the negative/positive classification unit inputs the frontal chest image into a binary classification model of a deep convolutional neural network for negative/positive classification, and
the lesion area localization and qualification unit inputs the frontal chest image with a positive result into a detection model of a deep convolutional neural network to detect its disease type and label the contours of lesion areas in the image; and
a display module for displaying the disease type and the visualized lesion areas corresponding to the frontal chest image.
CN202011000526.1A (priority and filing date 2020-09-22): Chest X-ray film auxiliary diagnosis method and system based on deep convolutional neural network, published as CN112241961A (pending)

Priority Applications (1)

CN202011000526.1A (priority and filing date 2020-09-22): Chest X-ray film auxiliary diagnosis method and system based on deep convolutional neural network

Publications (1)

CN112241961A, published 2021-01-19

Family

ID=74171632

Country Status (1)

CN: CN112241961A (en)

Cited By (5)

* Cited by examiner, † Cited by third party

CN113076993A * (priority 2021-03-31, published 2021-07-06), 零氪智慧医疗科技(天津)有限公司: Information processing method and model training method for chest X-ray film recognition
CN113053519A * (priority 2021-04-02, published 2021-06-29), 北京掌引医疗科技有限公司: Training method, device and equipment of tuberculosis detection model based on genetic algorithm
CN113053520A * (priority 2021-04-02, published 2021-06-29), 北京掌引医疗科技有限公司: Training method and device for tuberculosis detection model and auxiliary diagnosis equipment
CN116452579A * (priority 2023-06-01, published 2023-07-18), 中国医学科学院阜外医院: Chest radiography image-based pulmonary artery high pressure intelligent assessment method and system
CN116452579B (priority 2023-06-01, published 2023-12-08), 中国医学科学院阜外医院: Chest radiography image-based pulmonary artery high pressure intelligent assessment method and system

Similar Documents

CN112699868A (en) Image identification method and device based on deep convolutional neural network
US10755413B1 (en) Method and system for medical imaging evaluation
US11403483B2 (en) Dynamic self-learning medical image method and system
US10691980B1 (en) Multi-task learning for chest X-ray abnormality classification
CN112241961A (en) Chest X-ray film auxiliary diagnosis method and system based on deep convolutional neural network
US7529394B2 (en) CAD (computer-aided decision) support for medical imaging using machine learning to adapt CAD process with knowledge collected during routine use of CAD system
CN113052795B (en) X-ray chest radiography image quality determination method and device
CN112292691A (en) Methods and systems for improving cancer detection using deep learning
CN112101451A (en) Breast cancer histopathology type classification method based on generation of confrontation network screening image blocks
US20230230241A1 (en) System and method for detecting lung abnormalities
US20230005138A1 (en) Lumbar spine annatomical annotation based on magnetic resonance images using artificial intelligence
CN115526834A (en) Immunofluorescence image detection method and device, equipment and storage medium
CN112508884A (en) Comprehensive detection device and method for cancerous region
CN116228660A (en) Method and device for detecting abnormal parts of chest film
CN116703901A (en) Lung medical CT image segmentation and classification device and equipment
Fonseca et al. Automatic orientation identification of pediatric chest x-rays
CN115564750A (en) Intraoperative frozen slice image identification method, intraoperative frozen slice image identification device, intraoperative frozen slice image identification equipment and intraoperative frozen slice image storage medium
CN115409812A (en) CT image automatic classification method based on fusion time attention mechanism
CN112967246A (en) X-ray image auxiliary device and method for clinical decision support system
Liu et al. A locating model for pulmonary tuberculosis diagnosis in radiographs
Elhanashi et al. Classification and Localization of Multi-type Abnormalities on Chest X-rays Images
Zhang et al. Ultrasonic Image's Annotation Removal: A Self-supervised Noise2Noise Approach
CN117237351B (en) Ultrasonic image analysis method and related device
Zhao et al. Key techniques for classification of thorax diseases based on deep learning
CN116188879B (en) Image classification and image classification model training method, device, equipment and medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination