CN111598895A - Method for measuring lung function index based on diagnostic image and machine learning - Google Patents


Info

Publication number
CN111598895A
Authority
CN
China
Prior art keywords
lung
region
image
input
methods
Prior art date
Legal status
Pending
Application number
CN202010314852.3A
Other languages
Chinese (zh)
Inventor
Not disclosed (不公告发明人)
Current Assignee
Suzhou Fuyuan Medical Technology Co ltd
Original Assignee
Suzhou Fuyuan Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Fuyuan Medical Technology Co ltd
Priority to CN202010314852.3A
Publication of CN111598895A
Legal status: Pending

Classifications

    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/187 Segmentation involving region growing, region merging, or connected component labelling
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06N 20/00 Machine learning
    • G06N 3/045 Combinations of networks
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/10104 Positron emission tomography [PET]
    • G06T 2207/10108 Single photon emission computed tomography [SPECT]
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/30061 Biomedical image processing; Lung
    • G06T 2207/30204 Marker

Abstract

The invention discloses a method for measuring lung function indices based on diagnostic images and machine learning. The method comprises the following steps: acquiring a chest medical image from an imaging device; segmenting the chest medical image to obtain the lung regions; further segmenting the lung regions to obtain a set of regions of interest of the lung; classifying the state of each region of interest; reconstructing the three-dimensional lung structure from the lung regions of interest; extracting a set of features from the structure and state classification of the regions of interest; and calculating lung function indices from the extracted features.

Description

Method for measuring lung function index based on diagnostic image and machine learning
Technical Field
The present invention relates to a method for measuring lung function indices based on diagnostic imaging and machine learning, and more particularly to calculating lung function indices by processing chest medical images with a set of machine learning models applied in sequence.
Background
Lung function is critical to human health. In healthy people, the alveoli are distributed approximately uniformly throughout the lungs. Each alveolus involves three elements. Two of these are lung tissue and blood vessels, which together form the alveolar wall. The third is the volume enclosed by the wall, which holds air during breathing. During respiration, inhaled air passes through the trachea and bronchi, fills and expands the alveoli, and leaves them after exchanging gas with the blood flowing past; the refreshed blood then carries oxygen through the cardiovascular system to the rest of the body. Alveolar volume (whose maximum is the total lung capacity, TLC), a compliant alveolar wall, and unobstructed blood flow are therefore critical to lung function. However, roughly two thirds of people develop emphysema or other symptoms of lung disease that impair lung function to at least some degree.
Traditional pulmonary function test (PFT) methods, such as spirometry, measure several ventilation indices during respiration. During the examination, the subject must hold a prescribed posture and breathe in prescribed steps under the guidance of a physician, and many respiratory cycles must be measured this way over the course of the test. These requirements make the measured indices dependent on the patient's specific condition and degree of compliance, and therefore highly subjective.
Medical imaging techniques can help physicians improve diagnosis by making the assessment more objective. Medical images of the lungs let physicians inspect the patient's lung structure directly and, to some extent, predict the outcome of clinical examinations. To extract the information needed for diagnosis, however, the physician must understand the imaging modality, ignore the portions irrelevant to lung function, and identify the state of the fine lung structures (trachea, bronchi, alveoli, and pulmonary vessels) represented by the pixel values in the image; that is, the fine lung structures must be properly segmented and the pixel values accurately translated into their states.
Software tools exist to make better use of the information in medical images, for example by segmenting lung regions in chest images and generating diagnostic indices from them. However, these tools cannot identify the fine structure of the lung and its state at pixel scale, so additional intervention by the physician is required. For example, when using such tools to calculate the volume of the portion of the lung that actually contributes to breathing, the physician may first need to manually delineate the regions of interest of the trachea, bronchi, and other anatomical structures; the software may also require the physician to manually create a histogram relating alveolar state to image pixel values, which significantly increases the physician's workload.
Some image-based pulmonary function test software requires acquiring, in sequence, a set of medical images at different time-phase points across the respiratory cycle, and locates regions of potential emphysema from the degree of change at each location over time. Although this technique identifies emphysema regions better, acquiring the images with a modality such as CT increases the radiation dose absorbed by the patient, which may have side effects.
In addition, traditional lung function tests generally measure only ventilation; they cannot measure the gas-blood diffusion function. Diffusion indices such as DLCO-SB and AaDO2 are usually measured by having the patient inhale carbon monoxide or by drawing blood. Medical images should contain the information needed to measure diffusion function, but no existing technique derives it directly from medical images.
In summary, existing lung function detection techniques have advanced with the help of medical imaging and information technology, but they still cost physicians considerable time, and the indices they cover are mainly ventilation indices, not diffusion indices. The field therefore still lacks an efficient, robust, intelligent, low-dose method for accurate, comprehensive, automatic lung function detection. A method based on fine lung structure and machine learning can meet this need, and the required data can be a single-phase medical image or a set of consecutive multi-phase medical images, with no additional input required in special steps.
Disclosure of Invention
The invention provides a method for measuring lung function indices based on diagnostic images and machine learning, which comprises the following steps:
acquiring a set of single-phase diagnostic chest images, or several sets of diagnostic chest images at different time phases in sequence, with an imaging device; segmenting the images to determine the left and right lung regions; further segmenting the lung regions to obtain a set of lung regions of interest; classifying the state of each region of interest; performing three-dimensional reconstruction of the regions of interest; extracting a set of features from the structure and state classification of the regions of interest; and calculating lung function indices from the extracted features.
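The step sequence above maps naturally onto a software pipeline. The following Python skeleton is a hypothetical illustration of that data flow only; every helper function is a named placeholder for one of the models described later in this document, not an implementation from the patent.

```python
# Hypothetical skeleton of the seven-step pipeline above. Each helper is a
# named placeholder for one of the machine learning models described later
# in this document; only the data flow between steps is shown.
from typing import Dict, List
import numpy as np

def segment_lungs(vol: np.ndarray) -> np.ndarray: raise NotImplementedError
def segment_rois(vol, lungs) -> Dict[str, np.ndarray]: raise NotImplementedError
def classify_states(vol, rois) -> Dict[str, str]: raise NotImplementedError
def reconstruct_3d(rois) -> np.ndarray: raise NotImplementedError
def extract_features(vol, mesh, states) -> np.ndarray: raise NotImplementedError
def compute_indices(feats) -> Dict[str, float]: raise NotImplementedError

def measure_lung_function(phases: List[np.ndarray]) -> Dict[str, float]:
    """`phases` holds one single-phase chest scan, or several scans taken
    at different time-phase points of the breathing cycle."""
    indices: Dict[str, float] = {}
    for vol in phases:
        lungs = segment_lungs(vol)              # left and right lung regions
        rois = segment_rois(vol, lungs)         # trachea, bronchi, alveolar sacs, vessels
        states = classify_states(vol, rois)     # normal / abnormal per region of interest
        mesh = reconstruct_3d(rois)             # three-dimensional reconstruction
        feats = extract_features(vol, mesh, states)
        indices.update(compute_indices(feats))  # VT, TLC, FEV1, DLCO-SB, ...
    return indices
```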
In some embodiments, the diagnostic chest image may be obtained using one or more of the following medical imaging modalities: digital radiography (DR), computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), ultrasound, or single photon emission computed tomography (SPECT).
In some embodiments, the lung regions of interest comprise: the trachea, bronchi, alveolar sacs, and pulmonary vessels.
In some embodiments, the classification of the state of the lung region of interest includes, but is not limited to: normal and abnormal.
In some embodiments, the segmentation method may be one or more of: connected-region analysis, threshold-based methods, edge detection algorithms, level-set methods, active contour methods, and machine-learning-based segmentation methods that classify pixel by pixel or voxel by voxel.
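As a concrete illustration of two of the listed options, the following sketch combines a threshold-based method with connected-region analysis to produce a coarse lung mask from a CT slice. It assumes the input is in Hounsfield units; the -400 HU threshold and the 5 × 5 closing element are common heuristics, not values taken from the patent.

```python
# Minimal sketch: threshold-based segmentation + connected-region analysis.
import numpy as np
from scipy import ndimage

def segment_lung_mask(ct_slice: np.ndarray, hu_threshold: float = -400.0) -> np.ndarray:
    air = ct_slice < hu_threshold                 # air-filled pixels (lungs + background)
    labels, _ = ndimage.label(air)                # connected-region analysis
    # Discard components touching the image border (the air surrounding the body).
    border_labels = np.unique(np.concatenate([labels[0, :], labels[-1, :],
                                              labels[:, 0], labels[:, -1]]))
    mask = air & ~np.isin(labels, border_labels)
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))  # smooth edges
    return mask
```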
In some embodiments, the classification of the region-of-interest states is based on a machine learning method that takes features of a single image as input, or combines features of the same region at different phases of the breathing cycle as input.
In some embodiments, the machine-learning-based segmentation method may use one or more of: neural networks, support vector machines, random forests, adaptive boosting (AdaBoost), and other deep-learning-based methods.
In some embodiments, the deep-learning-based method may use one or more of: convolutional neural networks, deep Boltzmann machines, stacked denoising autoencoders, and deep belief networks.
In some embodiments, the lung function indices include: tidal volume (VT), inspiratory reserve volume (IRV), expiratory reserve volume (ERV), residual volume (RV), total lung capacity (TLC), inspiratory capacity (IC), functional residual capacity (FRC), vital capacity (VC), forced vital capacity (FVC), forced expiratory volume in one second (FEV1), forced expiratory flow (FEF), forced inspiratory flow (FIF), maximum voluntary ventilation (MVV), single-breath diffusing capacity of the lung for carbon monoxide (DLCO-SB), and alveolar-arterial oxygen partial pressure difference (AaDO2).
Drawings
The following drawings illustrate the invention in connection with the detailed description; they do not limit the scope of the invention, which is defined by the appended claims.
FIG. 1 is a schematic of the lung structure on which the design of the present invention is based.
FIG. 2 is a flowchart of the image-based lung function detection method of the present invention.
FIG. 3 is a schematic diagram of an embodiment of the method, comprising: the imaging device, the hardware and software components of the image-based lung function detection system, and the remote application interface device on which a user receives and views the lung function indices.
FIG. 4 is a schematic diagram of an embodiment of the present invention that uses a support vector machine method to segment lung regions.
FIG. 5 is a schematic diagram of an embodiment of the present invention that uses a deep convolutional neural network approach to segment lung regions.
FIG. 6 is a schematic diagram of an embodiment of the present invention that uses a deep convolutional neural network approach for state classification of lung regions of interest.
FIG. 7 is a schematic diagram of an embodiment of the present invention that uses a support vector machine method to calculate a lung function index.
Detailed Description
In the following description, reference is made to the accompanying drawings and to specific examples illustrating the preferred embodiments of the present invention, so that those skilled in the art can understand and practice the invention.
The innovative concepts embodied in the present invention can be realized in a wide variety of embodiments. The detailed description of the preferred embodiments should therefore not be read as limiting the claims, but as an aid to those skilled in the art in understanding the innovative concepts contained herein. In addition, the sizes and relative sizes of layers and parts in the schematic diagrams have been adjusted to avoid overlap.
Objects described in the schematic diagrams are indicated by reference numbers. Not all components are numbered, however, because: 1) information disclosed in the related art is not described in detail herein; and 2) parts that overlap with the context are not described again.
The term "comprising" is used herein to refer to the objects included in the invention. Unless explicitly stated otherwise in context, such recitations are non-exhaustive; omitting some information from the description is believed not to affect the understanding of the method by those skilled in the art.
The detailed description may use terms with an ordering (e.g., "first", "second") to facilitate listing the objects being described. These terms do not limit the composition or structure of the method of the invention; they are merely temporary labels that distinguish the described objects, and swapping the order of, say, the first and second items does not affect the description of the innovative concepts. Similarly, when "and/or" connects a group of statements, no particular combination or order of those statements is implied; any order of presentation is acceptable.
Unless a term is given a specific definition herein, terms in the detailed description are used in accordance with the customary usage of those skilled in the art to which the invention pertains. Where common everyday words are used to describe the invention, they should not be read in an idealized or overly formal sense unless expressly so defined; their meaning here follows the usage conventions of the field and of general-purpose dictionaries.
"pulmonary function" refers to the respiratory capacity of the lungs.
"pulmonary function test" or "PFT" is a complete assessment of the respiratory system, taking into account factors including: patient history, physical examination, chest X-ray examination, arterial blood gas analysis, and testing for lung function. The primary purpose of lung function testing is to determine the severity of lung injury. The lung function test has the functions of diagnosis and treatment coordination, and can help a clinician obtain some information about lung diseases of a detected object. Lung function detection is typically performed by a respiratory therapist using a spirometer or the like.
A "lung function index" is a set of measurements provided by a lung function test. These measurements include, but are not limited to: tidal Volume (VT), Inspiratory Reserve (IRV), Expiratory Reserve (ERV), Residual Volume (RV), total lung volume (TLC), inspiratory volume (IC), functional residual volume (FRC), Vital Capacity (VC), Forced Vital Capacity (FVC), forced expiratory volume for 1 second (FEV1), Forced Expiratory Flow (FEF), Forced Inspiratory Flow (FIFS), and maximum spontaneous ventilation (MVV). Other indices, such as FEV 1%, FEV 1% ═ FEV1/FVC, can be calculated from these indices to better describe an aspect of the patient's lung function.
The "alveolar sacs" or "small groups of alveoli" are located at the junction of the pulmonary artery and the pulmonary vein. Fresh air enters the alveolar sac from the ends of the bronchi and alveolar ducts, and is in gas exchange with the pulmonary artery and pulmonary vein. In most cases, the bronchioles are in the form of "branches," each of which terminates in several adjacent alveolar sacs.
The "alveolar state" is used to describe the behavior of the alveoli during respiration. When a patient can do normal daily activities, most of the alveoli of the patient should be in a "normal" state. The various states of "dysfunction" or "respiratory disorder" can be divided into four broad categories: obstructive diseases, such as emphysema; restrictive diseases, such as fibrosis; vascular diseases, such as pulmonary embolism; and other diseases. For example, smoking, air pollution or harsh working conditions can cause dust to collect in certain parts of the bronchi, particularly the bronchioles, and prevent air from entering and exiting the associated alveolar sacs, possibly leaving the alveoli therein in a collision state for a long period of time. Whereas premature infants lack so-called "alveolar type II cells" due to lung hypoplasia, possibly leading to alveolar collapse. Furthermore, an occlusion of a pulmonary artery or vein may obstruct or even stop the blood flow in that portion of the lung. Although this has a relatively small effect on the behavior of the alveoli, it significantly changes the appearance of this part of the blood vessel in medical images of the chest.
A "breathing cycle" or "inhale-exhale cycle" includes a full inhalation process followed by a full exhalation process. A person experiences many respiratory cycles per day. During inspiration, fresh air enters the alveolar region along the bronchi and distal alveolar ducts to inflate them. The alveoli that contain fresh air exchange gas with the pulmonary arteries and veins surrounding the alveoli. The exhalation process then begins and the alveoli contract, returning the air containing the exhaust gas to the atmosphere.
The "time point of a breathing cycle" refers to the time between the beginning and the end of the breathing cycle. Images shot at the same time phase point of different respiratory cycles are similar; images taken at different phase points differ significantly due to changes in alveolar volume.
A "diagnostic medical imaging technique" is a technique of taking a set of images of some organs or tissues inside the human body to provide a visualized processed object for clinical analysis or medical purposes. The technique can be used to reveal internal structures of the body hidden by skin and bones to facilitate disease diagnosis for a doctor. The technique can also be used to build a database describing normal anatomical and physiological features that can be contrasted to facilitate physician identification of abnormalities. In general, techniques for imaging excised organs and tissues for medical reasons are not considered diagnostic medical imaging techniques, but are considered part of the pathology. In a broader sense, diagnostic medical imaging techniques can be considered as belonging to the same field as radiology, which also includes: radiology in imaging techniques such as X-ray imaging, magnetic resonance imaging, medical ultrasound imaging, endoscopy, ultrasound elastography, tactile imaging, thermal imaging, medical photography, and the like. Hy and nuclear medicine functional imaging techniques such as Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT).
In particular, CT imaging generates images of human body structures by means of a computer-controlled X-ray irradiation process. In a CT scan, an X-ray tube moves around the patient and emits X-rays toward the body under computer control; the X-rays are attenuated during transmission. A set of linear detectors positioned on the other side of the body, in the path of the X-rays, receives the portion that passes through. Because different tissues attenuate X-rays differently, the intensity distribution reaching the detectors is non-uniform. The intensity data received by the detectors constitute projection data, from which a set of images is reconstructed using a back-projection algorithm to reflect the attenuation of the X-rays in the body, i.e., the tissue density distribution. Typically the result consists of a set of slices, each reflecting the density distribution within a slab of tissue of a certain thickness; a three-dimensional CT image is thus obtained.
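The projection-and-back-projection idea can be demonstrated with scikit-image's Radon transform utilities on a synthetic phantom; this is a textbook illustration, not the patent's reconstruction code.

```python
# Simulate projection data from a phantom, then reconstruct it with
# filtered back-projection (the "ramp" filter), mirroring the CT process
# described above.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                      # synthetic density map
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)              # simulated detector data
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
rms_error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```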
MRI uses a device that generates a strong magnetic field and radio waves to irradiate the patient positioned near the center of the bore, causing the patient's tissue to emit radio waves that carry information about itself. The frequency spectrum emitted by different tissues, including tumors, depends on their chemical composition and shows relatively large intensity at frequencies corresponding to particular constituents. An image of human tissue suitable for display can thus be obtained. Like a CT scan, MRI produces three-dimensional images of designated parts of the body; unlike CT, MRI clearly distinguishes soft tissues that have similar densities but different compositions.
PET scanning can generate images of dynamic chemical processes occurring within human tissue, for example sugar metabolism. Typically, a small amount of sugar is radiolabeled and mixed with ordinary sugar before the scan, and an injection solution made from the mixture is administered to the patient. Because tumor cells consume or absorb sugar at a higher rate than normal tissue, more of the radioactive marker accumulates near tumor sites. A PET scan can thus map the distribution of sugar within a tumor, or within the body, at a given time. In some embodiments, fusing CT scan results with PET images better distinguishes normal from abnormal tissue.
SPECT records data using radiotracers and scanning equipment and reconstructs two-dimensional or three-dimensional images with a computer. In a SPECT scan, a small amount of radiopharmaceutical is injected into an artery, a detector tracks the distribution of the radioactive material in the body, and a detailed image of the body's internal structures is reconstructed from the resulting data. SPECT scan results can describe the blood flow and metabolism of human tissue.
A "region of interest (ROI)" is a subset of samples selected for clinical purposes in a set of medical images. In the field of radiation therapy, this concept can be defined in a discretized manner: a sub-region comprising a set of pixels in a slice of the two-dimensional medical image, or a sub-region comprising a set of voxels in the reconstructed three-dimensional image; this concept can also be defined in a continuous manner: a region enclosed by a continuous closed curve in a slice of a two-dimensional medical image, or a region guarded by a closed curved surface in a reconstructed three-dimensional image. For example, the ROI may be a region containing alveoli or pulmonary vessels used to predict lung function.
"image segmentation" refers to identifying one or more objects in an image and presenting the boundaries of the area occupied by these objects. In the field of radiology, "image segmentation" refers to providing a set of corresponding ROIs for a set of medical images to provide valuable information for the diagnosis of disease. The result of "image segmentation" is a "segmentation", i.e., labeling pixels in an image with a set of ROI numbers that correspond one-to-one to different ROIs. Accurate segmentation helps to effectively diagnose the condition. And automatic segmentation helps to reduce the workload of reading and diagnosing.
The "machine learning method" refers to a computer science method proposed in 1959 by asture, zegmuir (Arthur Samuel) "in which a computer can learn without explicit programming. It has evolved from research on pattern recognition, and research on computational learning theory in the field of artificial intelligence. Research on machine learning methods explores such possibilities: a general algorithm is constructed for modeling in a data-driven manner based on input samples, so that the model can be predicted or selected using the same type of input data without relying on pre-designed static instructions. Machine learning methods may be used when conventional, precisely designed and programmed algorithms do not work well or are otherwise unusable. For example, semantic segmentation is performed on medical images. In the aspect of segmenting medical images, reference data containing labeling information of pixel scales can be used for training a machine learning model so as to perform pixel-level automatic labeling on medical images of the same type. In interpreting medical images, image data with case information that can be used as a label can be used to train a machine learning model to make a diagnosis about the condition of the medical image of the same kind.
The "support vector machine" (SVM) method refers to a supervised machine learning model for classification and regression analysis of target data. When training the model, the data in the training set need to be labeled, so that each data belongs to one of two categories. After the model is trained using such a training set, a non-probabilistic binary linear classifier can be obtained, i.e., one that can be used to classify new input data into only one of two classes. The SVM model may be mathematically viewed as a set of maps representing the input data as a set of points in data space; the parameters of the SVM model correspond to a connected banded region in the data space. Under proper mapping, the width of the region takes a maximum value and data points belonging to the same class are located on the same side of the region. Thus, new data can be projected to a point in the data space using such a mapping and sorted according to the point's relationship to the banded region. Further, this data space may be implicitly mapped to a high-dimensional space using so-called "kernel techniques" so that the SVM model may perform linear classification, i.e., non-linear classification, on the input data within the high-dimensional space.
"deep learning method" refers to a class of machine learning algorithms that have the following characteristics: (1) feature extraction and conversion is performed using a cascade of multiple layers of nonlinear processing units. Each layer uses the output of the previous layer as input. Algorithms, which may be supervised or unsupervised, may be used for pattern analysis (unsupervised) and classification (supervised); (2) extracting features from the input data at multiple levels in an unsupervised manner; extracting the higher-level features from the lower-level features, thereby forming a hierarchical feature structure from bottom to top; (3) the method can be regarded as belonging to the field of machine learning with other non-deep machine learning methods; (4) from bottom to top, the hierarchical feature structure can be regarded as corresponding to a set of concepts with different abstraction degrees one by one. The observations used to train the deep learning model, such as an image, may be described in different ways. For example, it may be represented as a set of multi-channel pixels, or a set of geometric figures.
"deep convolutional neural network" refers to a feedforward artificial neural network whose connection pattern between neurons is inspired by animal visual cortex tissue. Individual neurons respond to stimuli from a limited spatial region called the receptive field; the receptive fields of different neurons may partially overlap. The process by which a single neuron reacts to stimuli in its receptive field and forms an output signal can be approximated as a mathematical convolution operation. Convolutional neural networks can be viewed, approximately, as a simulation of the biologically visual forming process, a variant of multi-layered perceptrons for processing data and forming outputs with minimal computational effort.
A set of embodiments is described below to illustrate the accurate automatic lung function detection method of the present invention. The method can perform automatic lung function detection on a subject and generate a set of diagnostic indices.
A set of figures is provided herein to illustrate embodiments of the invention and the concepts they follow. FIG. 1 is a schematic of the lung structure on which the design of the invention is based. The invention acquires and processes a set of chest medical images according to the flowchart shown in FIG. 2. FIG. 3 is a system diagram of one embodiment. In particular embodiments, the machine learning methods used for automatic lung function detection include, but are not limited to: segmenting the lungs and intrapulmonary regions of interest in the chest image with an SVM model 400, segmenting them with a DCNN model 500, classifying the states of the lung regions of interest with a DCNN model 600, and predicting lung function indices with an SVM model 700.
FIG. 1 is a schematic diagram 100 of the human lung structure. The figure omits some anatomical structures that do not participate in respiration. The anatomical structures in this figure that do not contribute to lung volume include: the larynx 110, the trachea 120, and the bronchi 130, comprising the primary bronchi 131, secondary bronchi 132, tertiary bronchi 133, and bronchioles 134. The remaining lung tissue depicted contains a large number of alveoli. Since a single alveolus is about 0.2 mm in diameter, the area covered by one pixel in a chest medical image usually contains a group of adjacent alveoli, i.e., an alveolar sac connected to the end of a bronchiole in the tree-like distribution. The state of these alveolar sacs can be identified at pixel scale in the lung region of a chest image using appropriate methods.
In practice, a "slice" in the target image set contains information captured from human tissue of a certain thickness, depending on the imaging modality, so the image within a slice becomes more blurred as the thickness increases. Three-dimensional reconstruction of the alveolar region from the information in the chest images is therefore necessary for accurate prediction of the lung function indices.
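The dependence on slice thickness is easy to see in how a volume is estimated from a reconstructed mask: the voxel count is scaled by the physical voxel size, so coarse slice spacing coarsens the estimate. A sketch with illustrative spacing values:

```python
# Volume estimate from a 3-D binary mask: count voxels, multiply by the
# physical voxel size. The mask and spacing below are illustrative only.
import numpy as np

mask = np.zeros((64, 128, 128), dtype=bool)     # stand-in for a 3-D alveolar ROI
mask[16:48, 32:96, 32:96] = True                # a segmented block of voxels

spacing_mm = (2.5, 0.8, 0.8)                    # (slice thickness, row, col) in mm
voxel_volume_ml = np.prod(spacing_mm) / 1000.0  # mm^3 -> mL
region_volume_ml = mask.sum() * voxel_volume_ml
print(f"estimated region volume: {region_volume_ml:.0f} mL")   # ~210 mL
```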
Fig. 2 is a flowchart 200 of lung function detection based on diagnostic imaging and machine learning methods.
The flowchart starts in step 201.
In step 210, a set of diagnostic chest images including the lung region 100 is acquired as a target set.
In step 220, the lung regions are segmented in the chest image using a set of methods.
In step 221, a region of interest containing alveoli is segmented in the lung region of the chest image.
In step 222, the regions of interest containing alveoli are classified by state into two or more groups according to the information in the chest image, for example into functionally normal and functionally abnormal.
In step 230, a three-dimensional alveolar region is reconstructed based on the information of the chest image, the region of interest including the alveoli and the state classification thereof.
In step 231, features are extracted from the chest image, the reconstructed three-dimensional alveolar region, and the state classification of the alveolar region.
In step 232, a set of lung function indicators is calculated based on the extracted features.
The workflow ends at step 202.
FIG. 3 is a diagram 300 of a system and related apparatus for processing chest images for lung function detection, in accordance with an embodiment of the present invention. Following the exemplary workflow 200, when the system in this embodiment is started, the lung function detection system 302 acquires a set of diagnostic images from the imaging device 301 as the target set via the DICOM protocol 304 and, after processing, generates a set of indices 305. A user at the remote user interface device 303 can view these indices 305 through a user interface 311.
In one embodiment, the components of the lung function detection system 302 include: a processor 310, a user interface 311, a storage device 312, and a memory 313. The processor 310 executes the code of the lung function detection engine after it is read into the memory 313; the user can operate the engine and view results through the user interface 311. The storage device 312 holds a set of configuration files 320 for these programs and/or a set of machine learning models used by the engine. When the system starts, the programs and files are read into memory and used under the control of the PFT engine. A target set 321 of diagnostic images acquired from the imaging device is also stored on the storage device 312. Once loaded, the PFT engine invokes its components in turn to process the diagnostic images 321 and generate a set of indices, which the user can view on the user interface 311 or the remote user interface device 303.
In particular, the lung function detection engine 313 in one embodiment is composed of three software components: a segmentation component 330, a classification component 331, and an index component 332. When the system 302 in this embodiment starts, a set of read operations loads the engine's programs from the storage device 312 into memory: operation 340 reads the segmentation component 330 and its configuration file, operation 341 reads the classification component 331, and operation 342 reads the index component 332. These components are invoked in sequence after the lung function detection system 302 is started.
First, the lung function detection engine 313 reads the diagnostic chest image 321 and passes it to the segmentation component 330, which generates a set of labeled images representing the regions of interest of the alveolar region.
Next, the lung function detection engine 313 passes the diagnostic chest image 321 and the labeled images to the classification component 331, which generates a set of labeled images representing the state classification of the alveoli within the regions of interest. The states include: functionally normal, functionally abnormal, and others.
Then, the lung function detection engine 313 passes the diagnostic chest image 321, the region-of-interest labeled image, and the region-of-interest state labeled image to the index component 332, which reconstructs the three-dimensional alveolar region from these inputs, identifies and extracts alveolar sac features, and calculates the lung function indices from the features and inputs.
Finally, the lung function detection engine 313 passes the lung function indices to the user interface 311 or, as required, on to the remote user interface device 303.
FIG. 4 is a diagram 400 of segmenting lung regions and intrapulmonary regions of interest using a machine learning model designed according to the support vector machine (SVM) method, in one embodiment. Before the model is used for segmentation, it must be trained, or initialized from the settings of a trained model. Training the SVM model involves three parts.
At part 411, a number of local images are extracted from the lung image 420 to form a data set used as model input. Data points from the same local area fall into two classes: inside 432 and outside 430 the designated region; a boundary 431 separates the two classes. This classification serves as the label for the training data.
At part 412, the SVM model transforms the input data points into a set of high-dimensional data points using a kernel function 440. Because the input data points correspond one-to-one to the high-dimensional points, each corresponding pair shares the same label.
At part 413, a combination of hyperplane parameters is sought in the high-dimensional space such that the hyperplane's classification of the high-dimensional points is consistent with the labels. The band separating the two classes then has maximum width, with a set of high-dimensional points lying on each of its two boundaries. If screening finds no usable hyperplane 451, the process returns to part 412 and a new kernel function 440 is chosen.
In actual use, an SVM model containing the kernel 440 and an expression of the hyperplane 451 may be used to classify points in the input image.
The segmentation component 330 can segment the lung region and the intrapulmonary regions of interest using a trained SVM model, or one reconstructed from stored model settings. The component extracts a set of data 420 from the input image and passes it to the SVM model, which processes it with the kernel function 440 and hyperplane expression 451, splitting the input into points inside and outside the specified region of interest. The classification result constitutes a labeled image 403 of the input image. The lung function detection engine 313 collects the result and passes it on for further processing.
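One plausible patch-based reading of this component is sketched below: a small neighbourhood around each pixel is flattened into a feature vector, classified as inside or outside the ROI, and the per-pixel labels are assembled into a labeled image (cf. labeled image 403). The trained classifier `clf` is assumed to have been fitted on patches of the same size; this is an interpretation, not the patented component.

```python
# Per-pixel SVM labeling via sliding-window patches.
import numpy as np
from sklearn.svm import SVC

def svm_label_image(image: np.ndarray, clf: SVC, patch: int = 5) -> np.ndarray:
    """Classify every pixel from its patch-by-patch neighbourhood. `clf`
    must have been fitted on flattened vectors of length patch * patch."""
    pad = patch // 2
    padded = np.pad(image, pad, mode="reflect")     # keep border pixels classifiable
    h, w = image.shape
    feats = np.stack([padded[i:i + patch, j:j + patch].ravel()
                      for i in range(h) for j in range(w)])
    return clf.predict(feats).reshape(h, w)         # the labeled image
```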
FIG. 5 is a diagram 500 of a method for segmenting lung regions and intrapulmonary regions of interest using a deep convolutional neural network (DCNN) based approach, in one embodiment. The network in this diagram contains three kinds of transforms across seventeen layers of neurons, which successively transform the input chest image into sixteen multi-channel feature maps as intermediate results and one multi-channel score map as output. Each channel of the output score map can be viewed as a probability distribution representing the prediction of the region of interest corresponding to that channel. The multi-channel score map can therefore be combined into a labeled image constituting the DCNN model's segmentation of the corresponding chest image in the input.
The first transform is convolution + ReLU 501. Take the layer that transforms feature map 531 into feature map 532 as an example: receptive fields of size 3 × 3 × (number of channels) are extracted from feature map 531 by traversal with stride 1, and each field is processed by a convolution filter, i.e., convolved with a kernel matrix of the same size. The convolution result is passed through a ReLU filter, and the output becomes the pixel value, in the channel corresponding to that filter group, at the multi-channel pixel corresponding to the receptive field in the layer's output feature map. By applying this transform to every receptive field traversed from feature map 531, a multi-channel feature map with the same spatial size as the input is obtained as the layer's output, namely feature map 532.
The second transform is pooling 502. Take the layer that transforms feature map 543 into feature map 551 as an example: receptive fields of size 2 × 2 × (number of channels) are extracted from feature map 543 by traversal with stride 2, producing feature map 551, one quarter the spatial size, as the layer's output.
The third transform occurs only in the last layer; it upsamples the coarse normalized score map 573 into a multi-channel score map with the same spatial size as the fused input image, which serves as the output of the layer and of the DCNN. The output score map can be used as the segmentation 503 of the intrapulmonary alveolar regions in the input image.
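The three transforms can be expressed directly with PyTorch primitives, as in the following sketch; the channel counts are arbitrary and only the shape behaviour described above is demonstrated.

```python
# The three transforms: 3x3 stride-1 convolution + ReLU (spatial size kept
# by padding 1), 2x2 stride-2 max pooling (quarters the area), and bilinear
# upsampling back to the input resolution.
import torch
import torch.nn as nn

x = torch.randn(1, 2, 512, 512)                    # fused 2-channel input

conv_relu = nn.Sequential(nn.Conv2d(2, 64, kernel_size=3, stride=1, padding=1),
                          nn.ReLU(inplace=True))
pool = nn.MaxPool2d(kernel_size=2, stride=2)
upsample = nn.Upsample(size=(512, 512), mode="bilinear", align_corners=False)

f = conv_relu(x)    # [1, 64, 512, 512]  — same spatial size as the input
p = pool(f)         # [1, 64, 256, 256]  — one quarter the area
s = upsample(p)     # [1, 64, 512, 512]  — back to the input resolution
print(f.shape, p.shape, s.shape)
```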
After the segmentation component 330 has reconstructed a usable DCNN model from the file read from the storage device 312 by operation 340, the component processes the input image in the following steps:
at the input section 510:
the component reads an input image 511 of size [512 × 512 × 1] and processes it with a set of non-deep-learning methods to obtain an initial labeled image 512 of size [512 × 512 × 1]. The input image 511 and the initial labeled image 512 are merged into a fused image of size [512 × 512 × 2].
At element 520:
the first layer of transforms in section 520 comprises 64 sets of convolution + ReLU filters that receive the fused image 510 as input and generate a [512 × 512 × 64] feature map 521 as output;
the second layer comprises 64 sets of convolution + ReLU filters that receive feature map 521 as input and generate a [512 × 512 × 64] feature map 522 as output;
the third layer comprises 128 sets of pooling filters that receive feature map 522 as input and generate a [256 × 256 × 128] feature map 531 as output;
at part 530:
the first layer of transforms in section 530 comprises 128 sets of convolution + ReLU filters that receive feature map 531 as input and generate a [256 × 256 × 128] feature map 532 as output;
the second layer comprises 256 sets of pooling filters that receive feature map 532 as input and generate a [128 × 128 × 256] feature map 541 as output;
at part 540:
the first layer of transforms in section 540 comprises 256 sets of convolution + ReLU filters that receive feature map 541 as input and generate a [128 × 128 × 256] feature map 542 as output;
the second layer comprises 256 sets of convolution + ReLU filters that receive feature map 542 as input and generate a [128 × 128 × 256] feature map 543 as output;
the third layer comprises 512 sets of pooling filters that receive feature map 543 as input and generate a [64 × 64 × 512] feature map 551 as output;
at part 550:
the first layer of transforms in section 550 comprises 512 sets of convolution + ReLU filters that receive feature map 551 as input and generate a [64 × 64 × 512] feature map 552 as output;
the second layer comprises 512 sets of convolution + ReLU filters that receive feature map 552 as input and generate a [64 × 64 × 512] feature map 553 as output;
the third layer comprises 512 sets of pooling filters that receive feature map 553 as input and generate a [32 × 32 × 512] feature map 561 as output;
at element 560:
the first layer of transforms in section 560 comprises 512 sets of convolution + ReLU filters that receive feature map 561 as input and generate a [32 × 32 × 512] feature map 562 as output;
the second layer comprises 512 sets of convolution + ReLU filters that receive feature map 562 as input and generate a [32 × 32 × 512] feature map 563 as output;
the third layer comprises 4096 sets of pooling filters that receive feature map 563 as input and generate a [16 × 16 × 4096] feature map 571 as output;
at part 570:
the first layer of transforms in section 570 comprises 4096 sets of convolution + ReLU filters that receive feature map 571 as input and generate a [16 × 16 × 4096] feature map 572 as output;
the second layer comprises a set of convolution + ReLU filters, equal in number to the number of regions of interest, that receives feature map 572 as input and generates a [16 × 16 × (number of regions of interest)] score map 573 as output.
At part 580:
section 580 uses upsampling and interpolation to receive score map 573 as input and generate a labeled image with the same spatial size as the input image 510. Each channel of score map 573 can be used to locate the region of interest corresponding to that channel in the input image; these locations constitute the segmentation 580 of the input image by the DCNN model described here.
Thus, the DCNN model receives the medical image 511 and the corresponding initial labeled image 512 as input and generates the labeled image 580 as the model's segmentation of the medical image 511.
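For orientation, the following compact PyTorch module is a sketch inspired by the VGG/FCN-style layout walked through above (two-channel fused input, conv + ReLU stacks, pooling stages, and final upsampling to per-pixel ROI scores). The description above attributes the channel increases to the pooling layers; since standard pooling preserves channel count, this sketch moves each increase into the following convolution. It is an interpretation, not the patented network.

```python
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int, n: int) -> nn.Sequential:
    """n repetitions of 3x3 convolution + ReLU, spatial size preserved."""
    layers = []
    for _ in range(n):
        layers += [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True)]
        cin = cout
    return nn.Sequential(*layers)

class LungFCN(nn.Module):
    def __init__(self, num_rois: int):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(2, 64, 2),    nn.MaxPool2d(2),   # 512 -> 256
            conv_block(64, 128, 1),  nn.MaxPool2d(2),   # 256 -> 128
            conv_block(128, 256, 2), nn.MaxPool2d(2),   # 128 -> 64
            conv_block(256, 512, 2), nn.MaxPool2d(2),   # 64  -> 32
            conv_block(512, 512, 2), nn.MaxPool2d(2),   # 32  -> 16
            conv_block(512, 4096, 1),
            nn.Conv2d(4096, num_rois, 1),               # per-ROI score map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.features(x)                       # [N, num_rois, 16, 16]
        return nn.functional.interpolate(scores, size=x.shape[-2:],
                                         mode="bilinear", align_corners=False)

net = LungFCN(num_rois=4)   # e.g. trachea, bronchi, alveolar sacs, vessels
out = net(torch.randn(1, 2, 512, 512))
print(out.shape)            # torch.Size([1, 4, 512, 512])
```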
FIG. 6 is a diagram 600 illustrating state classification of intrapulmonary regions of interest using a deep convolutional neural network (DCNN) based method, in one embodiment. The network in this diagram contains three kinds of transforms across seventeen layers of neurons, which successively transform the input chest image into sixteen multi-channel feature maps as intermediate results and one score map as output. The output score map can be used as the DCNN model's prediction of the state of the alveolar regions in the input image.
The first transform is convolution + ReLU 601. Take the layer that transforms feature map 631 into feature map 632 as an example: receptive fields of size 3 × 3 × (number of channels) are extracted from feature map 631 by traversal with stride 1, and each field is convolved with a kernel matrix of the same size and passed through a ReLU filter; the result becomes the pixel value, in the channel corresponding to that filter group, at the multi-channel pixel corresponding to the receptive field in the layer's output. Applying this transform to every receptive field traversed from feature map 631 yields a multi-channel feature map with the same spatial size as the input image as the layer's output, namely feature map 632.
The second transform is pooling 602. Take the layer that transforms feature map 643 into feature map 651 as an example: receptive fields of size 2 × 2 × (number of channels) are extracted from feature map 643 by traversal with stride 2, producing feature map 651, one quarter the spatial size, as the layer's output.
The third transform occurs only in the last layer; it upsamples the coarse score map 673 into a score map with the same spatial size as the fused input image, which serves as the output of the layer and of the DCNN. The output score map can be used as the DCNN model's prediction 603 of the state of the intrapulmonary alveolar regions in the input image.
After the classification component 331 has reconstructed a usable DCNN model from the file read from the storage device 312 by operation 341, the component processes the input image in the following steps:
at the input section 610:
the component reads as input a fused image 610 of size [512 × 512 × 2]. One channel 612 is a chest image taken from the target set 321; the other channel 611 is a labeled image from the segmentation component 330.
At element 620:
the first layer of transforms in section 620 comprises 64 sets of convolution + ReLU filters that receive the fused image 610 as input and generate a [512 × 512 × 64] feature map 621 as output;
the second layer comprises 64 sets of convolution + ReLU filters that receive feature map 621 as input and generate a [512 × 512 × 64] feature map 622 as output;
the third layer comprises 128 sets of pooling filters that receive feature map 622 as input and generate a [256 × 256 × 128] feature map 631 as output;
at element 630:
the first layer of transforms in section 630 comprises 128 sets of convolution + ReLU filters that receive feature map 631 as input and generate a [256 × 256 × 128] feature map 632 as output;
the second layer comprises 256 sets of pooling filters that receive feature map 632 as input and generate a [128 × 128 × 256] feature map 641 as output;
at part 640:
the first layer of transforms in section 640 comprises 256 sets of convolution + ReLU filters that receive feature map 641 as input and generate a [128 × 128 × 256] feature map 642 as output;
the second layer comprises 256 sets of convolution + ReLU filters that receive feature map 642 as input and generate a [128 × 128 × 256] feature map 643 as output;
the third layer comprises 512 sets of pooling filters that receive feature map 643 as input and generate a [64 × 64 × 512] feature map 651 as output;
at element 650:
the first layer of transforms in section 650 comprises 512 sets of convolution + ReLU filters that receive feature map 651 as input and generate a [64 × 64 × 512] feature map 652 as output;
the second layer comprises 512 sets of convolution + ReLU filters that receive feature map 652 as input and generate a [64 × 64 × 512] feature map 653 as output;
the third layer comprises 512 sets of pooling filters that receive feature map 653 as input and generate a [32 × 32 × 512] feature map 661 as output;
at part 660:
the first layer of transforms in section 660 comprises 512 sets of convolution + ReLU filters that receive feature map 661 as input and generate a [32 × 32 × 512] feature map 662 as output;
the second layer comprises 512 sets of convolution + ReLU filters that receive feature map 662 as input and generate a [32 × 32 × 512] feature map 663 as output;
the third layer comprises 4096 sets of pooling filters that receive feature map 663 as input and generate a [16 × 16 × 4096] feature map 671 as output;
at part 670:
the first layer of transforms in section 670 comprises 4096 sets of convolution + ReLU filters that receive feature map 671 as input and generate a [16 × 16 × 4096] feature map 672 as output;
the second layer comprises a set of convolution + ReLU filters, equal in number to the number of regions of interest, that receives feature map 672 as input and generates a [16 × 16 × (number of regions of interest)] score map 673 as output.
At section 680:
Section 680 applies the upsampling and interpolation methods described above, receiving feature map 673 as input and generating a labeled image with the same spatial dimensions as the input image 610. Each channel of feature map 673 describes the state classification of the region of interest corresponding to that channel in the input image, for example: alveoli normal or alveoli abnormal. This classification of the alveolar region states constitutes prediction 680 of the alveolar region states in the input image by the DCNN model described here.
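For concreteness, the following is a minimal PyTorch sketch of an encoder with the layer and channel progression walked through above (sections 620 through 680). The class and layer names and the number of regions of interest are illustrative assumptions, and the sketch follows common practice by changing channel counts in the convolution layers rather than in the pooling layers; it is one plausible reading of the described architecture, not the exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedImageFCN(nn.Module):
    # Sections 620-680 as a VGG-style encoder over the two-channel fused
    # image, a 1x1 scoring layer with one channel per region of interest,
    # and upsampling back to the input resolution.
    def __init__(self, n_regions=4):  # n_regions is an assumed value
        super().__init__()

        def conv_block(c_in, c_out, n_convs):
            layers = []
            for i in range(n_convs):
                layers.append(nn.Conv2d(c_in if i == 0 else c_out, c_out,
                                        kernel_size=3, padding=1))
                layers.append(nn.ReLU(inplace=True))
            return nn.Sequential(*layers)

        self.sec620 = conv_block(2, 64, 2)
        self.sec630 = conv_block(64, 128, 1)
        self.sec640 = conv_block(128, 256, 2)
        self.sec650 = conv_block(256, 512, 2)
        self.sec660 = conv_block(512, 512, 2)
        self.sec670 = nn.Sequential(
            nn.Conv2d(512, 4096, kernel_size=1), nn.ReLU(inplace=True))
        self.score = nn.Conv2d(4096, n_regions, kernel_size=1)

    def forward(self, x):                    # x: [N, 2, 512, 512]
        x = F.max_pool2d(self.sec620(x), 2)  # -> [N, 64, 256, 256]
        x = F.max_pool2d(self.sec630(x), 2)  # -> [N, 128, 128, 128]
        x = F.max_pool2d(self.sec640(x), 2)  # -> [N, 256, 64, 64]
        x = F.max_pool2d(self.sec650(x), 2)  # -> [N, 512, 32, 32]
        x = F.max_pool2d(self.sec660(x), 2)  # -> [N, 512, 16, 16]
        x = self.score(self.sec670(x))       # -> [N, n_regions, 16, 16]
        return F.interpolate(x, size=(512, 512),
                             mode="bilinear", align_corners=False)

Calling FusedImageFCN()(torch.randn(1, 2, 512, 512)) then yields a [1, 4, 512, 512] score map whose spatial size matches the fused image 610, as in section 680.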
FIG. 7 is a diagram 700 of reconstructing the three-dimensional alveolar region, extracting features, and further computing lung function indices using a machine learning model designed according to the support vector machine (SVM) method, in one embodiment. Before the model is used to calculate a lung function index, it must be trained, or initialized from the settings of a previously trained model. The process of training an SVM model comprises three parts.
At part 711, a set of data is extracted from the lung image 720 to form a set of local images as input to the model. Combined with the alveolar region of interest and its state classification 702, the data points from the same local region fall into two categories: those inside 730 and those outside 732 the designated region, with a boundary 731 separating them. This classification serves as the label for the training data. A more complete annotation can be obtained when the alveolar regions of interest and their state classifications 703 for other phases are additionally provided as input.
At part 712, the SVM model transforms the input data points into a set of high-dimensional data points using a kernel function 740. Because the input data points correspond one to one with the high-dimensional data points, each corresponding pair carries the same label.
At part 713, a combination of parameters for the hyperplane equation is searched in the high-dimensional space containing the high-dimensional data points, such that the hyperplane's classification of those points is consistent with the label information. At the solution, the region separating the two classes of data points has maximum width, and a group of high-dimensional data points (the support vectors) lies on each boundary of the separating region. If the screening determines that no usable hyperplane 751 can be found, the process returns to part 712 and a different kernel function 740 is selected.
In actual use, an SVM model comprising the expressions of the kernel function 740 and the hyperplane 751 can be used to classify points in the input image.
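As a minimal sketch of this training-and-use cycle, assuming scikit-learn's SVC and synthetic stand-in data (the kernel choice, parameters, and geometry below are illustrative only):

import numpy as np
from sklearn.svm import SVC

# Stand-in for part 711: sampled point coordinates, labeled 1 if inside
# the alveolar region of interest (730) and 0 if outside (732).
rng = np.random.default_rng(0)
points = rng.uniform(0, 512, size=(2000, 3))      # (x, y, z) samples
center = np.array([256.0, 256.0, 256.0])
labels = (np.linalg.norm(points - center, axis=1) < 80).astype(int)

# Parts 712-713: the kernel function (740) implicitly maps points to a
# high-dimensional space, where a maximum-margin hyperplane (751) is fit.
svm = SVC(kernel="rbf", C=1.0, gamma="scale")
svm.fit(points, labels)

# In use, the trained model classifies new points as inside/outside the
# region; the support vectors lie on the margin boundaries.
print(svm.predict(rng.uniform(0, 512, size=(5, 3))))
print("support vectors:", svm.support_vectors_.shape)

If training yields poor separation, a different kernel can be tried, mirroring the return from part 713 to part 712 described above.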
The indicator component 332 can reconstruct the three-dimensional alveolar region using an SVM model that has been trained or reconstructed from the settings of a trained model. The component extracts a set of data 720 from the input image and passes it to the SVM model, which processes the data using the kernel function 740 and the hyperplane expression 751, dividing the input data 720 into data points inside and outside the specified region of interest. The classification results form a labeled image of the three-dimensional alveolar region. The lung function detection engine 313 extracts features related to the alveolar region from the fused image 701 of the input image and the lung interior region of interest, the alveolar region-of-interest state classification 702 and/or the alveolar region-of-interest state classifications 703 of other phases, and the labeled image of the three-dimensional alveolar region provided by the SVM model, and calculates a lung function index 705 from the extracted features.
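As a hedged illustration of how one feature might be computed from the SVM's labeled three-dimensional image, the sketch below (the voxel spacing, array names, and two-phase example are assumptions) counts labeled voxels to obtain a volume, then differences two respiratory phases into a tidal-volume-like quantity:

import numpy as np

def region_volume_ml(label_volume, voxel_spacing_mm=(1.0, 1.0, 1.0)):
    # Volume in millilitres of voxels labeled 1 (inside the reconstructed
    # three-dimensional alveolar region).
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return float(label_volume.sum()) * voxel_mm3 / 1000.0  # mm^3 -> mL

# Illustrative labeled volumes at two phases of the respiratory cycle.
insp = np.zeros((128, 128, 128), dtype=np.uint8)
insp[20:90, 20:90, 20:90] = 1                 # end-inspiration region
expi = np.zeros((128, 128, 128), dtype=np.uint8)
expi[30:80, 30:80, 30:80] = 1                 # end-expiration region

vt_like = region_volume_ml(insp) - region_volume_ml(expi)
print(f"tidal-volume-like estimate: {vt_like:.1f} mL")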
Finally, the lung function detection engine 313 passes the lung function index to the user interface 311 or forwards it to the remote user interface device 303, as required.
Although specific embodiments of the invention have been described herein with reference to specific examples, those skilled in the art will understand that various embodiments serving the same purpose can be made by substituting equivalent components or other methods, depending on the specific environment of use, the requirements, the available materials, the composition of the processing objects, or the workflow, without departing from the innovative concepts and methods of the present invention. Such modifications are intended to fall within the scope of the appended claims.

Claims (9)

1. A method for measuring a lung function index based on diagnostic images and machine learning, comprising the following steps:
acquiring a group of single-phase diagnostic chest images, or sequentially acquiring a plurality of groups of diagnostic chest images at different phases, using an imaging device; segmenting the images and determining the left and right lung regions; further segmenting the lung regions to obtain a group of lung regions of interest; classifying the states of the regions of interest; performing three-dimensional reconstruction of the regions of interest; extracting a group of features according to the structure and state classification of the regions of interest; and calculating the lung function index from the extracted features.
2. The method of claim 1, wherein the diagnostic chest images may be obtained using one or more of the following radiological medical imaging modalities: digital radiography (DR), computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), ultrasound, or single photon emission computed tomography (SPECT).
3. The method of claim 1, wherein the lung regions of interest comprise: the trachea, bronchi, alveolar sacs, and pulmonary vessels.
4. The method of claim 1, wherein the state classification of a lung region of interest includes, but is not limited to: normal and abnormal.
5. The method of claim 1, wherein the segmentation method may consist of one or more selected methods, including: connected-region analysis methods, threshold-based methods, edge detection algorithms, level set methods, active contour methods, and segmentation methods based on machine learning models for pixel-by-pixel or voxel-by-voxel classification.
6. The method of claim 1, wherein the region-of-interest state classification method is based on machine learning and takes the features of a single image as input, or combines as input the features of the same region at different phases of the respiratory process.
7. The machine-learning-based methods of claims 1, 5 and 6 may consist of one or more methods, including: neural networks, support vector machines, random forest methods, adaptive boosting (AdaBoost), and other deep-learning-based methods.
8. The deep-learning-based method of claim 7 may consist of one or more methods, including: convolutional neural network methods, deep Boltzmann machine methods, stacked denoising autoencoder methods, and deep belief network methods.
9. The method of claim 1, wherein the lung function index comprises: tidal volume (VT), inspiratory reserve volume (IRV), expiratory reserve volume (ERV), residual volume (RV), total lung capacity (TLC), inspiratory capacity (IC), functional residual capacity (FRC), vital capacity (VC), forced vital capacity (FVC), forced expiratory volume in one second (FEV1), forced expiratory flow (FEF), forced inspiratory flow (FIF), maximal voluntary ventilation (MVV), single-breath diffusing capacity of the lung for carbon monoxide (DLCO-SB), and alveolar-arterial oxygen partial pressure difference (AaDO2).
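For reference, the volumes and capacities enumerated in claim 9 are related by standard spirometric identities (textbook pulmonary physiology, not taken from this application):

IC = VT + IRV
VC = IRV + VT + ERV
FRC = ERV + RV
TLC = VC + RV = IC + FRC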
CN202010314852.3A 2020-04-14 2020-04-14 Method for measuring lung function index based on diagnostic image and machine learning Pending CN111598895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010314852.3A CN111598895A (en) 2020-04-14 2020-04-14 Method for measuring lung function index based on diagnostic image and machine learning

Publications (1)

Publication Number Publication Date
CN111598895A (en) 2020-08-28

Family

ID=72181765

Country Status (1)

Country Link
CN (1) CN111598895A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040101176A1 (en) * 2002-11-27 2004-05-27 General Electric Company Method and system for measuring disease relevant tissue changes
CN103068312A (en) * 2010-08-27 2013-04-24 柯尼卡美能达医疗印刷器材株式会社 Diagnosis assistance system and program
CN105101878A (en) * 2013-04-05 2015-11-25 东芝医疗系统株式会社 Medical image processing apparatus and medical image processing method
CN106934228A (en) * 2017-03-06 2017-07-07 杭州健培科技有限公司 Lung's pneumothorax CT image classification diagnostic methods based on machine learning
CN110619639A (en) * 2019-08-26 2019-12-27 苏州同调医学科技有限公司 Method for segmenting radiotherapy image by combining deep neural network and probability map model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
许友彬; 李彬; 刘霜纯; 张鸣生; 王立非; 田联房: "Development and design of a quantitative lung function analysis system based on lung tissue segmentation" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102333A (en) * 2020-09-02 2020-12-18 合肥工业大学 Ultrasonic region segmentation method and system for B-ultrasonic DICOM (digital imaging and communications in medicine) image
CN112102333B (en) * 2020-09-02 2022-11-04 合肥工业大学 Ultrasonic region segmentation method and system for B-ultrasonic DICOM (digital imaging and communications in medicine) image
CN113823413A (en) * 2021-10-22 2021-12-21 上海长征医院 Lung function small airway disease prediction system, method, medium and electronic device
CN116564527A (en) * 2023-07-11 2023-08-08 南京裕隆生物医学发展有限公司 Respiratory health analysis method based on physiological feature deep learning
CN116564527B (en) * 2023-07-11 2023-09-15 南京裕隆生物医学发展有限公司 Respiratory health analysis method based on physiological feature deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200828