CN117218133A - Lung image processing method and device, electronic equipment and storage medium

Info

Publication number: CN117218133A
Application number: CN202310941660.9A
Authority: CN (China)
Legal status: Pending
Prior art keywords: image, segmentation, lung, images, pulmonary
Other languages: Chinese (zh)
Inventors: 齐守良, 王美欢, 吴雅楠, 赵水清
Original Assignee: 东北大学 (Northeastern University)

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure relates to a lung image processing method in the technical field of medical image processing, including: training a first segmentation model by using a first number of first lung images and corresponding lung vessel label images; performing lung vessel segmentation on a second number of second lung images based on the trained first segmentation model to obtain corresponding first lung vessel segmentation images; selecting the corresponding first lung vessel segmentation images to obtain selected first lung vessel segmentation images; training a second segmentation model by using the first number of first lung images and the corresponding lung vessel label images, the selected first lung vessel segmentation images and the corresponding second lung images; and performing pulmonary vessel segmentation on the third number of second lung images remaining after the selection, based on the trained second segmentation model, to obtain corresponding second pulmonary vessel segmentation images. The embodiments of the present disclosure may enable accurate segmentation of pulmonary blood vessels from only a small amount of labeled data.

Description

Lung image processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of medical image processing, and in particular relates to a lung image processing method and device, electronic equipment and a storage medium.
Background
Medical image analysis has become an important field of medical research and clinical application in recent years. Among the many medical image processing techniques, image segmentation, as a fundamental and critical technique, has long been a focus of research. By dividing an image into a plurality of regions with related characteristics, image segmentation enables a physician to identify and analyze structures and features in the image more clearly. In this context, accurately segmenting the vascular structure from computed tomography (Computed Tomography, CT) images is particularly important. Blood vessels are important components of the human circulatory system; through vessel segmentation of CT images, a physician can intuitively observe the morphology, thickness and branching of the vessels and their relation to surrounding structures. This has profound implications for the diagnosis of circulatory diseases such as coronary artery disease, arteriosclerosis and cerebrovascular disease. The accuracy of vessel segmentation directly affects the accuracy and timeliness of diagnosis. Interventional procedures often involve highly elaborate and complex operations, and knowledge of the structure and location of the blood vessels must be very accurate. Through vessel segmentation, the physician can clearly see the three-dimensional structure of the vessels, which is critical for planning the surgical path, selecting tools and evaluating potential risks. Meanwhile, vessel segmentation plays a key role not only in primary diagnosis, but also in disease monitoring and evaluation during treatment. By comparing vessel images of a patient at different time points, the physician can observe changes in the vessels, evaluate the treatment effect and adjust the treatment regimen.
However, using CT images for vessel labeling and segmentation presents a number of difficulties and challenges. (1) The resolution of CT images may limit the visibility of vascular structures, especially the microvasculature. In lower-resolution images, small blood vessels may not be clearly displayed, which makes accurate labeling and segmentation difficult. (2) CT images are often accompanied by noise, especially in low-dose scans. Noise may mask or obscure vessel edges, making it more difficult to accurately label and segment the vessels. (3) In CT images, the contrast between the blood vessels and surrounding tissue (e.g., muscle, bone or other organs) may not be sufficiently pronounced, which makes it difficult for automatic segmentation algorithms to accurately identify vessel boundaries. (4) CT images typically provide three-dimensional information, and the complex arrangement and variation of the vessels across the three spatial dimensions make the labeling and segmentation task more difficult and time consuming. (5) For some complex or ambiguous situations, the vessels may need to be labeled manually. However, manual labeling is a very time-consuming and labor-intensive task and may be limited by the experience and skill of the operator. (6) Artifacts may occur in CT images for various reasons (e.g., metal implants, patient motion). These artifacts can degrade image quality, further increasing the difficulty of vessel labeling and segmentation.
In recent years, deep learning, in particular Convolutional Neural Networks (CNNs), has shown significant advantages in medical image analysis. By training with a large amount of labeled data, a deep learning model can learn to identify complex vascular structures and can resist noise and artifact interference. However, acquiring annotated data is often time consuming and expensive, and medical images often involve privacy concerns, so only a limited amount of annotated data can be obtained. Semi-supervised learning has therefore gained wide attention and application in medical image analysis, mainly in the following respects. (1) Medical image segmentation: by training the model with unlabeled image data, the accuracy of medical image segmentation tasks may be improved. For example, semi-supervised learning may be used to train a lung segmentation model to help doctors better diagnose lung cancer and other diseases. (2) Medical image classification: unlabeled data can provide more samples for medical image classification tasks, thereby improving model accuracy. For example, semi-supervised learning may be used to train a mammography image classifier to help doctors diagnose diseases such as breast cancer. (3) Medical image reconstruction: through semi-supervised learning, unlabeled data can be used to train a medical image reconstruction model, thereby improving the quality of medical images. For example, semi-supervised learning may be used to train an MRI reconstruction model to improve the resolution and quality of MRI images. In summary, the application of semi-supervised learning in the field of medical imaging may improve the diagnostic accuracy and efficiency of doctors, thereby better serving the health of patients.
Accurate and efficient segmentation of the pulmonary vessel tree is a challenging task. Because the pulmonary vascular tree has a complex structure, varying diameters and numerous bifurcations, labeling the tiny vessels is particularly difficult, so the quantity of labeled data is small and its quality is low. Furthermore, Chronic Obstructive Pulmonary Disease (COPD) affects the pulmonary blood vessels, and the accuracy of COPD typing still needs to be improved.
Disclosure of Invention
The disclosure provides a technical scheme for a lung image processing method and device, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a lung image processing method including:
training a first segmentation model by using a first number of first lung images and corresponding lung vessel label images; performing lung vessel segmentation on a second number of second lung images based on the trained first segmentation model to obtain corresponding first lung vessel segmentation images;
selecting the corresponding first lung vessel segmentation image to obtain a selected first lung vessel segmentation image; training a second segmentation model by using the first number of first lung images and the corresponding lung vessel label images, the selected first lung vessel segmentation images and the corresponding second lung images; and performing pulmonary vessel segmentation on the third number of second lung images remaining after the selection, based on the trained second segmentation model, to obtain corresponding second pulmonary vessel segmentation images.
Preferably, the processing method further comprises: acquiring performance indexes of the second segmentation model;
if the performance index is lower than the set performance index, selecting the corresponding second pulmonary blood vessel segmentation image to obtain a selected second pulmonary blood vessel segmentation image;
training a third segmentation model by using the first number of first lung images and corresponding lung vessel label images, the selected second lung vessel segmentation images and corresponding second lung images;
performing pulmonary vessel segmentation on the second lung images with the fourth number remaining after the selection based on the trained third segmentation model to obtain corresponding second pulmonary vessel segmentation images;
repeating the above process until the performance index of the final segmentation model is higher than or equal to the set performance index; and/or,
the method for selecting the corresponding first pulmonary blood vessel segmentation image to obtain the selected first pulmonary blood vessel segmentation image comprises the following steps:
obtaining a segmentation index corresponding to each first pulmonary vessel segmentation image and a first set segmentation index;
selecting the corresponding first pulmonary blood vessel segmentation image based on the segmentation index corresponding to each first pulmonary blood vessel segmentation image and the first set segmentation index to obtain a selected first pulmonary blood vessel segmentation image; and/or,
the method for selecting the corresponding first pulmonary blood vessel segmentation image based on the segmentation index corresponding to each first pulmonary blood vessel segmentation image and the first set segmentation index to obtain the selected first pulmonary blood vessel segmentation image comprises the following steps:
if the segmentation index corresponding to the first pulmonary blood vessel segmentation image is greater than or equal to the first set segmentation index, determining the first pulmonary blood vessel segmentation image as a selected first pulmonary blood vessel segmentation image; and/or,
the method for selecting the corresponding second pulmonary blood vessel segmentation image to obtain the selected second pulmonary blood vessel segmentation image comprises the following steps:
obtaining a segmentation index corresponding to each second pulmonary vessel segmentation image and a second set segmentation index;
selecting the corresponding second pulmonary blood vessel segmentation image based on the segmentation index corresponding to each second pulmonary blood vessel segmentation image and the second set segmentation index to obtain a selected second pulmonary blood vessel segmentation image; and/or,
the method for selecting the corresponding second pulmonary blood vessel segmentation image based on the segmentation index corresponding to each second pulmonary blood vessel segmentation image and the second set segmentation index to obtain the selected second pulmonary blood vessel segmentation image comprises the following steps:
if the segmentation index corresponding to the second pulmonary blood vessel segmentation image is greater than or equal to the second set segmentation index, determining the second pulmonary blood vessel segmentation image as the selected second pulmonary blood vessel segmentation image.
Preferably, the method for determining the lung vessel label image corresponding to the first number of first lung images comprises the following steps:
respectively carrying out first-time pulmonary vessel segmentation on the first number of first pulmonary images by using a trained preset pulmonary vessel convolution segmentation model to obtain corresponding first pulmonary vessel label images;
performing second lung vessel segmentation on the first number of first lung images by using a machine learning segmentation model to obtain corresponding second lung vessel label images; wherein the caliber of the pulmonary blood vessel in the second pulmonary blood vessel label image is smaller than that in the first pulmonary blood vessel label image;
respectively fusing the first pulmonary blood vessel label image and the second pulmonary blood vessel label image to obtain pulmonary blood vessel label images corresponding to the first number of first pulmonary images; and/or,
the method for respectively fusing the first pulmonary blood vessel label image and the second pulmonary blood vessel label image to obtain the pulmonary blood vessel label images corresponding to the first number of first pulmonary images comprises the following steps:
respectively superposing the first pulmonary blood vessel label image and the second pulmonary blood vessel label image by position to obtain pulmonary blood vessel label images corresponding to the first number of first pulmonary images; and/or,
the method for obtaining the corresponding second pulmonary vessel label image by using the machine learning segmentation model to respectively carry out the second pulmonary vessel segmentation on the first number of first pulmonary images comprises the following steps:
respectively carrying out multi-scale representation on the first lung images of the first number to obtain corresponding multi-scale lung images;
and respectively extracting features from the multi-scale lung images, and classifying the extracted features by using a preset classifier to obtain a corresponding second lung vessel label image.
Preferably, in the process of training the segmentation model, the loss of the segmentation model is calculated, and the network parameters of the segmentation model are adjusted by using the loss; and/or,
the method for calculating the loss of the segmentation model comprises the following steps: obtaining a Dice loss function and a cross entropy loss function;
calculating a first loss value of the Dice loss function and a second loss value of the cross entropy loss function respectively;
configuring the sum of the first loss value and the second loss value as the loss value of the segmentation model; and/or,
the method for calculating the loss of the segmentation model further comprises the following steps: setting a loss adjustment for the Dice loss function and the cross entropy loss function;
determining, based on the loss adjustment, whether the segmentation model calculates the loss using the Dice loss function and the cross entropy loss function; and/or,
the method for setting the loss adjustment for the Dice loss function and the cross entropy loss function comprises the following steps:
calculating, at the segmentation pixel points, a plurality of differences between the pulmonary vessel segmentation image output by the segmentation model for a lung image to be segmented and the corresponding pulmonary vessel label image; calculating the mean value of the plurality of differences;
if the mean value is smaller than a set value, the segmentation model does not calculate the loss; otherwise, the segmentation model calculates the loss using the Dice loss function and the cross entropy loss function.
Preferably, before the training of the first segmentation model using the first number of first lung images and their corresponding lung vessel label images, the processing method further comprises: acquiring the first number of first lung images, the corresponding lung vessel label images and the second number of second lung images; and/or,
Before the training of the first segmentation model using the first number of first lung images and their corresponding lung vessel label images, further comprising: performing lung field segmentation on the first number of first lung images and the second number of second lung images respectively to obtain corresponding first lung field images and second lung field images;
further, training the first segmentation model by using a first number of first lung field images and corresponding lung vessel label images; performing pulmonary vessel segmentation on a second number of second lung field images based on the trained first segmentation model to obtain corresponding first pulmonary vessel segmentation images;
selecting the corresponding first lung vessel segmentation image to obtain a selected first lung vessel segmentation image; training a second segmentation model by using the first number of first lung field images and the corresponding lung vessel label images, the selected first lung vessel segmentation images and the corresponding second lung field images; and performing pulmonary vessel segmentation on the third number of second lung field images remaining after the selection, based on the trained second segmentation model, to obtain corresponding second pulmonary vessel segmentation images.
Preferably, the processing method further comprises:
acquiring an inhalation phase image and an exhalation phase image to be diagnosed, and a plurality of set threshold intervals;
the processing method is utilized to respectively conduct blood vessel segmentation on the inhalation phase image and the exhalation phase image, and corresponding pulmonary blood vessel images are obtained;
determining a corresponding parameter response map based on the inhalation phase image, the exhalation phase image and a plurality of set threshold intervals;
and respectively utilizing the corresponding pulmonary vessel images to perform pulmonary vessel and/or set airway elimination correction on the parameter response map, and performing COPD typing diagnosis of chronic obstructive pulmonary disease.
Preferably, when the expiratory phase image corresponding to the inspiratory phase image to be diagnosed is absent, or the inspiratory phase image corresponding to the expiratory phase image to be diagnosed is absent, synthesizing the inspiratory phase image into a corresponding synthetic expiratory phase image, or synthesizing the expiratory phase image into a corresponding synthetic inspiratory phase image, by using a preset synthesizer;
the processing method is utilized to respectively conduct blood vessel segmentation on the inspiratory phase image and the corresponding synthetic expiratory phase image so as to obtain a corresponding first pulmonary blood vessel image; or, respectively carrying out blood vessel segmentation on the exhalation phase image and the corresponding synthetic inhalation phase image by using the processing method to obtain a corresponding second pulmonary blood vessel image;
determining a first parameter response map based on the inspiratory phase image, the corresponding synthetic expiratory phase image and a plurality of set threshold intervals; or determining a second parameter response map based on the expiratory phase image, the corresponding synthetic inspiratory phase image and a plurality of set threshold intervals;
carrying out pulmonary vessel and/or set airway elimination correction on the first parameter response map by utilizing the corresponding first pulmonary vessel image to obtain a corrected first parameter response map; or, performing pulmonary vessel and/or set airway elimination correction on the second parameter response map by using the corresponding second pulmonary vessel image to obtain a corrected second parameter response map;
based on the corrected first parameter response map, performing COPD typing diagnosis of chronic obstructive pulmonary disease; or, based on the corrected second parameter response map, performing COPD typing diagnosis of chronic obstructive pulmonary disease.
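For orientation only, a parameter-response-map computation of the kind recited above might look like the following Python sketch. The threshold interval values (-950 HU on the inspiratory image and -856 HU on the expiratory image, as commonly used in the parametric response map literature) and the function name are illustrative assumptions; the disclosure itself leaves the set threshold intervals unspecified, and the two phase images are assumed to be registered voxel-to-voxel.

```python
# Illustrative sketch of a parameter response map (PRM) with pulmonary
# vessel elimination correction. Threshold values are assumptions, not
# values taken from the disclosure.
import numpy as np

def parameter_response_map(insp, exp, vessel_mask,
                           insp_thr=-950.0, exp_thr=-856.0):
    prm = np.zeros(insp.shape, dtype=np.uint8)
    prm[(insp >= insp_thr) & (exp >= exp_thr)] = 1  # normal lung
    prm[(insp >= insp_thr) & (exp < exp_thr)] = 2   # functional small airway disease
    prm[(insp < insp_thr) & (exp < exp_thr)] = 3    # emphysema
    prm[vessel_mask.astype(bool)] = 0               # eliminate pulmonary vessels
    return prm                                      # basis for COPD typing
```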
According to an aspect of the present disclosure, there is provided a lung image processing apparatus including:
a first processing unit for training the first segmentation model using a first number of first lung images and their corresponding lung vessel label images; performing lung vessel segmentation on a second number of second lung images based on the trained first segmentation model to obtain corresponding first lung vessel segmentation images;
The second processing unit is used for selecting the corresponding first pulmonary blood vessel segmentation image to obtain a selected first pulmonary blood vessel segmentation image; training a second segmentation model by using the first number of first lung images and the corresponding lung vessel label images, the selected first lung vessel segmentation images and the corresponding second lung images; and performing pulmonary vessel segmentation on the second lung images with the third number remaining after the selection based on the trained second segmentation model to obtain corresponding second pulmonary vessel segmentation images.
According to an aspect of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the above-described lung image processing method.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described lung image processing method.
In the embodiments of the disclosure, a technical scheme for a lung image processing method and device, an electronic device and a storage medium is provided to solve the problem of the small quantity and low quality of labeled data caused by the complex structure, varying diameters and numerous bifurcations of the pulmonary vessel tree, in which the tiny vessels are especially difficult to label, and further to provide a basis for accurate determination of COPD phenotypes.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
FIG. 1 shows a flowchart of a lung image processing method according to an embodiment of the present disclosure;
FIG. 2 illustrates a specific flow and network architecture corresponding to a lung image processing method according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of fusing a first pulmonary vessel label image and a second pulmonary vessel label image in accordance with an embodiment of the disclosure;
FIG. 4 illustrates a flow chart of a method of synthesizing the inspiratory phase images into corresponding synthesized expiratory phase images or synthesizing the expiratory phase images into corresponding synthesized inspiratory phase images using a preset synthesizer according to an embodiment of the present disclosure;
FIG. 5 illustrates a network configuration diagram corresponding to a preset synthesizer according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of an electronic device 800, shown in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating an electronic device 1900 according to an example embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle logic; for brevity, the details are not repeated in the present disclosure.
In addition, the disclosure further provides a lung image processing device, an electronic device, a computer-readable storage medium and a program, all of which can be used to implement any of the lung image processing methods provided in the disclosure; for the corresponding technical schemes and descriptions, refer to the corresponding description of the method, which is not repeated.
FIG. 1 shows a flowchart of a lung image processing method according to an embodiment of the present disclosure; FIG. 2 illustrates a specific flow and a network architecture corresponding to a lung image processing method according to an embodiment of the present disclosure. As shown in FIGS. 1-2, the lung image processing method includes: Step S101: training a first segmentation model by using a first number of first lung images and corresponding lung vessel label images; performing lung vessel segmentation on a second number of second lung images based on the trained first segmentation model to obtain corresponding first lung vessel segmentation images; Step S102: selecting the corresponding first lung vessel segmentation image to obtain a selected first lung vessel segmentation image; training a second segmentation model by using the first number of first lung images and the corresponding lung vessel label images, the selected first lung vessel segmentation images and the corresponding second lung images; and performing pulmonary vessel segmentation on the third number of second lung images remaining after the selection, based on the trained second segmentation model, to obtain corresponding second pulmonary vessel segmentation images. The method solves the problem of the small quantity and low quality of labeled data caused by the complex structure, varying diameters and numerous bifurcations of the pulmonary vascular tree, in which the tiny vessels are especially difficult to label, and further provides a basis for accurate determination of the COPD phenotype.
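For illustration, the overall iteration of steps S101-S102 can be summarized in the following Python sketch. This is a minimal sketch, not the disclosed implementation: the training and selection procedures are passed in as callables, and the model interface (predict, performance) is an assumption.

```python
# Sketch of the semi-supervised teacher-student iteration (steps S101-S102).
def self_training(labeled, labels, unlabeled, train_fn, select_fn, target):
    """train_fn(images, masks) -> model exposing .predict(img) and
    .performance(); select_fn(images, pseudo) -> indices of reliable
    pseudo labels. Both are assumed interfaces, not part of the patent."""
    # Step S101: fully supervised training of the first segmentation model
    # (teacher) on the first number of labeled first lung images.
    model = train_fn(labeled, labels)
    pool = list(unlabeled)              # second number of second lung images
    kept_imgs, kept_masks = [], []
    while pool:
        pseudo = [model.predict(img) for img in pool]   # pseudo labels
        idx = set(select_fn(pool, pseudo))              # keep reliable ones only
        kept_imgs += [pool[i] for i in idx]
        kept_masks += [pseudo[i] for i in idx]
        pool = [im for i, im in enumerate(pool) if i not in idx]
        # Step S102: the student trained on labels plus selected pseudo
        # labels becomes the new teacher for the remaining images.
        model = train_fn(labeled + kept_imgs, labels + kept_masks)
        # Repeat until the performance index reaches the set value.
        if model.performance() >= target:
            break
    return model
```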
In the disclosed and other possible embodiments, the first and second lung images may be configured as CT images, DR images, MRI images, ultrasound images, PET images, CT-PET images, or other medical images. Further, such images are chest (lung) images acquired in an inhalation state and/or an exhalation state. In addition, the inhalation-state and/or exhalation-state images may be chest (lung) images acquired at deep inhalation and/or deep exhalation.
In a disclosed embodiment, before the training of the first segmentation model using the first number of first lung images and their corresponding pulmonary vessel label images, further comprising: a first number of first lung images, a corresponding lung vessel label image, and a second number of second lung images are acquired.
In a disclosed embodiment, before the training of the first segmentation model using the first number of first lung images and their corresponding pulmonary vessel label images, further comprising: performing lung field segmentation on the first number of first lung images and the second number of second lung images respectively to obtain corresponding first lung field images and second lung field images; further, training the first segmentation model by using a first number of first lung field images and corresponding lung vessel label images; performing pulmonary vessel segmentation on a second number of second lung field images based on the trained first segmentation model to obtain corresponding first pulmonary vessel segmentation images; selecting the corresponding first lung vessel segmentation image to obtain a selected first lung vessel segmentation image; training a second segmentation model by using the first number of first lung field images and the corresponding lung vessel label images, the selected first lung vessel segmentation images and the corresponding second lung field images; and performing pulmonary vessel segmentation on the second lung field images with the third number remaining after the selection based on the trained second segmentation model to obtain corresponding second pulmonary vessel segmentation images.
In the disclosed embodiments and other possible embodiments, the method for performing lung field segmentation on the first number of first lung images and the second number of second lung images to obtain corresponding first lung field images and second lung field images includes: acquiring a preset lung field segmentation model, and respectively performing lung field segmentation on the first number of first lung images and the second number of second lung images by using the preset lung field segmentation model to obtain the corresponding first lung field images and second lung field images. When lung field segmentation is performed in this way, the first lung images and the second lung images in all subsequent processing steps are configured as the corresponding first lung field images and second lung field images, respectively.
In the disclosed and other possible embodiments, the preset lung field segmentation model may be configured as a preset lung field segmentation model based on a U-Net convolutional neural network, or UNETR convolutional neural network, or Swin UNETR convolutional neural network, or nnU-Net convolutional neural network, or a modification thereof.
Step S101: training a first segmentation model by using a first number of first lung images and corresponding lung vessel label images; and performing pulmonary vessel segmentation on the second number of second lung images based on the trained first segmentation model to obtain corresponding first pulmonary vessel segmentation images.
For example, the first number is configured as 12 cases and the second number is configured as 168 cases; that is, there are 12 first lung images with pulmonary vessel label images and 168 second lung images without pulmonary vessel label images.
In the disclosed and other possible embodiments, the 12 first lung images are from the data set provided by the VESSEL12 challenge (https://vessel12.grand-challenge.org/). This challenge provides downloadable chest CT scan data. Each downloaded file contains a CT scan stored in Meta (MHD/RAW) format; this format stores the image as an ASCII-readable header file with extension .mhd and a separate binary file containing the image data with extension .raw. Approximately half of the 12 first lung images contain abnormalities such as emphysema, nodules and pulmonary embolism, and the maximum slice spacing is 1 mm. Three scans with labels are downloaded; for each of these scans, the download contains a mask and a vessel annotation CSV file. Each annotation CSV file contains a list of marked points for a single scan, with each point in the format "x, y, z, label", where (x, y, z) indicates the position of the label in the first lung image. Each point was labeled independently by three annotators, and only points on which the three annotators agree on the label are included. A voxel with label 1 indicates a blood vessel, and label 0 classifies the voxel as non-vessel (these constitute the pulmonary vessel label images corresponding to the 12 first lung images).
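A minimal sketch of reading such a scan and its point annotations is shown below, assuming SimpleITK for the Meta (MHD/RAW) volume and the "x, y, z, label" CSV layout described above; the file names are hypothetical examples, not files named in the disclosure.

```python
# Sketch: load a VESSEL12-style scan and its point annotations.
import csv
import SimpleITK as sitk

image = sitk.ReadImage("VESSEL12_21.mhd")   # .mhd header + separate .raw data
volume = sitk.GetArrayFromImage(image)      # numpy array in z, y, x order

points = []
with open("VESSEL12_21_Annotations.csv") as f:
    for x, y, z, label in csv.reader(f):
        # label 1 marks a vessel voxel, label 0 a non-vessel voxel
        points.append(((int(x), int(y), int(z)), int(label)))
```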
In the disclosed and other possible embodiments, the 168 second lung images are from the First Affiliated Hospital of Guangzhou Medical University and are configured as non-enhanced CT images; all CT scans have a slice thickness of 1.0 mm, a slice size of 512 x 512, and are stored in DICOM format.
In the disclosed embodiment, as well as other possible embodiments, as shown in FIG. 2, the first number (12 cases) of first lung images and the corresponding pulmonary vessel label images are used to perform fully supervised training on a first segmentation model (Teacher Model), obtaining an initial teacher model (the first segmentation model after training). Then, pulmonary vessel segmentation is performed on the second number (168 cases) of second lung images based on the trained first segmentation model to obtain corresponding first pulmonary vessel segmentation images (168 cases of first pseudo labels).
Embodiments of the present disclosure use 12 cases of lung CT scan data and the corresponding labels for fully supervised training (training the first segmentation model with the first number of first lung images and their corresponding pulmonary vessel label images) and 10 cases of data for testing. First, the lung region (lung field) is automatically segmented from each CT image (first lung image) to obtain the corresponding first lung field image, and the vessel gold standard (pulmonary vessel label image) within the lung region is obtained. To alleviate the class imbalance problem, only the data within the minimum bounding box of the lung field in the first lung field image is retained.
In an embodiment of the present disclosure, the first number of first lung images or first lung field images are respectively cropped to a set size. For example, the 12 training cases were cropped to set sizes of (234, 269, 336), (301, 266, 343), (320, 235, 357), (238, 277, 323), (315, 270, 383), (270, 239, 345), (250, 259, 368), (282, 264, 403), (323, 286, 426), (296, 295, 418), (286, 254, 372) and (314, 258, 399), respectively, and the cropped first lung images or first lung field images were then resampled to the median voxel spacing (1 x 0.74 x 0.74 mm³) of all first lung images or first lung field images. Cuboid patches of the set size 128 x 112 x 160 are then taken from the first lung image or first lung field image and used for training the network. The number of training rounds may initially be set to 1000, with the network processing 2 input samples per training iteration. The initial learning rate may be set to 0.01, the optimizer may be an SGD optimizer, the momentum may be set to 0.99, and the weight decay may be set to 3e-5. After the fully supervised training is finished, a teacher model is obtained and used for the subsequent semi-supervised iterative training.
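These preprocessing and optimizer settings can be collected into a configuration sketch such as the following; the dictionary keys are illustrative and do not correspond to an actual library API.

```python
# Illustrative summary of the training configuration described above.
train_config = {
    "patch_size": (128, 112, 160),        # set cuboid fed to the network
    "target_spacing": (1.0, 0.74, 0.74),  # median voxel spacing in mm
    "epochs": 1000,                       # initial number of training rounds
    "batch_size": 2,                      # input samples per iteration
    "optimizer": "SGD",
    "initial_lr": 0.01,
    "momentum": 0.99,
    "weight_decay": 3e-5,
}
```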
Step S102: in FIG. 2Selecting the corresponding first lung vessel segmentation image to obtain a selected first lung vessel segmentation image; training a second segmentation model by using the first number of first lung images and the corresponding lung vessel label images, the selected first lung vessel segmentation images and the corresponding second lung images; and performing pulmonary vessel segmentation on the second lung images with the third number remaining after the selection based on the trained second segmentation model to obtain corresponding second pulmonary vessel segmentation images. Repeating the above process until the performance index of the final segmentation model is higher than or equal to the set performance index.
In the disclosed embodiments and other possible embodiments, the corresponding first pulmonary vessel segmentation images are selected to obtain the selected first pulmonary vessel segmentation images (e.g., 40 first pseudo labels); a second segmentation model (Student Model) is trained using the first number of first lung images and their corresponding pulmonary vessel label images (12 cases) together with the selected first pulmonary vessel segmentation images and their corresponding second lung images (40 cases); and pulmonary vessel segmentation is performed on the third number of second lung images remaining after the selection based on the trained second segmentation model ("Student becomes the new teacher": the student model is converted into a new teacher model), thereby obtaining corresponding second pulmonary vessel segmentation images.
In the disclosed and other possible embodiments, the first and/or second segmentation models may be configured as preset vessel segmentation models based on a U-Net convolutional neural network, or UNETR convolutional neural network, or Swin UNETR convolutional neural network, or nnU-Net convolutional neural network, or modifications thereof.
The first segmentation model and/or the second segmentation model (teacher model and/or student model) may also be configured as the preset vessel segmentation model shown in FIG. 2. FIG. 2 illustrates a specific flow and a network architecture corresponding to a lung image processing method according to an embodiment of the disclosure, where the preset vessel segmentation model (network architecture) is configured as an nnFormer model. The nnFormer model is mainly divided into 3 blocks: the encoder module (encoding module), the bottleneck module and the decoder module (decoding module), preserving the U-Net structure. nnFormer is a hybrid model combining convolution and the self-attention mechanism; it fully exploits the advantages of both and provides a computationally efficient way to capture the dependency relationships among slices. In the nnFormer encoder, a lightweight convolutional embedding layer is added for encoding pixel-level spatial information into low-level but high-resolution 3D features. Then, after block embedding, transformer blocks and convolutional down-sampling blocks are used alternately to fully mix long-term dependencies with high-level, hierarchical object concepts, thereby improving the generalization ability and robustness of the learned representations. Furthermore, nnFormer introduces V-MSA to learn representations over 3D local volumes, which are then aggregated to produce predictions over the whole data.
In the disclosed and other possible embodiments, the encoder module (encoding module) includes: an embedding layer (Embedding Layer), 2 local self-attention layers (Local Self-attention Layer) and down-sampling (Down-sampling), connected in sequence or with skip connections. The bottleneck module includes: 2 global self-attention layers (Global Self-attention Layer), down-sampling (Down-sampling), 2 global self-attention layers, up-sampling (Up-sampling), and 2 global self-attention layers. The decoder module (decoding module) includes: up-sampling (Up-sampling), 2 local self-attention layers (Local Self-attention Layer), and an expanding layer (Expanding Layer).
Specifically, the head and tail of the preset vessel segmentation model (nnFormer model) are respectively configured as an embedding layer (Embedding Layer) and an expanding layer (Expanding Layer), and the model further includes: 2 local self-attention layers (Local Self-attention Layer), down-sampling (Down-sampling), 2 global self-attention layers (Global Self-attention Layer), up-sampling (Up-sampling), and 2 global self-attention layers (Global Self-attention Layer).
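A skeletal PyTorch sketch of this encoder/bottleneck/decoder layout is given below. The attention blocks are simplified stand-ins for nnFormer's windowed V-MSA and global attention layers, and the channel sizes are assumptions; the published nnFormer implementation differs in detail.

```python
# Skeletal sketch of the nnFormer-style layout described above.
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    """Simplified stand-in for an nnFormer self-attention layer."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, D, H, W)
        tokens = x.flatten(2).transpose(1, 2)   # -> (B, N, C)
        q = self.norm(tokens)
        out, _ = self.attn(q, q, q)
        return (tokens + out).transpose(1, 2).reshape(x.shape)

LocalSelfAttention = SelfAttentionBlock    # windowed V-MSA in the real model
GlobalSelfAttention = SelfAttentionBlock   # global attention in the real model

class NnFormerSketch(nn.Module):
    """Encoder / bottleneck / decoder layout preserving the U-Net structure."""
    def __init__(self, in_ch=1, num_classes=2, dim=96):
        super().__init__()
        # Encoder: convolutional embedding + 2 local self-attention layers.
        self.embedding = nn.Conv3d(in_ch, dim, kernel_size=4, stride=4)
        self.enc = nn.Sequential(LocalSelfAttention(dim), LocalSelfAttention(dim))
        self.down = nn.Conv3d(dim, dim * 2, kernel_size=2, stride=2)
        # Bottleneck: global self-attention layers.
        self.bottleneck = nn.Sequential(GlobalSelfAttention(dim * 2),
                                        GlobalSelfAttention(dim * 2))
        # Decoder: up-sampling, 2 local self-attention layers, expanding layer.
        self.up = nn.ConvTranspose3d(dim * 2, dim, kernel_size=2, stride=2)
        self.dec = nn.Sequential(LocalSelfAttention(dim), LocalSelfAttention(dim))
        self.expanding = nn.ConvTranspose3d(dim, num_classes, kernel_size=4, stride=4)

    def forward(self, x):
        e = self.enc(self.embedding(x))
        b = self.bottleneck(self.down(e))
        d = self.dec(self.up(b) + e)      # skip connection (U-Net structure)
        return self.expanding(d)
```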
In a disclosed embodiment, the processing method further includes: acquiring performance indexes of the second segmentation model; if the performance index is lower than the set performance index, selecting the corresponding second pulmonary blood vessel segmentation image to obtain a selected second pulmonary blood vessel segmentation image; training a third segmentation model by using the first number of first lung images and corresponding lung vessel label images, the selected second lung vessel segmentation images and corresponding second lung images; performing pulmonary vessel segmentation on the second lung images with the fourth number remaining after the selection based on the trained third segmentation model to obtain corresponding second pulmonary vessel segmentation images; repeating the above process until the performance index of the final segmentation model is higher than or equal to the set performance index.
In the disclosed embodiments and other possible embodiments, the performance index of the second segmentation model may be configured as one or several of the corresponding Dice value, Iou value, Sensitivity and Precision. Likewise, the set performance index may be configured as one or more of the corresponding set Dice value, set Iou value, set Sensitivity and set Precision. Those skilled in the art can configure the values of these set performance indexes as required.
In the disclosed embodiments and other possible embodiments, if the performance index is lower than the set performance index, the corresponding second pulmonary vessel segmentation images are selected to obtain the selected second pulmonary vessel segmentation images (40 second pseudo labels). Further, a second segmentation model (Student Model) is trained using the first number of first lung images and their corresponding pulmonary vessel label images (12 cases) together with the selected first pulmonary vessel segmentation images and their corresponding second lung images (40 cases); and pulmonary vessel segmentation is performed on the third number of second lung images remaining after the selection (168 - 40 = 128 cases) based on the trained second segmentation model (the student model is converted into a new teacher model), so as to obtain corresponding second pulmonary vessel segmentation images.
For example, the corresponding 128 cases of second pulmonary vessel segmentation images are selected to obtain 40 cases of selected second pulmonary vessel segmentation images; a third segmentation model is trained, or the first segmentation model is retrained, with the first number (12) of first lung images and their corresponding pulmonary vessel label images, the selected 40 second pulmonary vessel segmentation images and their corresponding second lung images; pulmonary vessel segmentation is performed on the fourth number of second lung images remaining after the selection (128 - 40 = 88 cases) based on the trained third segmentation model or the retrained first segmentation model, to obtain corresponding second pulmonary vessel segmentation images; and the above process is repeated until the performance index of the final segmentation model is higher than or equal to the set performance index.
In the disclosed embodiment and other possible embodiments, the third segmentation model and the segmentation model that repeats the above-described process may be configured as a preset vessel segmentation model based on a U-Net convolutional neural network, or UNETR convolutional neural network, or Swin UNETR convolutional neural network, or nnU-Net convolutional neural network, or a modification thereof, or as shown in fig. 2.
In a disclosed embodiment, the method for selecting the corresponding first pulmonary vessel segmentation image to obtain a selected first pulmonary vessel segmentation image includes: obtaining a segmentation index corresponding to each first pulmonary vessel segmentation image and a first set segmentation index; and selecting the corresponding first pulmonary blood vessel segmentation image based on the segmentation index corresponding to each first pulmonary blood vessel segmentation image and the first set segmentation index to obtain a selected first pulmonary blood vessel segmentation image.
In the disclosed embodiments and other possible embodiments, the segmentation index may be configured as one or several performance indexes such as the corresponding Dice value, Iou value, Sensitivity and Precision. Likewise, the first set segmentation index, the second set segmentation index, or the set segmentation index of a segmentation model in a repeated iteration may be configured as one or more set performance indexes such as the corresponding set Dice value, set Iou value, set Sensitivity and set Precision. Those skilled in the art can configure the values of these set performance indexes as required.
In a disclosed embodiment, the method for selecting the corresponding first pulmonary blood vessel segmented image based on the segmentation index corresponding to each first pulmonary blood vessel segmented image and the first set segmentation index to obtain a selected first pulmonary blood vessel segmented image includes: and if the segmentation index corresponding to the first pulmonary blood vessel segmentation image is greater than or equal to the first set segmentation index, determining the first pulmonary blood vessel segmentation image as a selected first pulmonary blood vessel segmentation image.
For example, the segmentation index is configured as Precision and the first set segmentation index is configured as a set precision value; a selected first pulmonary vessel segmentation image is determined from the first pulmonary vessel segmentation images whose precision or average precision is greater than the set precision value or average set precision value. Those skilled in the art can configure the set precision value or average set precision value according to actual needs, for example, 0.9.
More specifically, the first iteration uses Precision as the selection criterion: pseudo labels with an average precision value > 0.9 (the set precision value or average set precision value) are selected, and the top 40 ranked pseudo labels are used as reliable pseudo labels (the selected first pulmonary vessel segmentation images), so that the ratio of labels to pseudo labels is close to 1 to 4. Finally, retraining with the 12 labeled images (the first number of first lung images and corresponding pulmonary vessel label images) and the 40 unlabeled images with their pseudo labels (the selected first pulmonary vessel segmentation images and corresponding second lung images) yields a student model, completing the first iterative training.
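This first-iteration selection rule might be sketched as follows, assuming each pseudo label has already been assigned an average precision score (for instance via the checkpoint-consistency criterion described later):

```python
# Sketch: keep the top 40 pseudo labels with average precision > 0.9.
def select_first_iteration(cases, avg_precisions, threshold=0.9, top_k=40):
    eligible = [(p, c) for p, c in zip(avg_precisions, cases) if p > threshold]
    eligible.sort(key=lambda pc: pc[0], reverse=True)   # rank by precision
    return [c for _, c in eligible[:top_k]]             # reliable pseudo labels
```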
Also, in a disclosed embodiment, the method of selecting the corresponding second pulmonary vessel segmentation image to obtain a selected second pulmonary vessel segmentation image includes: obtaining a segmentation index corresponding to each second pulmonary vessel segmentation image and a second set segmentation index; and selecting the corresponding second pulmonary blood vessel segmentation image based on the segmentation index corresponding to each second pulmonary blood vessel segmentation image and the second set segmentation index to obtain a selected second pulmonary blood vessel segmentation image.
Also, in the disclosed embodiment, the method for selecting the corresponding second pulmonary blood vessel segmented image based on the segmentation index corresponding to each second pulmonary blood vessel segmented image and the second set segmentation index to obtain a selected second pulmonary blood vessel segmented image includes: and if the segmentation index corresponding to the second pulmonary blood vessel segmentation image is greater than or equal to the second set segmentation index, determining the second pulmonary blood vessel segmentation image as the selected second pulmonary blood vessel segmentation image.
In the disclosed embodiments and other possible embodiments, the segmentation index corresponding to the second pulmonary vessel segmentation image may be configured as a precision or average precision together with a Dice value or average Dice value. In this case, the corresponding second set segmentation index is configured as a set precision value or average set precision value together with a set Dice value or average set Dice value. Those skilled in the art can configure the set precision value or average set precision value according to actual needs, for example, 0.95; likewise, the set Dice value or average set Dice value may be configured as needed, for example, 0.85.
For example, using the student model (second segmentation model) trained in the first iteration as the teacher model (first segmentation model), the remaining 128 cases (the remaining third number) of second lung images are predicted, obtaining 128 pseudo labels (second pulmonary vessel segmentation images). As in the first iteration, reliable pseudo labels (selected second pulmonary vessel segmentation images) are selected, this time requiring a precision or average precision > 0.95 (the set precision value or average set precision value) and a Dice value or average Dice value > 0.85 (the set Dice value or average set Dice value); 40 cases of reliable pseudo labels are selected, the first segmentation model is retrained or the third segmentation model is trained, the second iteration is completed, and the optimal segmentation effect (preset segmentation index) is obtained.
The above embodiment performs only two iterations; in the disclosed embodiment and other possible embodiments, those skilled in the art may perform multiple iterations according to the above method to obtain a better segmentation effect (preset segmentation index).
In the disclosed and other possible embodiments, a strategy for choosing reliable pseudo labels (second pseudo labels or selected first pulmonary vessel segmentation images) is added in step S102. In step S101, after the fully supervised training is completed, a teacher model is obtained, and the 168 cases of data are predicted using the teacher model, yielding 168 pseudo labels (first pulmonary vessel segmentation images before selection). Reliable pseudo labels are then selected as follows: during the fully supervised training, a checkpoint is saved every 100 rounds (the network parameters or network weights of the model are saved), and the checkpoint with the highest validation-set Dice value is kept as the best model. After training is completed, for each unlabeled CT scan image (the second number of second lung images), the previously stored checkpoints and the best model are each used to predict a segmentation, and an evaluation index (performance index) between the checkpoint predictions and the best-model prediction (the pseudo label before selection) is calculated as the selection criterion. The larger the average evaluation index, the higher the overlap between the predictions, i.e., the more stable the pseudo label was during training and the more reliable its quality. Because this study addresses a fine segmentation task intended for quantitative index analysis of COPD, it must be ensured that as many predicted results as possible are true vessels; that is, the fewer the false positives, the better.
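A sketch of this checkpoint-consistency criterion is given below; the model objects and their predict method are assumed interfaces, and Dice overlap is used as the evaluation index for concreteness (the text allows other indexes as well).

```python
# Sketch: score pseudo-label reliability by how well the predictions of
# periodically saved checkpoints agree with the best model's prediction.
import numpy as np

def dice(a, b, eps=1e-6):
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def reliability(image, checkpoints, best_model):
    best_pred = best_model.predict(image)     # the pseudo-label candidate
    scores = [dice(ckpt.predict(image), best_pred)
              for ckpt in checkpoints]        # checkpoints saved every 100 rounds
    # A higher mean overlap means the prediction stayed stable during
    # training, i.e. the pseudo label is more reliable.
    return float(np.mean(scores))
```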
In a disclosed embodiment, a method of determining a pulmonary vessel label image corresponding to the first number of first lung images includes: respectively carrying out first-time pulmonary vessel segmentation on the first number of first pulmonary images by using a trained preset pulmonary vessel convolution segmentation model to obtain corresponding first pulmonary vessel label images; performing second lung vessel segmentation on the first number of first lung images by using a machine learning segmentation model to obtain corresponding second lung vessel label images; wherein the caliber of the pulmonary blood vessel in the second pulmonary blood vessel label image is smaller than that in the first pulmonary blood vessel label image; and respectively fusing the first pulmonary blood vessel label image and the second pulmonary blood vessel label image to obtain pulmonary blood vessel label images corresponding to the first number of first pulmonary images.
In a disclosed embodiment, the method for performing the second pulmonary vessel segmentation on the first number of first lung images by using a machine learning segmentation model to obtain corresponding second pulmonary vessel label images includes: respectively performing multi-scale representation on the first number of first lung images to obtain corresponding multi-scale lung images; and respectively extracting features from the multi-scale lung images, and classifying the extracted features by using a preset classifier to obtain the corresponding second pulmonary vessel label images.
In the disclosed embodiments and other possible embodiments, the trained preset pulmonary vessel convolution segmentation model may be configured as the previously proposed pulmonary vessel convolution segmentation model CE-NC-VesselSegNet, which is used to segment the coarse (large) vessels, obtaining the first pulmonary vessel label image. Meanwhile, the first number of first lung images are first represented at multiple scales using a Gaussian pyramid with 6 scales; features are then extracted from the multi-scale lung images using a feature extraction model (for example, k-means), and a set number (for example, 34) of the extracted features is retained. Next, a feature vector is calculated for each pixel based on the retained set number of features, and the feature vector of each pixel is input into a classifier (for example, logistic regression) to estimate the vessel probability; from the probabilities of all pixel points, the complete small vessels are segmented, obtaining the corresponding second pulmonary vessel label image.
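A highly simplified sketch of this small-vessel pipeline follows, using scikit-image for the Gaussian pyramid and scikit-learn for the logistic regression classifier. It reduces the 34-feature scheme to one intensity feature per scale, since the exact feature model is not detailed in the disclosure.

```python
# Sketch: small-vessel segmentation via multi-scale per-pixel features
# and a logistic regression classifier (a simplification of the text).
import numpy as np
from skimage.transform import pyramid_gaussian, resize
from sklearn.linear_model import LogisticRegression

def multiscale_features(volume, n_scales=6):
    # Gaussian-pyramid representation; each level is resized back to the
    # original shape so every pixel receives one feature per scale.
    levels = pyramid_gaussian(volume, max_layer=n_scales - 1, downscale=2)
    feats = [resize(lv, volume.shape, preserve_range=True) for lv in levels]
    return np.stack(feats, axis=-1).reshape(-1, n_scales)

def train_small_vessel_classifier(volumes, vessel_masks):
    X = np.concatenate([multiscale_features(v) for v in volumes])
    y = np.concatenate([m.reshape(-1) for m in vessel_masks])
    return LogisticRegression(max_iter=1000).fit(X, y)

def predict_small_vessels(clf, volume, prob_threshold=0.5):
    p = clf.predict_proba(multiscale_features(volume))[:, 1]
    return p.reshape(volume.shape) > prob_threshold  # second vessel label image
```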
Fig. 3 shows a schematic diagram of fusing a first pulmonary vessel label image and a second pulmonary vessel label image according to an embodiment of the disclosure. As shown in fig. 3, in the disclosed embodiment, the method for respectively fusing the first pulmonary vessel label image and the second pulmonary vessel label image to obtain the pulmonary vessel label images corresponding to the first number of first lung images includes: respectively superimposing the first pulmonary vessel label image and the second pulmonary vessel label image by position to obtain the pulmonary vessel label images corresponding to the first number of first lung images. The first pulmonary vessel label image and the second pulmonary vessel label image being fused come from the same first lung image.
In the disclosed embodiments and other possible embodiments, the whole thin-vessel result obtained by segmenting each first lung image (the second pulmonary vessel label image) is fused with the coarse vessels segmented from the same first lung image by the model CE-NC-VesselSegNet; after fusion, it can be ensured that the coarse-vessel part does not contain non-vascular tissue such as airway walls, yielding the pulmonary vessel label images corresponding to the first number of first lung images. In addition, those skilled in the art may configure the pulmonary vessel convolution segmentation model as another existing pulmonary vessel segmentation model as desired.
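Since both label images are binary masks over the same first lung image, positional superposition can be read as a voxel-wise union; the sketch below makes that assumption explicit.

```python
import numpy as np

def fuse_vessel_labels(coarse_mask: np.ndarray, fine_mask: np.ndarray) -> np.ndarray:
    """Fuse the coarse-vessel label (first pulmonary vessel label image)
    with the fine-vessel label (second pulmonary vessel label image) by
    voxel-wise union; both masks must come from the same first lung
    image and share its shape."""
    assert coarse_mask.shape == fine_mask.shape
    return np.logical_or(coarse_mask, fine_mask).astype(np.uint8)
```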
In the disclosed embodiments, the loss of the segmentation model is calculated during the training of the segmentation model, and the network parameters of the segmentation model are adjusted by using the loss.
In a disclosed embodiment, the method of calculating the loss of the segmentation model includes: obtaining a Dice loss function and a cross-entropy loss function; calculating a first loss value with the Dice loss function and a second loss value with the cross-entropy loss function; and configuring the sum of the first loss value and the second loss value as the loss value of the segmentation model. And/or, the method for calculating the loss of the segmentation model further comprises: setting a loss adjustment for the Dice loss function and the cross-entropy loss function; and determining, based on the loss adjustment, whether the segmentation model calculates a loss with the Dice loss function and the cross-entropy loss function.
In a disclosed embodiment, the method for setting the loss adjustment of the Dice loss function and the cross-entropy loss function includes: calculating, over the segmentation pixel points, the differences between the pulmonary vessel segmentation image output by the segmentation model for a lung image to be segmented and the corresponding pulmonary vessel label image; calculating the mean of these differences; if the mean is smaller than a set value, the segmentation model does not calculate a loss; otherwise, the segmentation model calculates the loss with the Dice loss function and the cross-entropy loss function.
In the disclosed embodiment, as well as other possible embodiments, the loss function is defined as follows:
Loss = I(y_n, y^_n)·Loss_Dice + I(y_n, y^_n)·Loss_CrossEntropy
The present disclosure adds a calculation condition (the loss adjustment) before the Dice loss function (DiceLoss) and the cross-entropy loss function (CELoss). Here y_n denotes the predicted (segmentation) value of a segmentation pixel point in the pulmonary vessel segmentation image output by the segmentation model, and y^_n denotes the gold standard of that pixel point in the corresponding pulmonary vessel label image. If the difference I = |y_n - y^_n| is smaller than a set value T, then I = 0; otherwise I = 1. The set value T of the present disclosure may be configured as 0.1. By adding the loss adjustment, the network (segmentation model) pays more attention to the difficult points, namely the end points and bifurcation points of the pulmonary vessels, thereby improving segmentation accuracy.
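A minimal PyTorch sketch of this gated loss follows; the soft-Dice formulation and the way the indicator mask is applied to the cross-entropy term are illustrative assumptions consistent with the equation above (T = 0.1).

```python
import torch
import torch.nn.functional as F

def gated_dice_ce_loss(pred: torch.Tensor, target: torch.Tensor,
                       t: float = 0.1, eps: float = 1e-6) -> torch.Tensor:
    """pred: vessel probabilities in [0, 1]; target: binary gold standard
    (float). Pixels already matching the gold standard within T
    (indicator I = 0) contribute no loss; the remaining 'difficult'
    pixels drive both the Dice and cross-entropy terms."""
    gate = (torch.abs(pred - target) >= t).float()   # I(y_n, y^_n)
    p, g = pred * gate, target * gate
    dice = 1 - (2 * (p * g).sum() + eps) / (p.sum() + g.sum() + eps)
    ce = F.binary_cross_entropy(pred, target, reduction="none")
    ce = (ce * gate).sum() / gate.sum().clamp(min=1.0)
    return dice + ce
```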
In the disclosed embodiment and other possible embodiments, the 2-iteration process is divided into three phases in total: 1. Supervised pre-training: fully train on the first number of first lung images and their corresponding pulmonary vessel label images to obtain an initial teacher model (the trained first segmentation model). 2. Pseudo-label generation: segment the pulmonary vessels in all of the second number of unlabeled second lung images using the initial teacher model, obtaining the corresponding first pulmonary vessel segmentation images (first pseudo labels). 3. Retraining: mix the labeled images (the first number of first lung images with their pulmonary vessel label images) with the unlabeled images and their selected pseudo labels (the selected first pulmonary vessel segmentation images and their corresponding second lung images), retrain a student model (the second segmentation model) on this mixture, and iterate the above steps until the best result is obtained, as outlined in the sketch below.
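The three phases amount to a standard self-training loop; in the sketch below, the `train`, `predict_masks`, and `select_reliable` hooks are hypothetical stand-ins for the fully supervised training, the teacher inference, and the reliability selection described earlier.

```python
from typing import Any, Callable, List, Tuple

def self_training(labeled: List[Tuple[Any, Any]], unlabeled: List[Any],
                  train: Callable, predict_masks: Callable,
                  select_reliable: Callable, n_iterations: int = 2):
    """labeled: (image, vessel_label) pairs; unlabeled: images only.
    Returns the final model after n_iterations teacher-student rounds."""
    teacher = train(labeled)                           # phase 1: supervised pre-training
    for _ in range(n_iterations):
        pseudo = predict_masks(teacher, unlabeled)     # phase 2: pseudo-label generation
        reliable = select_reliable(unlabeled, pseudo)  # keep only stable pseudo labels
        teacher = train(labeled + reliable)            # phase 3: retrain the student
    return teacher
```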
In order to find a fully supervised training network more suitable for vessel segmentation, the network employed herein is compared with classical and recent segmentation networks, including UNETR, Swin UNETR, and nnU-Net.
UNETR is a method that uses ViT as its encoder, without relying on a CNN-based feature extractor. The architecture uses a pure Transformer as the encoder to learn sequential representations of the input volumes and effectively capture global multi-scale information. While following the successful "U-shaped" encoder-decoder design, the Transformer encoder is connected directly to the decoder through skip connections at different resolutions to compute the final semantic segmentation output. UNETR exhibits good accuracy and efficiency across different medical image segmentation tasks.
Swin Transformer was proposed as a hierarchical vision Transformer that computes self-attention within an efficient shifted-window partitioning scheme. Swin Transformer is therefore suitable for a variety of downstream tasks in which the extracted multi-scale features can be used for further processing. Subsequently, Swin UNETR was proposed, which uses a U-shaped network with a Swin Transformer as the encoder and connects it to a CNN-based decoder at different resolutions via skip connections. This network verified the effectiveness of the method on the multi-modal 3D brain tumor segmentation task of the Multi-modal Brain Tumor Segmentation Challenge (BraTS) 2021.
Table 1 compares the performance of UNETR, Swin UNETR, nnU-Net, and nnFormer. All comparison models were evaluated with the same training, validation, and test sets as the fully supervised nnFormer. The nnFormer performs significantly better than UNETR, Swin UNETR, and nnU-Net.
Table 1. Segmentation performance of nnFormer versus the UNETR, Swin UNETR, and nnU-Net networks
The semi-supervised iterative training is completed 2 times; the first-iteration model is denoted Semi1, the second-iteration model Semi2, and the fully supervised model Full. The test set of the three models is the same: pulmonary vessel annotation data of 10 COPD patients. The evaluation indices are the same as those used for coarse vessel segmentation in the second chapter, namely Precision, Dice, IoU, and Sensitivity.
Table 2 shows the segmentation performance of the three models. The semi-supervised iterative training aims to improve segmentation accuracy and reduce false-positive segmentation. It can be seen that the Precision of Semi2 reaches 0.903, an improvement of nearly 0.02 over the 0.8802 of Full and of nearly 0.01 over Semi1, achieving the expected effect. It is normal for Sensitivity to decrease once the Precision index rises. Because the pulmonary vessel structure is complex, the terminal diameters are very small, and the diameters are non-uniform, a segmentation result not fully covering the gold standard matters less for the analysis indices; what matters is that the segmentation result contains as few non-vascular regions as possible. In addition, the Dice and IoU metrics of the three models are very close.
Table 2. Performance of the fully supervised model and the two semi-supervised models
In addition, in the disclosed embodiment, the processing method further includes: acquiring an inhalation phase image and an exhalation phase image to be diagnosed and a plurality of set threshold intervals; performing vessel segmentation on the inhalation phase image and the exhalation phase image respectively with the processing method to obtain the corresponding pulmonary vessel images; determining a corresponding parameter response map based on the inhalation phase image, the exhalation phase image, and the plurality of set threshold intervals; and performing pulmonary vessel and/or set-airway elimination correction on the parameter response map with the corresponding pulmonary vessel images, and performing typing diagnosis of chronic obstructive pulmonary disease (COPD).
In the disclosed embodiment, when the expiratory phase image corresponding to the inspiratory phase image to be diagnosed is absent, or the inspiratory phase image corresponding to the expiratory phase image to be diagnosed is absent, the inspiratory phase image is synthesized into a corresponding synthetic expiratory phase image, or the expiratory phase image is synthesized into a corresponding synthetic inspiratory phase image, using a preset synthesizer;
the processing method is used to perform vessel segmentation on the inspiratory phase image and its corresponding synthetic expiratory phase image, respectively, to obtain corresponding first pulmonary vessel images; or vessel segmentation is performed on the expiratory phase image and its corresponding synthetic inspiratory phase image, respectively, to obtain corresponding second pulmonary vessel images;
a first parameter response map is determined based on the inspiratory phase image, the corresponding synthetic expiratory phase image, and the plurality of set threshold intervals; or a second parameter response map is determined based on the expiratory phase image, the corresponding synthetic inspiratory phase image, and the plurality of set threshold intervals;
carrying out pulmonary vessel and/or set airway elimination correction on the first parameter response map by utilizing the corresponding first pulmonary vessel image to obtain a corrected first parameter response map; or, performing pulmonary vessel and/or set airway elimination correction on the second parameter response map by using the corresponding second pulmonary vessel image to obtain a corrected second parameter response map;
Based on the corrected first parameter response map, performing COPD typing diagnosis of chronic obstructive pulmonary disease; or, based on the corrected second parameter response map, performing COPD typing diagnosis of chronic obstructive pulmonary disease.
In the disclosed and other possible embodiments, the plurality of set threshold intervals are configured over the attenuation values of the inhalation phase image or exhalation phase image. For example, the inhalation phase image or exhalation phase image may be configured as an inhalation phase CT image or an exhalation phase CT image. In that case, the set threshold intervals corresponding to the inhalation phase CT image or synthetic inhalation phase CT image may be configured with one or more of greater than -950 HU and/or greater than -856 HU, and likewise the set threshold intervals corresponding to the exhalation phase CT image or synthetic exhalation phase CT image may be configured with one or more of greater than -950 HU and/or greater than -856 HU. Those skilled in the art may also configure the plurality of set threshold intervals according to the type of the inhalation phase or exhalation phase image (CT image, DR image, MRI image, ultrasound image, PET image, CT-PET image, or other medical image).
In the disclosed and other possible embodiments, the study was approved by the Ethics Committee of the First Affiliated Hospital of Guangzhou Medical University and conformed to the ethical standards of the 1964 Declaration of Helsinki and its later amendments, or comparable ethical standards. Informed consent was obtained from all participants. 558 pairs of inspiratory and expiratory CT images were collected from the First Affiliated Hospital of Guangzhou Medical University, from August 2017 to April 2021, to construct dataset 1.
All participants performed lung function tests as directed by the American Thoracic Society and the European Respiratory Society. For those with an FEV1/FVC ratio of less than 0.7, an additional bronchodilation test was performed within 20 minutes after administration of 180 μg of a bronchodilator. Participants were classified into five categories based on the lung function test results: normal, and GOLD 1-4 representing mild to very severe COPD. All participants underwent chest CT scans at maximum inhalation and maximum exhalation, respectively, yielding paired inspiratory and expiratory phase images, and information on sex, age, smoking status, etc. was recorded. Patient information and acquisition parameters are listed in Table 1. The low-dose CT scans used a Siemens device with a tube voltage of 110 kVp and a slice thickness of 1 mm; the CTDIvol was 2.21 mGy. DICOM data were collected and converted to the 3D NIfTI image format. The matrix size of each slice of the inhalation phase or exhalation phase images is 512 × 512, with 280-400 slices per scan.
In the disclosed and other possible embodiments, a set preprocessing threshold interval is acquired before acquiring the inhalation phase image and/or exhalation phase image and the plurality of set threshold intervals. The inhalation phase image and/or exhalation phase image is preprocessed based on the set preprocessing threshold interval, and the preprocessed inhalation phase image or exhalation phase image is then segmented with a preset airway segmentation model to obtain the corresponding inhalation phase airway image or exhalation phase airway image. A preset synthesizer synthesizes the preprocessed inspiratory phase image into a corresponding synthetic expiratory phase image, or the preprocessed expiratory phase image into a corresponding synthetic inspiratory phase image. A first parameter response map is determined from the preprocessed inspiratory phase image, the corresponding synthetic expiratory phase image, and the plurality of set threshold intervals; or a second parameter response map is determined from the preprocessed expiratory phase image, the corresponding synthetic inspiratory phase image, and the plurality of set threshold intervals. Pulmonary vessel and/or set-airway elimination correction is performed on the first parameter response map using the preprocessed inhalation phase or exhalation phase airway image to determine the COPD phenotype; or the same correction is performed on the second parameter response map using the preprocessed inhalation phase or exhalation phase airway image to determine the COPD phenotype.
For example, in the disclosed and other possible embodiments, the set preprocessing threshold interval may be configured as [-1000 HU, 0 HU], limiting the Hounsfield unit (HU) values to that range: pixel values in the inspiratory phase image and/or expiratory phase image smaller than the lower limit of the preprocessing threshold interval are set to -1000 HU, and values larger than the upper limit are set to 0 HU, yielding the preprocessed inspiratory phase image and/or expiratory phase image.
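A one-line NumPy clip implements this preprocessing; the sketch below assumes the CT volume is already loaded as an array of HU values.

```python
import numpy as np

def preprocess_hu(volume: np.ndarray,
                  lower: float = -1000.0, upper: float = 0.0) -> np.ndarray:
    """Clamp HU values to the set preprocessing threshold interval
    [-1000 HU, 0 HU]: values below the lower limit become -1000 HU,
    values above the upper limit become 0 HU."""
    return np.clip(volume, lower, upper)
```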
In addition, the inhalation phase image or exhalation phase image may be configured as a 2D image or a 3D image, and a 3D image may be converted into 2D DICOM slices. The 558 samples were randomly split into training and test sets: 449 samples containing a total of 158,455 slices were used for model training, and 109 samples containing 38,167 DICOM slices were used to test the feasibility of the model.
Meanwhile, to verify the applicability of our model, we collected data from 62 external validation cases (dataset 2). The external validation data were collected from three hospitals using different acquisition devices, and all were conventional-dose CT.
In the disclosed embodiment, the inhalation phase image or exhalation phase image is segmented using a preset airway segmentation model to obtain the corresponding inhalation phase airway image or exhalation phase airway image.
In the disclosed and other possible embodiments, the airway may be extracted from the fixed image (the inhalation phase or exhalation phase image) used in the registration process, since the airway can affect the results generated by the PRM. The preset airway segmentation model may adopt an existing two-stage 3D context-Transformer-based U-Net model proposed by this team and trained on CT images. This two-stage model comprises an initial airway segmentation stage and a fine airway segmentation stage; the two stages share the same sub-network and use different airway masks as inputs. Using this model, the airway of the inhalation phase or exhalation phase image is segmented to obtain the corresponding inhalation phase or exhalation phase airway image (airway tree image).
In the disclosed and other possible embodiments, the preset airway segmentation model may also be configured as a preset airway segmentation model based on a U-Net convolutional neural network, nnU-Net convolutional neural network, or a modification thereof.
In the disclosed embodiments, the inhalation phase images are synthesized into corresponding synthetic exhalation phase images, or the exhalation phase images are synthesized into corresponding synthetic inhalation phase images, using a preset synthesizer.
In a disclosed embodiment, the method for synthesizing the inspiratory phase image into a corresponding synthetic expiratory phase image using a preset synthesizer includes: training the preset synthesizer with inspiratory phase images for training and their corresponding expiratory phase images; and synthesizing the inspiratory phase image into the corresponding synthetic expiratory phase image based on the trained preset synthesizer.
In the disclosed embodiment, the method for training the preset synthesizer with the inspiratory phase images for training and their corresponding expiratory phase images includes: using the first synthesizer G_I of the preset synthesizer to synthesize the inspiratory phase image for training into a corresponding first synthetic expiratory phase image; using the second synthesizer G_E of the preset synthesizer to convert the first synthetic expiratory phase image into a first synthetic inspiratory phase image; calculating the cycle-consistency loss between the inspiratory phase image for training and the first synthetic inspiratory phase image; convolving the inspiratory phase image for training and the first synthetic inspiratory phase image respectively to obtain a corresponding inspiratory phase feature map and first synthetic inspiratory phase feature map, and calculating the perceptual loss between them; using the first preset discriminator D_I to determine whether the first synthetic expiratory phase image is a real image or a synthetic image, and calculating the adversarial loss based on D_I; and adjusting the first synthesizer G_I and the second synthesizer G_E of the preset synthesizer based on the cycle-consistency loss, the perceptual loss, and the adversarial loss, thereby completing the training of the preset synthesizer.
In a disclosed embodiment, the method for synthesizing the inspiratory phase image into a corresponding synthetic expiratory phase image based on the trained preset synthesizer includes: acquiring the trained first synthesizer G_I of the preset synthesizer; and, based on the trained G_I, convolving the inspiratory phase image and synthesizing it into the corresponding synthetic expiratory phase image.
Likewise, in the disclosed embodiment, the method for synthesizing the expiratory phase image into a corresponding synthetic inspiratory phase image using the preset synthesizer includes: training the preset synthesizer with expiratory phase images for training and their corresponding inspiratory phase images; and synthesizing the expiratory phase image into the corresponding synthetic inspiratory phase image based on the trained preset synthesizer.
In a disclosed embodiment, the method for training the preset synthesizer with the expiratory phase images for training and their corresponding inspiratory phase images includes: using the second synthesizer G_E of the preset synthesizer to synthesize the expiratory phase image for training into a corresponding first synthetic inspiratory phase image; using the first synthesizer G_I of the preset synthesizer to convert the first synthetic inspiratory phase image into a first synthetic expiratory phase image; calculating the cycle-consistency loss between the expiratory phase image for training and the first synthetic expiratory phase image; convolving the expiratory phase image for training and the first synthetic expiratory phase image respectively to obtain a corresponding expiratory phase feature map and first synthetic expiratory phase feature map, and calculating the perceptual loss between them; using the second preset discriminator D_E to determine whether the first synthetic inspiratory phase image is a real image or a synthetic image, and calculating the adversarial loss based on D_E; and adjusting the first synthesizer G_I and the second synthesizer G_E of the preset synthesizer based on the cycle-consistency loss, the perceptual loss, and the adversarial loss, thereby completing the training of the preset synthesizer.
In a disclosed embodiment, the method for synthesizing the expiratory phase image into a corresponding synthetic inspiratory phase image based on the trained preset synthesizer includes: acquiring the trained second synthesizer G_E of the preset synthesizer; and, based on the trained G_E, convolving the expiratory phase image and synthesizing it into the corresponding synthetic inspiratory phase image.
Fig. 4 illustrates a flowchart of a method of synthesizing the inspiratory phase images into corresponding synthetic expiratory phase images, or the expiratory phase images into corresponding synthetic inspiratory phase images, using a preset synthesizer according to an embodiment of the present disclosure. As shown in fig. 4, in the disclosed embodiment and other possible embodiments, an image synthesizer based on CycleGAN with a perceptual loss is proposed, named PCycleGAN (fig. 2). PCycleGAN is built on two generators and two discriminators: G_I (first synthesizer or generator), G_E (second synthesizer or generator), D_I (first preset discriminator), and D_E (second preset discriminator), where I denotes the inspiratory phase image and E the expiratory phase image. G_I learns the mapping from inspiratory phase CT to expiratory phase CT, while the second generator G_E learns the mapping from expiratory phase CT to inspiratory phase CT. The first discriminator D_I and second discriminator D_E are responsible for determining whether an image is a true inspiratory phase or true expiratory phase CT image; the second discriminator D_E attempts to distinguish whether G_I(I) is a true expiratory phase CT. Here, embodiments of the present disclosure introduce an adversarial loss, intended to make the generators produce CT images (synthetic expiratory or inspiratory phase images) of high enough quality to fool the discriminators. The first synthesizer G_I generates a pseudo expiratory CT image (synthetic expiratory phase image) G_I(I), which is then input to the second generator G_E to generate a pseudo inspiratory CT image (synthetic inspiratory phase image) G_E(G_I(I)) similar to the true inspiratory phase CT; a cycle-consistency loss is computed to ensure that the generated image can be reliably recovered back to the source image. To obtain the perceptual loss, the inspiratory phase images I and G_E(G_I(I)) can be input into a convolutional model (e.g., a VGG network or another convolutional neural network), and the differences between their high-dimensional features are calculated. The same procedure also applies in the opposite direction (lower part of fig. 2).
In the disclosed and other possible embodiments, the key to the success of GANs is the adversarial loss, which makes the generated image indistinguishable from the real target image. The loss function is as follows:
L_GAN(G, D_Y, X, Y) = E_Y[log D_Y(y)] + E_X[log(1 - D_Y(G(x)))]   (1)
where E denotes the expected value. It can be seen that D tries to maximize the adversarial loss, while G tries to minimize it. To further increase training stability, we replace the negative log-likelihood cost of the adversarial loss with a squared loss function (equation 2). Here X denotes the inspiratory phase CT image or synthetic inspiratory phase CT image I, and Y denotes the expiratory phase CT image or synthetic expiratory phase CT image.
L_GAN(D, G, X, Y) = -E_Y[(D(y) - 1)^2] - E_X[D(G(x))^2]   (2)
In the disclosed and other possible embodiments, although the network can map the same input image to any image in the target domain, the adversarial loss cannot guarantee that a single input x_i is accurately mapped to y_i. Since there are no paired images during training, an important loss function (equation 3) is introduced in the CycleGAN framework to ensure consistency between the input and output images: the cycle-consistency loss.
L_cyc(G_X, G_Y) = E_X[||G_Y(G_X(x)) - x||_1] + E_Y[||G_X(G_Y(y)) - y||_1]   (3)
In the disclosed and other possible embodiments, CycleGAN also introduces an additional loss function (the identity loss function, equation 4), which takes a real sample from the target domain as the generator's input, to ensure that the generator does not arbitrarily change the color of the input image and retains its useful features.
L_idt(G_X, G_Y) = E_X[||G_Y(x) - x||_1] + E_Y[||G_X(y) - y||_1]   (4)
In equations (3) and (4), ||·||_1 denotes the L1 norm.
The loss function L_idt corresponding to the identity loss is introduced to ensure that there are no unwanted changes between the inspiratory and expiratory CT images, i.e., between the inspiratory phase images and their corresponding synthetic expiratory phase images, and between the expiratory phase images and their corresponding synthetic inspiratory phase images.
In the disclosed embodiments, as well as other possible embodiments, introducing a perceptual loss function can effectively improve the visual quality and texture characteristics of the generated image. Compared with loss functions that compare differences between pixels, the perceptual loss function extracts high-dimensional information from the images to capture high-dimensional feature differences between the real and output images. It can be assumed that lung CT contains flow-field-like features that pixel-level loss functions cannot capture; the perceptual loss function more effectively reflects the differences and trends between different volumes. These representations are typically obtained by feeding the images into a pre-trained VGG network: after multiple convolutions and other operations, the image size decreases while the dimensionality increases, representing new features. The formula is as follows:
L_perc(G_X, G_Y, X) = E_{X,Y}[||V(G_Y(G_X(x))) - V(x)||_1]   (5)
In equation (5), V denotes the convolutional model, for example a VGG network model or another convolutional neural network model.
Based on formulas (2), (3), (4) and (5), the model of the embodiments of the present disclosure was named PCycleGAN and its loss function was defined as follows:
L_PCycleGAN = L_GAN(D_Y, G_X, X, Y) + L_GAN(D_X, G_Y, Y, X) + λ·L_cyc(G_X, G_Y) + L_idt(G_X, G_Y) + β·L_perc(G_X, G_Y, X) + β·L_perc(G_Y, G_X, Y)   (6)
In equation (6), the first coefficient λ = 0.2 and the second coefficient β = 1. Those skilled in the art may also configure the first and second coefficients according to actual needs.
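A compact PyTorch sketch of equation (6) follows; the generator and discriminator modules and the VGG feature extractor V are assumed to exist (see the architecture sketches below), and only the loss assembly for the generators is shown.

```python
import torch

def lsgan_g(d_fake: torch.Tensor) -> torch.Tensor:
    """Generator side of the least-squares adversarial loss (eq. 2 form)."""
    return ((d_fake - 1) ** 2).mean()

def pcyclegan_generator_loss(G_I, G_E, D_I, D_E, V, insp, exp,
                             lam: float = 0.2, beta: float = 1.0):
    """insp/exp: batches of inspiratory / expiratory phase CT. Assembles
    equation (6) for the generators, with lambda = 0.2 and beta = 1 as
    in the patent."""
    fake_exp = G_I(insp)       # G_I: inspiratory -> expiratory
    rec_insp = G_E(fake_exp)   # cycle back to inspiratory
    fake_insp = G_E(exp)       # G_E: expiratory -> inspiratory
    rec_exp = G_I(fake_insp)   # cycle back to expiratory

    adv = lsgan_g(D_E(fake_exp)) + lsgan_g(D_I(fake_insp))            # adversarial
    cyc = (rec_insp - insp).abs().mean() + (rec_exp - exp).abs().mean()  # eq. (3)
    idt = (G_E(insp) - insp).abs().mean() + (G_I(exp) - exp).abs().mean()  # eq. (4)
    perc = ((V(rec_insp) - V(insp)).abs().mean()
            + (V(rec_exp) - V(exp)).abs().mean())                     # eq. (5)
    return adv + lam * cyc + idt + beta * perc
```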
Fig. 5 illustrates the network configuration of the preset synthesizer (PCycleGAN) according to an embodiment of the present disclosure. As shown in fig. 5, the preset synthesizer includes two generators and two discriminators: G_I (first synthesizer or generator), G_E (second synthesizer or generator), D_I (first preset discriminator), and D_E (second preset discriminator). Fig. 5(a) represents the first and/or second synthesizer (generator); fig. 5(b) represents D_I (first preset discriminator) and/or D_E (second preset discriminator); fig. 5(c) represents the convolutional model (e.g., VGG16). In addition, the preset synthesizer may instead be configured as a CycleGAN model, a Pix2Pix model, a ResViT model, a CUT model, or the like.
In fig. 5, a real image represents the inhalation phase image or exhalation phase image; a synthetic image represents the synthetic exhalation phase image or synthetic inhalation phase image; and a feature map represents the result of convolving the inspiratory phase image for training and the first synthetic inspiratory phase image (yielding the corresponding inspiratory phase feature map and first synthetic inspiratory phase feature map), or of convolving the expiratory phase image for training and the first synthetic expiratory phase image (yielding the corresponding expiratory phase feature map and first synthetic expiratory phase feature map).
In fig. 5, k in the convolution (Conv) and deconvolution (transposed convolution, TransConv) blocks denotes the number of convolution kernels, and s denotes the convolution stride. The LeakyReLU activation function may be replaced with other existing activation functions as required, such as one or more of ReLU, LeakyReLU, PReLU, ELU, or tanh. Batch normalization (Batch Norm) and pooling may be selected and configured by those skilled in the art; for example, Batch Norm may be used or omitted, and max pooling may be employed. The number of each module may likewise be chosen by those skilled in the art; for example, ×6 indicates that the number of modules is configured as 6.
In fig. 5, in PCycleGAN, U-Net can be used as the backbone network of the generators. The encoder-decoder structure used is shown in fig. 5(a), where each symmetric (skip) connection from the encoder feeds into the corresponding layer of the decoder. The input is a 512 × 512 2D array of the corresponding inhalation or exhalation phase image, with C channels; C can be set to 1, meaning the input is a single-channel array. The encoder consists of one 4 × 4 convolutional layer with stride 2, followed by six groups each combining a LeakyReLU layer, a 4 × 4 convolutional layer with stride 2, and a normalization layer, and finally one group combining a 4 × 4 convolutional layer with stride 2 and a LeakyReLU layer. The decoder consists of seven groups each combining a ReLU layer, a 4 × 4 transposed convolutional layer with stride 2, and a normalization layer, plus one combination of a 4 × 4 up-convolutional layer with stride 2 and a Tanh activation function. During downsampling, the feature map size is halved and the number of channels increases after each module.
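The sketch below shows the encoder/decoder building blocks just described in PyTorch; the channel widths and the normalization type are illustrative assumptions, while the 4 × 4 kernels, stride 2, LeakyReLU/ReLU placement, and final Tanh follow the text.

```python
import torch.nn as nn

def down_block(in_ch: int, out_ch: int, norm: bool = True) -> nn.Sequential:
    """Encoder group: LeakyReLU -> 4x4 stride-2 conv -> (optional) norm.
    Halves the spatial size, as the text describes."""
    layers = [nn.LeakyReLU(0.2), nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1)]
    if norm:
        layers.append(nn.InstanceNorm2d(out_ch))  # normalization layer (assumed type)
    return nn.Sequential(*layers)

def up_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Decoder group: ReLU -> 4x4 stride-2 transposed conv -> norm.
    Doubles the spatial size; a skip connection would concatenate the
    matching encoder feature map onto in_ch before this block."""
    return nn.Sequential(
        nn.ReLU(),
        nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.InstanceNorm2d(out_ch),
    )

# First encoder layer (no activation before it) and the final output layer.
stem = nn.Conv2d(1, 64, 4, stride=2, padding=1)            # C = 1 input channel
head = nn.Sequential(nn.ReLU(),
                     nn.ConvTranspose2d(128, 1, 4, stride=2, padding=1),
                     nn.Tanh())                             # Tanh output activation
```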
In embodiments of the present disclosure, as well as other possible embodiments, a Markovian PatchGAN discriminator may be employed, which distinguishes real from fake images and provides adversarial feedback. PatchGAN classifies image patches rather than whole images. Fig. 5(b) shows a PatchGAN with five convolutional layers, where the input channel can be configured as 1 and the output channel as 1. For an input image, it produces a patch map in which each point corresponds to a region of the input image. Compared with PixelGAN, using PatchGAN as the discriminator has advantages in generating image details and capturing features; classifying per patch also reduces the number of parameters and makes training easier.
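A five-layer PatchGAN consistent with fig. 5(b) might look like the following; the channel progression and normalization type are assumptions, while the five convolutional layers and the 1-channel input/output follow the text.

```python
import torch
import torch.nn as nn

class PatchGAN(nn.Module):
    """Markovian discriminator: maps a 1-channel CT slice to a patch map
    of real/fake scores, one score per receptive-field region."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.InstanceNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1),
            nn.InstanceNorm2d(256), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, 4, stride=1, padding=1),
            nn.InstanceNorm2d(512), nn.LeakyReLU(0.2),
            nn.Conv2d(512, 1, 4, stride=1, padding=1),  # 1-channel patch map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Each output value scores one patch of the 512 x 512 input slice.
scores = PatchGAN()(torch.randn(1, 1, 512, 512))
```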
To compute the perceptual loss, both the generated and the real image may be input into a VGG network to extract their high-dimensional feature maps, after which the loss is calculated. Specifically, a VGG-16 model pre-trained on the ImageNet dataset can be used; this model is typically used for classification tasks. The network structure is shown in fig. 5(c). The present disclosure uses the portion of the VGG-16 model preceding the second pooling layer: two 3 × 3 convolutional layers each followed by a ReLU, the first max-pooling layer, and then two further 3 × 3 convolutional layers each followed by a ReLU. The input image has 1 channel and the output feature map has 128 channels.
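Using torchvision, the truncation can be sketched as below; slicing `features[:9]` ends at the ReLU after conv2_2 (128 channels, before the second pooling layer), and replicating the single CT channel to 3 channels is an assumption, since the ImageNet-pretrained VGG-16 expects RGB input.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class VGGFeatures(nn.Module):
    """VGG-16 truncated before the second pooling layer; outputs the
    128-channel conv2_2 feature map used for the perceptual loss."""
    def __init__(self):
        super().__init__()
        self.slice = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:9].eval()
        for p in self.slice.parameters():
            p.requires_grad_(False)            # frozen feature extractor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if x.shape[1] == 1:                    # replicate single CT channel to RGB
            x = x.repeat(1, 3, 1, 1)
        return self.slice(x)

V = VGGFeatures()
feat = V(torch.randn(1, 1, 512, 512))          # -> (1, 128, 256, 256)
```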
In the disclosed embodiment, a first parameter response map is determined according to the inhalation phase image, the corresponding synthetic exhalation phase image, and the plurality of set threshold intervals; or a second parameter response map is determined according to the exhalation phase image, the corresponding synthetic inhalation phase image, and the plurality of set threshold intervals.
In a disclosed embodiment, the method for determining the first parameter response map from the inhalation phase image, the synthetic exhalation phase image, and the plurality of set threshold intervals includes: registering the inhalation phase image and the corresponding synthetic exhalation phase image to obtain a corresponding first registered image (pair); and determining the first parameter response map (first PRM) based on the first registered image (pair) and the plurality of set threshold intervals.
In a disclosed embodiment, the method for determining the second parameter response map from the exhalation phase image, the corresponding synthetic inhalation phase image, and the plurality of set threshold intervals includes: registering the exhalation phase image and the corresponding synthetic inhalation phase image to obtain a corresponding second registered image (pair); and determining the second parameter response map (second PRM) based on the second registered image (pair) and the plurality of set threshold intervals.
In the disclosed and other possible embodiments, the method of determining the first parameter response map (first PRM) based on the first registered image and the plurality of set threshold intervals includes: performing a dual-threshold operation on the inhalation phase image and the corresponding synthetic exhalation phase image (the first registered image pair) based on the plurality of set threshold intervals to determine the first parameter response map (first PRM). Likewise, the method of determining the second parameter response map (second PRM) based on the second registered image (pair) and the plurality of set threshold intervals includes: performing a dual-threshold operation on the exhalation phase image and the corresponding synthetic inhalation phase image (the second registered image pair) based on the plurality of set threshold intervals to determine the second parameter response map (second PRM).
For example, by setting a plurality of different threshold intervals for the inspiratory and expiratory phase CT images, and based on the two registered images (the inspiratory phase image and its corresponding synthetic expiratory phase image, or the expiratory phase image and its corresponding synthetic inspiratory phase image), each position in the two registered images can be classified into one of four categories, thereby determining the first parameter response map (first PRM) and/or second parameter response map (second PRM). Each category corresponds to a COPD phenotype: an emphysema region, a functional small airway disease (fSAD) region, an uncategorized (featureless) region, and a normal region. The emphysema, fSAD, and normal regions are represented by red, yellow, and green, respectively. Green represents normal: the attenuation value in the inspiratory or synthetic inspiratory CT image is greater than -950 HU, and the voxel (attenuation) value in the expiratory or synthetic expiratory CT image is greater than -856 HU. Red represents emphysema: the attenuation value in the inspiratory or synthetic inspiratory CT image is smaller than -950 HU, and the attenuation value in the expiratory or synthetic expiratory CT image is smaller than -856 HU. Yellow represents fSAD: the attenuation value in the inspiratory or synthetic inspiratory CT image is greater than -950 HU, and the attenuation value in the expiratory or synthetic expiratory CT image is smaller than -856 HU. Furthermore, the proportion of each phenotype can be obtained by dividing the number of voxels of each category by the total number of voxels in the corresponding lung fields (left lung and/or right lung), allowing quantitative analysis of COPD.
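A minimal NumPy sketch of this voxel-wise dual-threshold classification follows, assuming the inspiratory and expiratory volumes are already registered and a binary lung-field mask is available.

```python
import numpy as np

NORMAL, FSAD, EMPHYSEMA, UNCATEGORIZED = 0, 1, 2, 3

def prm_classify(insp_hu: np.ndarray, exp_hu: np.ndarray,
                 lung_mask: np.ndarray) -> np.ndarray:
    """Classify each lung voxel of a registered inspiratory/expiratory
    pair into the four PRM categories using the -950 HU / -856 HU
    thresholds; voxels outside the lung mask stay -1."""
    prm = np.full(insp_hu.shape, -1, dtype=np.int8)
    inside = lung_mask.astype(bool)
    insp_lo, exp_lo = insp_hu < -950, exp_hu < -856
    prm[inside & ~insp_lo & ~exp_lo] = NORMAL        # green
    prm[inside & ~insp_lo &  exp_lo] = FSAD          # yellow
    prm[inside &  insp_lo &  exp_lo] = EMPHYSEMA     # red
    prm[inside &  insp_lo & ~exp_lo] = UNCATEGORIZED
    return prm

def phenotype_fractions(prm: np.ndarray) -> dict:
    """Per-category voxel fraction over the lung fields, for the
    quantitative COPD analysis described above."""
    total = (prm >= 0).sum()
    return {name: (prm == cls).sum() / total
            for name, cls in [("normal", NORMAL), ("fSAD", FSAD),
                              ("emphysema", EMPHYSEMA),
                              ("uncategorized", UNCATEGORIZED)]}
```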
In the disclosed embodiment, before the first parameter response map is determined from the inhalation phase image, the corresponding synthetic exhalation phase image, and the plurality of set threshold intervals, or before the second parameter response map is determined from the exhalation phase image, the corresponding synthetic inhalation phase image, and the plurality of set threshold intervals, lung field segmentation is performed with a lung field segmentation model to obtain the corresponding inhalation phase lung field image, synthetic exhalation phase lung field image, exhalation phase lung field image, or synthetic inhalation phase lung field image. The first parameter response map is then determined from the inhalation phase lung field image, the corresponding synthetic exhalation phase lung field image, and the plurality of set threshold intervals; or the second parameter response map is determined from the exhalation phase lung field image, the corresponding synthetic inhalation phase lung field image, and the plurality of set threshold intervals.
In the disclosed and other possible embodiments, the method of determining the first parameter response map (first PRM) based on the first registered image and the plurality of set threshold intervals further includes: performing lung field segmentation on the inspiratory phase image or the synthetic expiratory phase image with a lung field segmentation model to obtain a first lung field image; and performing the dual-threshold operation on the inhalation phase image and the corresponding synthetic exhalation phase image (first registered image) within the first lung field image, based on the plurality of set threshold intervals, to determine the first parameter response map (first PRM) corresponding to the first lung field image.
Meanwhile, in the disclosed embodiment and other possible embodiments, the method for determining the second parameter response map (second PRM) based on the second registered image and the plurality of set threshold intervals further includes: performing lung field segmentation on the expiratory phase image or the synthetic inspiratory phase image with the lung field segmentation model to obtain a second lung field image; and performing the dual-threshold operation on the exhalation phase image and the corresponding synthetic inhalation phase image (second registered image) within the second lung field image, based on the plurality of set threshold intervals, to determine the second parameter response map (second PRM) corresponding to the second lung field image.
In the disclosed embodiments and other possible embodiments, performing lung field segmentation on the inspiratory phase image or synthetic expiratory phase image to obtain the first lung field image, or on the expiratory phase image or synthetic inspiratory phase image to obtain the second lung field image, comprises lung segmentation (lung field segmentation) and labeling operations. Lung segmentation excludes the influence of external factors on the generated images and facilitates the subsequent registration process. Hofmanninger et al. proposed a model for segmenting the lung region and obtaining lung markers; using this model, a labeling operation is performed to extract the lung region (lung field region) from the original image (the inspiratory phase image, synthetic expiratory phase image, expiratory phase image, or synthetic inspiratory phase image).
In the disclosed and other possible embodiments, the lung field segmentation model can also be configured as a lung field segmentation model based on a U-Net convolutional neural network, nnU-Net convolutional neural network, or a modification thereof.
In the disclosed and other possible embodiments, the registration process is performed using the Elastix tool. Registration mainly comprises two steps: the first is an affine transformation allowing translation, rotation, scaling, and shearing of the image to be registered (the moving image), and the second is a non-rigid B-spline transformation. The B-spline transformation is modeled as a weighted sum of B-spline basis functions placed on a uniform control-point grid; B-spline basis functions have local support, which benefits fast computation. The image to be registered (moving image) can be configured as the synthetic inspiratory phase image or synthetic expiratory phase image, with the fixed image configured as the inspiratory phase image or expiratory phase image. Likewise, the moving image may be configured as the inhalation phase image or exhalation phase image, with the fixed image configured as the synthetic exhalation phase image or synthetic inhalation phase image.
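The affine-then-B-spline pipeline can be sketched with the itk-elastix Python wrapper of the Elastix tool (the wrapper, its default parameter maps, and the file names are assumptions; the patent only names Elastix itself):

```python
import itk

# Placeholder file names: fixed image is the acquired phase, moving
# image is its synthetic counterpart.
fixed = itk.imread("inspiratory.nii.gz", itk.F)
moving = itk.imread("synthetic_expiratory.nii.gz", itk.F)

# Two-stage registration: affine first, then non-rigid B-spline.
params = itk.ParameterObject.New()
params.AddParameterMap(params.GetDefaultParameterMap("affine"))
params.AddParameterMap(params.GetDefaultParameterMap("bspline"))

registered, transform_params = itk.elastix_registration_method(
    fixed, moving, parameter_object=params)
itk.imwrite(registered, "registered_pair.nii.gz")
```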
In the disclosed embodiments and other possible embodiments, those skilled in the art may also select other existing registration algorithms, such as a SIFT-based registration algorithm, to register the inhalation phase image with its corresponding synthetic exhalation phase image, or the exhalation phase image with its corresponding synthetic inhalation phase image.
In the disclosed embodiment, when performing the set-airway elimination correction, the method for determining the set airway includes: determining the airway wall diameter of the airways that affect COPD, and determining the set airway based on that wall diameter.
In the disclosed and other possible embodiments, the wall diameter may be configured as 2 mm, and airways with a wall diameter greater than 2 mm are configured as the set airway. The wall diameter may likewise be configured by those skilled in the art according to actual practice.
In a disclosed embodiment, the method of determining the set airway based on the wall diameter includes: measuring the airway wall diameters in the inhalation phase airway image or exhalation phase airway image, respectively; and determining the airways whose measured wall diameter is greater than or equal to the configured wall diameter as the set airway.
The main execution body of the lung image processing method may be a lung image processing apparatus. For example, the method may be executed by a terminal device, a server, or another processing device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. In some possible implementations, the lung image processing method may be implemented by a processor invoking computer-readable instructions stored in a memory.
It will be appreciated by those skilled in the art that in the above lung image processing method of the specific embodiment, the written order of steps is not meant to imply a strict order of execution and should not be construed as limiting the implementation, but rather should be determined by the function and possible inherent logic of each step.
Meanwhile, the present disclosure also proposes a lung image processing apparatus, including: a first processing unit for training the first segmentation model using a first number of first lung images and their corresponding lung vessel label images; performing lung vessel segmentation on a second number of second lung images based on the trained first segmentation model to obtain corresponding first lung vessel segmentation images; the second processing unit is used for selecting the corresponding first pulmonary blood vessel segmentation image to obtain a selected first pulmonary blood vessel segmentation image; training a second segmentation model by using the first number of first lung images and the corresponding lung vessel label images, the selected first lung vessel segmentation images and the corresponding second lung images; and performing pulmonary vessel segmentation on the second lung images with the third number remaining after the selection based on the trained second segmentation model to obtain corresponding second pulmonary vessel segmentation images.
In a disclosed embodiment, the lung image processing apparatus further comprises: the third processing unit is used for acquiring an inhalation phase image, an exhalation phase image and a plurality of set threshold intervals to be diagnosed; the processing method is utilized to respectively conduct blood vessel segmentation on the inhalation phase image and the exhalation phase image, and corresponding pulmonary blood vessel images are obtained; determining a corresponding parameter response graph based on the inhalation phase image, the exhalation phase image and a plurality of set threshold intervals; and respectively utilizing the corresponding pulmonary vessel images to carry out pulmonary vessel and/or set airway elimination correction on the parameter response image, and carrying out COPD typing diagnosis of the chronic obstructive pulmonary disease.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementation of the method may refer to the description of the foregoing lung image processing method embodiments, which is not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described lung image processing method. The computer readable storage medium may be a non-volatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured as the above-described lung image processing method. Wherein the electronic device may be provided as a terminal, server or other modality of device.
Fig. 6 is a block diagram of an electronic device 800, according to an example embodiment. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 6, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen between the electronic device 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800, a relative positioning of the components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of a user's contact with the electronic device 800, an orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the electronic device 800 and other devices, either wired or wireless. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi,2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of electronic device 800 to perform the above-described methods.
Fig. 7 is a block diagram illustrating an electronic device 1900 according to an example embodiment. For example, electronic device 1900 may be provided as a server. Referring to FIG. 7, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable Compact Disk Read-Only Memory (CD-ROM), a Digital Versatile Disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., an optical pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), may be personalized with state information of the computer readable program instructions, and this electronic circuitry may execute the computer readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen in order to best explain the principles of the embodiments, their practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A method of processing a lung image, comprising:
training a first segmentation model by using a first number of first lung images and corresponding lung vessel label images; performing lung vessel segmentation on a second number of second lung images based on the trained first segmentation model to obtain corresponding first lung vessel segmentation images;
selecting the corresponding first lung vessel segmentation image to obtain a selected first lung vessel segmentation image; training a second segmentation model by using the first number of first lung images and the corresponding lung vessel label images, the selected first lung vessel segmentation images and the corresponding second lung images; and performing pulmonary vessel segmentation on a third number of second lung images remaining after the selection, based on the trained second segmentation model, to obtain corresponding second pulmonary vessel segmentation images.
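For orientation, the following is a minimal sketch of the two-stage flow recited in claim 1: train on the labeled pairs, pseudo-label the unlabeled second images, keep the pseudo-segmentations that pass a selection criterion, and retrain. The names SegModel, train(), predict() and select_fn are hypothetical placeholders, not anything defined by the patent.

```python
# Illustrative sketch only; SegModel, train() and predict() are assumed
# stand-ins for an actual segmentation network and its training loop.

def two_stage_segmentation(first_imgs, vessel_labels, second_imgs, select_fn):
    # Stage 1: train the first segmentation model on the labeled pairs.
    model_1 = train(SegModel(), first_imgs, vessel_labels)

    # Pseudo-label every second (unlabeled) lung image.
    first_segs = [predict(model_1, img) for img in second_imgs]

    # Keep the pseudo-segmentations that pass the selection criterion.
    keep = [i for i, seg in enumerate(first_segs) if select_fn(seg)]

    # Stage 2: retrain on real labels plus the selected pseudo-labels.
    model_2 = train(SegModel(),
                    first_imgs + [second_imgs[i] for i in keep],
                    vessel_labels + [first_segs[i] for i in keep])

    # Segment the remaining ("third number" of) second images with model 2.
    rest = [second_imgs[i] for i in range(len(second_imgs)) if i not in keep]
    return model_2, [predict(model_2, img) for img in rest]
```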
2. The method of processing according to claim 1, further comprising: acquiring a performance index of the second segmentation model;
if the performance index is lower than the set performance index, selecting the corresponding second pulmonary blood vessel segmentation image to obtain a selected second pulmonary blood vessel segmentation image;
training a third segmentation model by using the first number of first lung images and corresponding lung vessel label images, the selected second lung vessel segmentation images and corresponding second lung images;
performing pulmonary vessel segmentation on a fourth number of second lung images remaining after the selection, based on the trained third segmentation model, to obtain corresponding second pulmonary vessel segmentation images;
repeating the above process until the performance index of the final segmentation model is higher than or equal to the set performance index; and/or,
the method for selecting the corresponding first pulmonary blood vessel segmentation image to obtain the selected first pulmonary blood vessel segmentation image comprises the following steps:
obtaining a segmentation index corresponding to each first pulmonary vessel segmentation image and a first set segmentation index;
selecting the corresponding first pulmonary blood vessel segmentation image based on the segmentation index corresponding to each first pulmonary blood vessel segmentation image and the first set segmentation index to obtain a selected first pulmonary blood vessel segmentation image; and/or,
the method for selecting the corresponding first pulmonary blood vessel segmentation image based on the segmentation index corresponding to each first pulmonary blood vessel segmentation image and the first set segmentation index to obtain the selected first pulmonary blood vessel segmentation image comprises the following steps:
if the segmentation index corresponding to the first pulmonary blood vessel segmentation image is greater than or equal to the first set segmentation index, determining the first pulmonary blood vessel segmentation image as a selected first pulmonary blood vessel segmentation image; and/or,
the method for selecting the corresponding second pulmonary blood vessel segmentation image to obtain the selected second pulmonary blood vessel segmentation image comprises the following steps:
obtaining a segmentation index corresponding to each second pulmonary vessel segmentation image and a second set segmentation index;
selecting the corresponding second pulmonary blood vessel segmentation image based on the segmentation index corresponding to each second pulmonary blood vessel segmentation image and the second set segmentation index to obtain a selected second pulmonary blood vessel segmentation image; and/or,
the method for selecting the corresponding second pulmonary blood vessel segmentation image based on the segmentation index corresponding to each second pulmonary blood vessel segmentation image and the second set segmentation index to obtain the selected second pulmonary blood vessel segmentation image comprises the following steps:
if the segmentation index corresponding to the second pulmonary blood vessel segmentation image is greater than or equal to the second set segmentation index, determining the second pulmonary blood vessel segmentation image as the selected second pulmonary blood vessel segmentation image.
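Claim 2's repeat-until-threshold refinement might be driven by a loop of the following shape. This is a sketch under assumptions: segmentation_index() and performance_of() stand in for whichever quality metrics the implementer chooses (a Dice score against held-out labels would be one natural choice), and train(), predict() and SegModel are the same hypothetical helpers as in the sketch after claim 1.

```python
# Hypothetical driver for the claim-2 iterative refinement; the metric
# helpers segmentation_index() and performance_of() are assumptions.

def iterative_refine(labeled, labels, pool, target_perf, index_thresh):
    model = None
    while pool:
        model = train(SegModel(), labeled, labels)      # (re)train
        if performance_of(model) >= target_perf:
            break   # performance index reached the set value; stop
        segs = [predict(model, img) for img in pool]
        # Selection rule: keep segmentations whose segmentation index
        # is greater than or equal to the set segmentation index.
        keep = [i for i, s in enumerate(segs)
                if segmentation_index(s) >= index_thresh]
        if not keep:
            break   # nothing passed selection; avoid looping forever
        labeled = labeled + [pool[i] for i in keep]
        labels = labels + [segs[i] for i in keep]
        pool = [pool[i] for i in range(len(pool)) if i not in keep]
    return model
```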
3. The method of processing according to any one of claims 1-2, wherein determining a pulmonary vessel label image corresponding to the first number of first lung images comprises:
respectively carrying out first-time pulmonary vessel segmentation on the first number of first pulmonary images by using a trained preset pulmonary vessel convolution segmentation model to obtain corresponding first pulmonary vessel label images;
performing second lung vessel segmentation on the first number of first lung images by using a machine learning segmentation model to obtain corresponding second lung vessel label images; wherein the caliber of the pulmonary blood vessel in the second pulmonary blood vessel label image is smaller than that in the first pulmonary blood vessel label image;
respectively fusing the first pulmonary blood vessel label image and the second pulmonary blood vessel label image to obtain pulmonary blood vessel label images corresponding to the first number of first pulmonary images; and/or,
the method for respectively fusing the first pulmonary blood vessel label image and the second pulmonary blood vessel label image to obtain the pulmonary blood vessel label images corresponding to the first number of first pulmonary images comprises the following steps:
respectively carrying out position superposition on the first pulmonary blood vessel label image and the second pulmonary blood vessel label image to obtain pulmonary blood vessel label images corresponding to the first number of first pulmonary images; and/or,
the method for performing the second pulmonary vessel segmentation on the first number of first pulmonary images respectively by using the machine learning segmentation model to obtain the corresponding second pulmonary vessel label images comprises the following steps:
respectively carrying out multi-scale representation on the first lung images of the first number to obtain corresponding multi-scale lung images;
and respectively extracting features of the multi-scale lung images, and classifying the extracted features by using a preset classifier to obtain a corresponding second lung vessel label image.
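A minimal reading of claim 3, sketched below: "position superposition" is interpreted here as a voxel-wise union of the two binary label masks, and the multi-scale representation as the volume smoothed at several Gaussian scales and stacked as per-voxel feature channels. Both interpretations are assumptions; the claim does not fix the exact operators.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_labels(cnn_mask, ml_mask):
    # "Position superposition" read as a voxel-wise union: a voxel is
    # labeled vessel if either segmenter marked it, so the fine vessels
    # from the machine-learning label extend the CNN label.
    return np.logical_or(cnn_mask > 0, ml_mask > 0).astype(np.uint8)

def multiscale_features(volume, sigmas=(1.0, 2.0, 4.0)):
    # One plausible multi-scale representation: the CT volume smoothed
    # at several Gaussian scales, stacked along a feature axis for a
    # conventional per-voxel classifier. The sigma values are assumptions.
    return np.stack([gaussian_filter(volume, s) for s in sigmas], axis=-1)
```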
4. A method according to any of claims 1-3, characterized in that during training of the segmentation model, a loss of the segmentation model is calculated, and the loss is used to adjust the network parameters of the segmentation model; and/or,
the method for calculating the loss of the segmentation model comprises the following steps: obtaining a Dice loss function and a cross entropy loss function;
calculating a first loss value of the Dice loss function and a second loss value of the cross entropy loss function, respectively;
configuring the sum of the first loss value and the second loss value as the loss value of the segmentation model; and/or,
the method for calculating the loss function of the segmentation model further comprises the following steps: setting a loss adjustment for the Dice loss function and the cross entropy loss function;
determining, based on the loss adjustment, whether the segmentation model calculates a loss by means of the Dice loss function and the cross entropy loss function; and/or,
the method for setting the loss adjustment for the Dice loss function and the cross entropy loss function comprises the following steps:
calculating a plurality of difference values of segmentation pixel points between a lung blood vessel segmentation image corresponding to a lung blood vessel segmentation image to be detected and a lung blood vessel label image corresponding to the lung blood vessel segmentation image output by the segmentation model; calculating the average value among the plurality of difference values;
if the mean value is smaller than a set value, the segmentation model does not calculate loss; otherwise, the segmentation model calculates a loss by selecting the Dice loss function and the cross entropy loss function.
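A sketch of how claim 4's loss might look in PyTorch, assuming a two-channel (background/vessel) network output. The gating in gated_loss() mirrors the claimed "loss adjustment": when the mean per-voxel disagreement with the label is already below a set value, no loss is computed. The function names and channel layout are assumptions, not the patent's code.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, eps=1e-6):
    # Claim-4 style loss: the sum of a soft Dice term and a cross
    # entropy term. logits: (N, 2, ...); target: (N, ...) with 0/1 labels.
    ce = F.cross_entropy(logits, target)             # second loss value
    prob = torch.softmax(logits, dim=1)[:, 1]        # vessel probability
    fg = (target == 1).float()
    inter = (prob * fg).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + fg.sum() + eps)  # first loss value
    return dice + ce                                 # sum of the two values

def gated_loss(logits, target, set_value):
    # Claimed "loss adjustment", sketched: skip the loss entirely when
    # the mean disagreement between prediction and label is small.
    pred = logits.argmax(dim=1)
    mean_diff = (pred != target).float().mean()
    if mean_diff < set_value:
        return torch.zeros((), device=logits.device)  # no loss calculated
    return combined_loss(logits, target)
```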
5. The method of any of claims 1-4, further comprising, prior to said training the first segmentation model with the first number of first lung images and their corresponding pulmonary vessel label images: acquiring the first number of first lung images, their corresponding lung vessel label images and the second number of second lung images; and/or,
Before the training of the first segmentation model using the first number of first lung images and their corresponding lung vessel label images, further comprising: performing lung field segmentation on the first number of first lung images and the second number of second lung images respectively to obtain corresponding first lung field images and second lung field images;
further, training the first segmentation model by using a first number of first lung field images and corresponding lung vessel label images; performing pulmonary vessel segmentation on a second number of second lung field images based on the trained first segmentation model to obtain corresponding first pulmonary vessel segmentation images;
selecting the corresponding first lung vessel segmentation image to obtain a selected first lung vessel segmentation image; training a second segmentation model by using the first number of first lung field images and the corresponding lung vessel label images, the selected first lung vessel segmentation images and the corresponding second lung field images; and performing pulmonary vessel segmentation on a third number of second lung field images remaining after the selection, based on the trained second segmentation model, to obtain corresponding second pulmonary vessel segmentation images.
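Claim 5's lung field preprocessing amounts to restricting every CT volume to the lungs before vessel segmentation. A sketch follows; the choice of -1000 HU (air) as the fill value for non-lung voxels is an assumption, as is the availability of a binary lung mask from the lung field segmentation step.

```python
import numpy as np

def apply_lung_field(ct_volume, lung_mask, fill_hu=-1000):
    # Keep voxels inside the lung field; fill everything else with an
    # air-like HU value so the segmentation model never sees non-lung
    # anatomy. fill_hu=-1000 is an assumed convention, not the patent's.
    out = np.full_like(ct_volume, fill_hu)
    inside = lung_mask > 0
    out[inside] = ct_volume[inside]
    return out
```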
6. The process of any one of claims 1-5, further comprising:
acquiring an inhalation phase image and an exhalation phase image to be diagnosed, and a plurality of set threshold intervals;
performing blood vessel segmentation on the inhalation phase image and the exhalation phase image respectively by using the processing method to obtain corresponding pulmonary blood vessel images;
determining a corresponding parameter response graph based on the inhalation phase image, the exhalation phase image and a plurality of set threshold intervals;
and respectively utilizing the corresponding pulmonary vessel images to carry out pulmonary vessel and/or set airway elimination correction on the parameter response map, and performing chronic obstructive pulmonary disease (COPD) typing diagnosis.
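For concreteness, a parameter response map (PRM) of the kind claim 6 refers to can be sketched as below, assuming the exhalation scan has already been registered to the inhalation scan. The -950 HU inspiratory and -856 HU expiratory cut-offs are the values commonly used in the PRM literature; they stand in for the patent's unspecified "set threshold intervals".

```python
import numpy as np

def parameter_response_map(insp_hu, exp_hu_reg, insp_thr=-950, exp_thr=-856):
    # Per-voxel PRM classes on registered inhale/exhale CT (HU values):
    # 0 = normal, 1 = functional small-airway disease (fSAD),
    # 2 = emphysema. The thresholds are literature defaults, used here
    # only as an example of "a plurality of set threshold intervals".
    air_trapping = exp_hu_reg < exp_thr
    emphysematous = insp_hu < insp_thr
    prm = np.zeros(insp_hu.shape, dtype=np.uint8)
    prm[air_trapping & ~emphysematous] = 1
    prm[air_trapping & emphysematous] = 2
    return prm
```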
7. The processing method according to claim 6, wherein when the exhalation phase image corresponding to the inhalation phase image to be diagnosed is absent, or the inhalation phase image corresponding to the exhalation phase image to be diagnosed is absent, the inhalation phase image is synthesized into a corresponding synthetic exhalation phase image, or the exhalation phase image is synthesized into a corresponding synthetic inhalation phase image, using a preset synthesizer;
performing blood vessel segmentation on the inhalation phase image and the corresponding synthetic exhalation phase image respectively by using the processing method to obtain a corresponding first pulmonary blood vessel image; or, performing blood vessel segmentation on the exhalation phase image and the corresponding synthetic inhalation phase image respectively by using the processing method to obtain a corresponding second pulmonary blood vessel image;
determining a first parameter response map based on the inhalation phase image, the corresponding synthetic exhalation phase image and a plurality of set threshold intervals; or determining a second parameter response map based on the exhalation phase image, the corresponding synthetic inhalation phase image and a plurality of set threshold intervals;
carrying out pulmonary vessel and/or set airway elimination correction on the first parameter response map by utilizing the corresponding first pulmonary vessel image to obtain a corrected first parameter response map; or, performing pulmonary vessel and/or set airway elimination correction on the second parameter response map by using the corresponding second pulmonary vessel image to obtain a corrected second parameter response map;
based on the corrected first parameter response map, performing COPD typing diagnosis; or, based on the corrected second parameter response map, performing COPD typing diagnosis.
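The elimination correction in claims 6 and 7 can be read as masking vessel (and, optionally, set airway) voxels out of the response map before the class percentages used for typing are computed; a sketch under that reading:

```python
import numpy as np

def correct_prm(prm, vessel_mask, airway_mask=None):
    # Remove non-parenchymal voxels from the parameter response map so
    # segmented vessels (and an optional set airway mask) do not bias
    # the per-class percentages used for COPD typing.
    corrected = prm.copy()
    corrected[vessel_mask > 0] = 0
    if airway_mask is not None:
        corrected[airway_mask > 0] = 0
    return corrected
```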
8. A lung image processing apparatus, comprising:
a first processing unit for training the first segmentation model using a first number of first lung images and their corresponding lung vessel label images; performing lung vessel segmentation on a second number of second lung images based on the trained first segmentation model to obtain corresponding first lung vessel segmentation images;
a second processing unit for selecting the corresponding first pulmonary blood vessel segmentation image to obtain a selected first pulmonary blood vessel segmentation image; training a second segmentation model by using the first number of first lung images and the corresponding lung vessel label images, the selected first lung vessel segmentation images and the corresponding second lung images; and performing pulmonary vessel segmentation on a third number of second lung images remaining after the selection, based on the trained second segmentation model, to obtain corresponding second pulmonary vessel segmentation images.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the lung image processing method of any of claims 1 to 7.
10. A computer-readable storage medium, on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the lung image processing method according to any of claims 1 to 7.
CN202310941660.9A 2023-07-28 2023-07-28 Lung image processing method and device, electronic equipment and storage medium Pending CN117218133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310941660.9A CN117218133A (en) 2023-07-28 2023-07-28 Lung image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310941660.9A CN117218133A (en) 2023-07-28 2023-07-28 Lung image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117218133A true CN117218133A (en) 2023-12-12

Family

ID=89046936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310941660.9A Pending CN117218133A (en) 2023-07-28 2023-07-28 Lung image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117218133A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117746267A (en) * 2023-12-14 2024-03-22 广西环保产业投资集团有限公司 Crown extraction method, device and medium based on semi-supervised active learning
CN117593322A (en) * 2024-01-19 2024-02-23 吉林大学第一医院 Target area automatic sketching method and device, electronic equipment and readable storage medium
CN117593322B (en) * 2024-01-19 2024-04-09 吉林大学第一医院 Target area automatic sketching method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
US11443428B2 (en) Systems and methods for probablistic segmentation in anatomical image processing
CN107492099B (en) Medical image analysis method, medical image analysis system, and storage medium
CN112767329B (en) Image processing method and device and electronic equipment
EP3477589B1 (en) Method of processing medical image, and medical image processing apparatus performing the method
CN117218133A (en) Lung image processing method and device, electronic equipment and storage medium
US10667776B2 (en) Classifying views of an angiographic medical imaging system
KR20210107667A (en) Image segmentation method and apparatus, electronic device and storage medium
US20220262105A1 (en) Systems, methods, and apparatuses for the generation of source models for transfer learning to application specific models used in the processing of medical imaging
CN112541928A (en) Network training method and device, image segmentation method and device and electronic equipment
CN114820584B (en) Lung focus positioner
WO2021259391A2 (en) Image processing method and apparatus, and electronic device and storage medium
EP3973508A1 (en) Sampling latent variables to generate multiple segmentations of an image
CN115423819A (en) Method and device for segmenting pulmonary blood vessels, electronic device and storage medium
CN116797554A (en) Image processing method and device
CN117152442B (en) Automatic image target area sketching method and device, electronic equipment and readable storage medium
Velichko et al. A Comprehensive Review of Deep Learning Approaches for Magnetic Resonance Imaging Liver Tumor Analysis
Singh et al. Semantic segmentation of bone structures in chest X-rays including unhealthy radiographs: A robust and accurate approach
CN115131290A (en) Image processing method
CN116958103A (en) COPD phenotype determination method and device, electronic equipment and storage medium
Feng et al. Research and application of tongue and face diagnosis based on deep learning
CN114418931B (en) Method and device for extracting residual lung lobes after operation, electronic equipment and storage medium
CN113553460B (en) Image retrieval method and device, electronic device and storage medium
CN116523914B (en) Aneurysm classification recognition device, method, equipment and storage medium
US20230274424A1 (en) Appartus and method for quantifying lesion in biometric image
CN114418931A (en) Method and device for extracting residual lung lobes after operation, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination