CN111326259A - Disease trend grade determining method, device, equipment and storage medium - Google Patents

Disease trend grade determining method, device, equipment and storage medium

Info

Publication number
CN111326259A
Authority
CN
China
Prior art keywords
disease
lung
lobe
lung lobe
region
Prior art date
Legal status
Pending
Application number
CN202010260500.4A
Other languages
Chinese (zh)
Inventor
李月
蔡杭
魏征
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202010260500.4A
Publication of CN111326259A

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computed tomography [CT]
    • A61B6/032 Transmission computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Molecular Biology (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Pulmonology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Quality & Reliability (AREA)
  • Physiology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a method, a device, equipment and a storage medium for determining a disease trend grade. The method comprises the following steps: obtaining a lung lobe segmentation image; determining a first disease and a second disease of the focus region of each lung lobe in the lung lobe segmentation image, wherein the second disease can convert unidirectionally into the first disease; and determining a trend grade of the first disease for each lung lobe based on the known grade of the first disease and on the second disease. The invention can predict the development trend of disease on the lung lobes and thereby assist doctors in determining a treatment plan.

Description

Disease trend grade determining method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a disease trend grade determining method, device, equipment and storage medium.
Background
Both lung cancer and chronic obstructive pulmonary disease are chronic diseases with a slow onset. Chronic obstructive pulmonary disease (COPD) is a common, preventable and treatable disease characterized by persistent respiratory symptoms and airflow limitation, caused by airway and/or alveolar abnormalities resulting from significant exposure to harmful particles or gases. According to existing clinical and technical methods, the early stage of lung cancer presents as pulmonary nodules, and the early stage of chronic obstructive pulmonary disease presents as small airway lesions. Pulmonary nodules and small airway lesions are inconspicuous at onset and go unnoticed by most patients, but once they progress to lung cancer or chronic obstructive pulmonary disease, treatment becomes difficult and costly.
If the development trend of lung cancer or chronic obstructive pulmonary disease on a specific lung lobe could be predicted, this would be of great significance for targeted therapy and surgery on that lung lobe, and doctors could devise reasonable treatment plans according to the predicted grade. At present, however, the trend of disease progression on the lung lobes cannot be predicted.
Disclosure of Invention
The invention mainly aims to provide a disease trend grade determining method, device, equipment and storage medium, so as to solve the problem that the development trend of a disease on the lung lobes cannot be predicted.
To achieve the above object, in a first aspect, the present invention provides a disease tendency grade determination method, comprising:
obtaining a lung lobe segmentation image;
determining a first disease and a second disease of a focus area of each lung lobe in the lung lobe segmentation image, wherein the second disease can be unidirectionally converted into the first disease;
determining a trend grade of the first disease for each lung lobe based on the known grade of the first disease and on the second disease, respectively.
Optionally, the determining a trend grade of the first disease for each lung lobe based on the known grade of the first disease and the second disease, respectively, comprises:
locating a first region of the first disease and a second region of the second disease of each lobe, respectively;
determining a trend grade of the first disease for each lung lobe according to a ratio of the first region to the second region of each lung lobe, respectively, based on the known grade of the first disease.
Optionally, the determining a trend grade of the first disease for each lung lobe according to a ratio of the first region to the second region of each lung lobe, respectively, based on the known grade of the first disease, includes:
obtaining a first volume of the first disease on each lobe of the lung according to the first region, respectively, and obtaining a second volume of the second disease on each lobe of the lung according to the second region, respectively;
calculating a ratio of the first volume to the second volume for each lung lobe separately;
determining a trend grade of the first disease of the corresponding lung lobe according to a ratio of the first volume and the second volume of each lung lobe respectively based on the grade of the first disease.
Optionally, the determining a trend grade of the first disease for each lung lobe according to a ratio of the first region to the second region of each lung lobe, respectively, based on the known grade of the first disease, includes:
determining a next grade lighter than the known grade of the first disease as the trend grade of the first disease in response to the ratio of the first region to the second region being greater than a preset threshold;
determining a previous grade more severe than the known grade of the first disease as the trend grade of the first disease in response to the ratio of the first region to the second region being less than or equal to the preset threshold.
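For illustration only, the following Python sketch shows one way the ratio-threshold trend logic above could be expressed. The grade scale, the example threshold of 0.5 and all function and variable names are assumptions introduced here, not values or names taken from the claimed method.

```python
def trend_grade(first_region_volume, second_region_volume, known_grade,
                grade_scale, threshold=0.5):
    """Sketch of the ratio-threshold trend logic described above.

    grade_scale is assumed to be ordered from lightest to most severe,
    e.g. ["GOLD 0", "GOLD 1", "GOLD 2", "GOLD 3", "GOLD 4"]; the 0.5
    threshold is a placeholder, not a value given in the description.
    """
    # ratio of the first disease region to the second disease region
    ratio = first_region_volume / second_region_volume
    idx = grade_scale.index(known_grade)
    if ratio > threshold:
        # trend toward the next, lighter grade
        idx = max(idx - 1, 0)
    else:
        # trend toward the previous, more severe grade
        idx = min(idx + 1, len(grade_scale) - 1)
    return grade_scale[idx]
```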
Optionally, the first disease of the focal region of each lung lobe in the lung lobe segmentation image is chronic obstructive pulmonary disease, and the second disease of the focal region of each lung lobe in the lung lobe segmentation image is a small airway lesion; or
The first disease of the focal region of each lung lobe in the lung lobe segmentation image is lung cancer, and the second disease of the focal region of each lung lobe in the lung lobe segmentation image is a lung nodule.
Further, before determining the trend rating of the first disease for each lung lobe based on the known rating of the first disease and the second disease, respectively, the method further comprises:
inputting each lung lobe segmentation image into a preset neural network, extracting image features through a residual network of the preset neural network, and determining the known grade of the first disease through a classification network of the preset neural network.
Optionally, the acquiring the lung lobe segmentation image includes:
acquiring lung images, wherein the lung images comprise full inhalation phase lung images and full exhalation phase lung images;
respectively carrying out lung lobe segmentation on the full inhalation phase lung image and the full exhalation phase lung image to obtain a first lung lobe segmentation image and a second lung lobe segmentation image;
the determining the first disease and the second disease of the focal region of each lung lobe in the lung lobe segmentation image comprises:
determining the first disease according to the first lung lobe segmentation image or the second lung lobe segmentation image;
and determining the second disease according to the first lung lobe segmentation image and the second lung lobe segmentation image.
In a second aspect, the present invention provides a disease tendency level determination apparatus comprising:
the acquisition module is used for acquiring a lung lobe segmentation image;
the first determining module is used for determining a first disease and a second disease of a focus area of each lung lobe in the lung lobe segmentation image, and the second disease can be unidirectionally converted into the first disease;
a second determining module, configured to determine a trend grade of the first disease for each lung lobe based on the known grade of the first disease and on the second disease, respectively.
Optionally, the second determining module includes:
a positioning unit for positioning a first region of the first disease and a second region of the second disease of each lung lobe, respectively;
a determining unit, configured to determine a trend grade of the first disease of each lung lobe according to a ratio of the first region to the second region of each lung lobe, respectively, based on a known grade of the first disease.
Optionally, the determining unit is specifically configured to obtain a first volume of the first disease on each lobe of the lung according to the first region, and obtain a second volume of the second disease on each lobe of the lung according to the second region;
calculating a ratio of the first volume to the second volume for each lung lobe separately;
determining a trend grade of the first disease of the corresponding lung lobe according to a ratio of the first volume and the second volume of each lung lobe respectively based on the grade of the first disease.
Optionally, the determining unit is further specifically configured to determine, as the trend grade of the first disease, a next grade lighter than the known grade of the first disease in response to the ratio of the first region to the second region being greater than a preset threshold;
and to determine a previous grade more severe than the known grade of the first disease as the trend grade of the first disease in response to the ratio of the first region to the second region being less than or equal to the preset threshold.
Optionally, the first disease of the focal region of each lung lobe in the lung lobe segmentation image is chronic obstructive pulmonary disease, and the second disease of the focal region of each lung lobe in the lung lobe segmentation image is a small airway lesion; or
The first disease of the focal region of each lung lobe in the lung lobe segmentation image is lung cancer, and the second disease of the focal region of each lung lobe in the lung lobe segmentation image is a lung nodule.
Further, the first determining module is further configured to input each lung lobe segmentation image into a preset neural network, extract image features through a residual network of the preset neural network, and determine the known grade of the first disease through a classification network of the preset neural network.
Optionally, the acquiring module is specifically configured to acquire a lung image, where the lung image includes a full inhalation phase lung image and a full exhalation phase lung image; and respectively carrying out lung lobe segmentation on the full inhalation phase lung image and the full exhalation phase lung image to obtain a first lung lobe segmentation image and a second lung lobe segmentation image.
Optionally, the first determining module is specifically configured to determine the first disease according to the first lung lobe segmentation image or the second lung lobe segmentation image; and determining the second disease according to the first lung lobe segmentation image and the second lung lobe segmentation image.
In a third aspect, the present invention also proposes a disease tendency level determination device comprising: a memory, a processor and a disease trend level determination program stored on the memory and executable on the processor, the disease trend level determination program being executed by the processor to perform the steps of the disease trend level determination method.
In a fourth aspect, the present invention also provides a computer-readable storage medium having stored thereon a disease trend level determination program which, when executed by a processor, implements the steps of the disease trend level determination method.
The invention provides a disease trend grade determining method, device, equipment and storage medium. After a lung lobe segmentation image is obtained, a first disease and a second disease of the focus region of each lung lobe in the lung lobe segmentation image can be determined, where the second disease can convert unidirectionally into the first disease; the trend grade of the first disease for each lung lobe can then be determined based on the known grade of the first disease and on the second disease. That is, the development trend of the disease on the lung lobes can be predicted, which can assist doctors in determining a treatment plan.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flow chart of a disease trend rating determination method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a network structure for a lung lobe segmentation method according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of a disease trend grade determining apparatus according to a preferred embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present invention.
It should be noted that the disease trend grade determining method according to the embodiment of the present invention may be executed by any disease trend grade determining device, for example by a terminal device, a server or another processing device, where the terminal device may be a user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like; no specific limitation is made here.
As shown in fig. 1, the disease trend grade determining device may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the device structure shown in fig. 1 does not constitute a limitation of the disease trend grade determining device, which may include more or fewer components than shown, a combination of some components, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and a disease trend grade determination program. The operating system is a program for managing and controlling the hardware and software resources of the device, and supports the running of the disease trend grade determination program and other software or programs.
In the device shown in fig. 1, the user interface 1003 is mainly used for data communication with a client; the network interface 1004 is mainly used for establishing a communication connection with a server; and the processor 1001 may be configured to call the disease trend grade determination program stored in the memory 1005 and execute the disease trend grade determining method according to the embodiment of the present invention.
In order to achieve the purpose of predicting the disease development trend on the lung lobes and providing assistance for the doctor to determine the treatment plan, the embodiment of the invention provides a disease trend grade determination method, as shown in fig. 2, which includes the following steps.
S10, obtaining a lung lobe segmentation image;
S20, determining a first disease and a second disease of a focus area of each lung lobe in the lung lobe segmentation image, wherein the second disease can be unidirectionally converted into the first disease;
S30, respectively determining the trend grade of the first disease of each lung lobe based on the known grade of the first disease and the second disease.
In an embodiment of the present invention, with respect to step S10, the lung lobe segmentation image includes a left lung image and a right lung image. The left lung image includes the left upper lobe and the left lower lobe; the right lung image includes the right upper lobe, the right middle lobe and the right lower lobe.
Before step S10, that is, before the lung lobe segmentation image is acquired, the lung image needs to be segmented. In some possible embodiments, lung images at different viewing angles may be obtained by CT (Computed Tomography) scanning. Correspondingly, a plurality of tomographic images, namely lung images, can be obtained at each viewing angle, and the plurality of lung images at the same viewing angle can be used to construct a three-dimensional lung image. For example, the plurality of lung images at the same viewing angle may be stacked to obtain a three-dimensional lung image, or linear fitting or surface fitting may be performed to obtain a three-dimensional lung image.
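As a minimal illustration of the stacking approach mentioned above, the following numpy sketch builds a three-dimensional lung volume from ordered tomographic slices; the slice count and image size are illustrative assumptions.

```python
import numpy as np

# slices: ordered 2-D CT slices (H x W) acquired at one viewing angle;
# the 300 x 512 x 512 shape is illustrative only.
slices = [np.zeros((512, 512), dtype=np.float32) for _ in range(300)]

# Stacking the ordered slices along a new first axis yields a 3-D lung
# volume of shape (depth, height, width).
volume = np.stack(slices, axis=0)
print(volume.shape)  # (300, 512, 512)
```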
In an embodiment of the present invention, a method for segmenting a lung image includes:
step 1: the method comprises the steps of obtaining lung lobe fissure characteristics of a lung image in a sagittal plane, lung lobe fissure characteristics of the lung image in a coronal plane and lung lobe fissure characteristics of the lung image in a transverse plane.
In some possible embodiments, the lung lobe fissure features of the lung images at different viewing angles may be extracted by a feature extraction process. The lung lobe fissure feature is a feature used for segmenting each lung lobe region in the lung image; for example, the lung lobe fissure feature may be used to determine the position of the fissure surface between the lung lobes, enabling lung lobe segmentation.
The embodiment of the invention can perform feature extraction processing on the lung images under the sagittal plane, coronal plane and transverse plane viewing angles respectively, to obtain the fissure features of the lung images under the corresponding viewing angles; that is, the lung lobe fissure feature of the lung image in the sagittal plane, the lung lobe fissure feature in the coronal plane and the lung lobe fissure feature in the transverse plane can be obtained respectively. In the embodiment of the present invention, the lung lobe fissure features at each viewing angle may be represented in matrix or vector form, and the lung lobe fissure features may represent feature values of the lung image at each pixel point at the corresponding viewing angle.
In some possible implementations, the feature extraction process may be performed by a feature extraction neural network. For example, a neural network can be trained so that it accurately extracts the lung lobe fissure features of the lung image, and lung lobe segmentation is performed using the obtained features. When the accuracy of the lung lobe segmentation exceeds an accuracy threshold, the accuracy of the lung lobe fissure features obtained by the neural network meets the requirement; at this point, the network layer that performs segmentation can be removed, and the retained network part can be used as the feature extraction neural network of the embodiment of the present invention. The feature extraction neural network may be a convolutional neural network, such as a residual network, a feature pyramid network or a U-network; these are merely examples and are not specific limitations of the present invention.
Step 2: and correcting the third lung lobe fissure characteristic by using the lung lobe fissure characteristics of any two of the sagittal plane, the coronal plane and the transverse plane.
In some possible embodiments, in the case that the lobe fissure characteristics at three viewing angles are obtained, the lobe fissure characteristics at the third viewing angle may be corrected by using the lobe fissure characteristics at two viewing angles, so as to improve the accuracy of the lobe fissure characteristics at the third viewing angle.
In one example, embodiments of the present invention can use the lung lobe fissure features in the coronal and transverse views to correct the lung lobe fissure feature in the sagittal view. In other embodiments, any two of the lung lobe fissure features of the three viewing angles may be used to correct the remaining one. For convenience of description, the following embodiments describe correcting the third lung lobe fissure feature using the first lung lobe fissure feature and the second lung lobe fissure feature. The first, second and third lung lobe fissure features correspond respectively to the lung lobe fissure features at the three viewing angles in the embodiment of the present invention.
In some possible embodiments, the first and second lung lobe fissure features may be mapped to the viewing angle of the third lung lobe fissure feature, and feature fusion is performed using the two mapped lung lobe fissure features and the third lung lobe fissure feature, so as to obtain the corrected lung lobe fissure feature.
And step 3: and segmenting the lung image by using the corrected lung lobe fissure characteristics.
In some possible embodiments, lung lobe segmentation may be performed directly on the corrected lung lobe fissure features to obtain the lung lobe segmentation result. Alternatively, in another embodiment, feature fusion processing may be performed on the corrected lung lobe fissure features and the third lung lobe fissure feature, and lung lobe segmentation may be performed based on the fusion result to obtain the segmentation result. The segmentation result may include position information corresponding to each partition in the identified lung image. For example, the lung image may include five lung lobe regions, namely the right upper lobe, the right middle lobe, the right lower lobe, the left upper lobe and the left lower lobe, and the obtained segmentation result may include the position information of these five lung lobes in the lung image. The segmentation result may be represented in mask form; for example, the embodiment of the present invention may assign unique mask values, such as 1, 2, 3, 4 and 5, to the five lung lobe regions respectively, and the region formed by each mask value is the position region where the corresponding lung lobe is located. The mask values described above are merely exemplary, and other mask values may be configured in other embodiments.
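The following numpy sketch illustrates how such a mask-form segmentation result could be read back into per-lobe regions; the background value of 0 and the value-to-lobe mapping are assumptions made only for this example.

```python
import numpy as np

# seg_mask: segmentation result in mask form, one integer label per voxel;
# per the description, 1..5 index the five lung lobes (0 as background is
# an assumption of this sketch).
seg_mask = np.zeros((300, 512, 512), dtype=np.uint8)

lobe_names = {1: "right upper", 2: "right middle", 3: "right lower",
              4: "left upper", 5: "left lower"}  # assumed value-to-lobe mapping

for value, name in lobe_names.items():
    region = (seg_mask == value)      # boolean mask of this lobe's position region
    voxel_count = int(region.sum())   # size of the lobe region in voxels
    print(f"{name} lobe: {voxel_count} voxels")
```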
Based on the embodiment, the lung lobe fissure characteristics under three visual angles can be fully fused, the information content and accuracy of the corrected fissure characteristics are improved, and the accuracy of the lung lobe segmentation result is further improved.
In order to explain the embodiments of the present invention in detail, the respective processes of the embodiments of the present invention will be explained below.
In an embodiment of the present invention, the method for acquiring the lung lobe fissure feature of the lung image in the sagittal plane, the lung lobe fissure feature in the coronal plane and the lung lobe fissure feature in the transverse plane includes:
obtaining a plurality of series of lung images in sagittal, coronal and transverse planes; and respectively extracting lung lobe fissure characteristics of the multi-sequence lung images in the sagittal plane, the coronal plane and the transverse plane to obtain lung lobe fissure characteristics in the sagittal plane, lung lobe fissure characteristics in the coronal plane and lung lobe fissure characteristics in the transverse plane.
The embodiment of the invention can firstly acquire the multi-sequence lung images under three visual angles, and as described in the embodiment, the multi-layer lung images (multi-sequence images) of the lung images under different visual angles can be acquired in a CT imaging mode, and the three-dimensional lung images can be acquired through the multi-layer lung images under each visual angle.
In the case of obtaining multi-sequence lung images at the three viewing angles, feature extraction processing may be performed on each lung image, for example through the feature extraction neural network described above, to obtain the lung lobe fissure features of each image at the three viewing angles, namely the lung lobe fissure feature in the sagittal plane, the lung lobe fissure feature in the coronal plane and the lung lobe fissure feature in the transverse plane. Because each viewing angle can include a plurality of lung images, the embodiment of the invention can perform the feature extraction processing of the lung images in parallel through a plurality of feature extraction neural networks, thereby improving the feature extraction efficiency.
Fig. 3 is a schematic network structure diagram of a lung lobe segmentation method according to an embodiment of the present invention. As shown in fig. 3, the network for performing the feature extraction process according to the embodiment of the present invention may be a U network (U-net), or may be another convolutional neural network capable of performing feature extraction.
Under the condition of obtaining the lobe fissure characteristics of the lung image at each view angle, the third lobe fissure characteristic can be corrected by using the lobe fissure characteristics of any two of the sagittal plane, the coronal plane and the transverse plane, and the process can include:
mapping the arbitrary two lung lobe fissure features to the view angle of the third lung lobe fissure feature; and correcting the third lung lobe fissure characteristic by using the mapped lung lobe fissure characteristics of any two.
For convenience of description, the following description takes as an example the case in which the first and second lung lobe fissure features correct the third lung lobe fissure feature.
Because the lung lobe fissure features extracted at different viewing angles are different, the embodiment of the invention can map the lung lobe fissure features at the three viewing angles into one viewing angle. The method for mapping the arbitrary two lung lobe fissure features to the viewing angle of the third lung lobe fissure feature is as follows: mapping the lung lobe fissure features of the multi-sequence lung images in any two of the sagittal plane, the coronal plane and the transverse plane to the viewing angle of the third lung lobe fissure feature. That is, the first and second lung lobe fissure features may be mapped to the viewing angle at which the third lung lobe fissure feature is located. Through this mapping conversion of viewing angle, the feature information of the viewing angle before mapping can be fused into the lung lobe fissure features obtained after mapping.
As described in the foregoing embodiments, the embodiments of the present invention may obtain a plurality of lung images at each viewing angle, where the plurality of lung images correspondingly have a plurality of lung lobe fissure features. And each characteristic value in the lung lobe fissure characteristic corresponds to each pixel point of the corresponding lung image one by one.
According to the embodiment of the invention, the position mapping relationship of the pixel points in the lung image when one viewing angle is converted to another viewing angle can be determined according to the three-dimensional lung image formed by the lung images under one viewing angle. For example, when a certain pixel point is switched from a first position at the first viewing angle to a second position at the second viewing angle, the feature value corresponding to the first position at the first viewing angle is mapped to the second position. Through this embodiment, the mapping conversion between the lung lobe fissure features of the lung images at different viewing angles can be realized.
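As an illustration only: when the per-slice fissure features of one view are assembled into a volume, the sagittal, coronal and transverse views correspond to slicing that volume along different axes, so the position mapping described above can be expressed as a re-indexing of the volume. The axis convention and array shape in the sketch below are assumptions, not details given in the description.

```python
import numpy as np

# A volumetric fissure feature assembled by stacking the per-slice features
# of one viewing angle. The axis convention (axis 0 = sagittal slice index,
# axis 1 = transverse slice index, axis 2 = coronal slice index) is an
# assumption used only to illustrate the pixel-position mapping.
feat_sagittal_view = np.random.rand(256, 256, 256).astype(np.float32)

# Re-indexing the same volume with a different axis order gives the feature
# volume as seen from another viewing angle: a feature value at a first
# position in the source view lands at the corresponding second position in
# the target view, which is the mapping described above.
feat_transverse_view = np.transpose(feat_sagittal_view, (1, 0, 2))
```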
In some possible embodiments, in the case where the lung lobe fissure features of the three viewing angles are mapped to the same viewing angle, the two mapped lung lobe fissure features may be used to perform correction processing on the third lung lobe fissure feature, so as to improve the information content and accuracy of the third lung lobe fissure feature.
In an embodiment of the present invention, the method for correcting the third lung lobe fissure feature by using the two mapped lung lobe fissure features includes:
performing spatial attention feature fusion using the two mapped lung lobe fissure features and the third lung lobe fissure feature respectively, to obtain a first fusion feature and a second fusion feature; and obtaining the corrected third lung lobe fissure feature according to the first fusion feature and the second fusion feature.
In the embodiment of the present invention, the feature obtained after mapping the first lung lobe fissure feature may be referred to as a first mapping feature, and the feature obtained after mapping the second lung lobe fissure feature may be referred to as a second mapping feature. In the case of obtaining the first mapping feature and the second mapping feature, spatial attention feature fusion between the first mapping feature and the third lung lobe fissure feature may be performed to obtain the first fusion feature, and spatial attention feature fusion between the second mapping feature and the third lung lobe fissure feature may be performed to obtain the second fusion feature.
The method for performing spatial attention feature fusion using the two mapped lung lobe fissure features and the third lung lobe fissure feature respectively, to obtain the first fusion feature and the second fusion feature, comprises the following steps:
respectively connecting the arbitrary two lung lobe fissure characteristics with the third lung lobe fissure characteristic to obtain a first connecting characteristic and a second connecting characteristic; performing a first convolution operation on the first connection feature to obtain a first convolution feature, and performing the first convolution operation on the second connection feature to obtain a second convolution feature; performing a second convolution operation on the first convolution characteristic to obtain a first attention coefficient, and performing a second convolution operation on the second convolution characteristic to obtain a second attention coefficient; the first fusion feature is obtained using the first convolution feature and the first attention coefficient, and the second fusion feature is obtained using the second convolution feature and the second attention coefficient.
In some possible embodiments, as shown in fig. 3, the spatial attention feature fusion processing may be performed by a network module of a spatial attention mechanism, and the embodiment of the present invention adopts the spatial attention mechanism in consideration of the difference in importance of the characteristics of the lung lobe fissures at different positions. Wherein, the convolution processing based on the attention mechanism can be realized through a spatial attention neural network (attention), and important features are further highlighted in the obtained fusion features. The importance of each position of the spatial feature can be adaptively learned in the training process of the spatial attention neural network, and an attention coefficient of the feature object at each position is formed, for example, the coefficient can represent a coefficient value of a [0,1] interval, and the larger the coefficient, the more important the feature at the corresponding position is.
In the process of performing the spatial attention fusion processing, the first connection feature may be obtained by performing connection processing on the first mapping feature and the third lung lobe fissure feature, and the second connection feature may be obtained by performing connection processing on the second mapping feature and the third lung lobe fissure feature, where the connection processing may be concatenation in the channel direction. In an embodiment of the present invention, the dimensions of the first mapping feature, the second mapping feature and the third lung lobe fissure feature may all be represented as (C/2, H, W), where C represents the number of channels of each feature, H represents the height of the feature, and W represents the width of the feature. Correspondingly, the scale of the first connection feature and the second connection feature obtained by the connection processing may be represented as (C, H, W).
In the case of obtaining the first connection feature and the second connection feature, a first convolution operation may be performed on each of the first connection feature and the second connection feature, for example, the first convolution operation may be performed by a convolution kernel of 3 × 3 using convolution layer a, and then batch normalization (bn) and activation function (relu) processing may be performed to obtain a first convolution feature corresponding to the first connection feature and a second convolution feature corresponding to the second connection feature. The scales of the first convolution feature and the second convolution feature can be expressed as (C/2, H, W), parameters in the feature map can be reduced through the first convolution operation, and subsequent calculation cost is reduced.
In some possible embodiments, in the case of obtaining the first convolution feature and the second convolution feature, a second convolution operation and a sigmoid function process may be performed on the first convolution feature and the second convolution feature, respectively, to obtain a corresponding first attention coefficient and a corresponding second attention coefficient, respectively. Wherein the first attention coefficient may represent the degree of importance of the characteristic of each element of the first convolution characteristic and the second attention coefficient may represent the degree of importance of the characteristic of the element in the second convolution characteristic.
As shown in fig. 3, for the first convolution feature or the second convolution feature, the second convolution operation may be performed by using two convolution layers B and C: convolution layer B applies a 1 × 1 convolution kernel followed by batch normalization (bn) and activation function (relu) processing to obtain a first intermediate feature, whose scale may be represented as (C/8, H, W); the second convolution layer C then applies a 1 × 1 convolution to the first intermediate feature to obtain a second intermediate feature of scale (1, H, W). Further, the second intermediate feature map may be processed with a sigmoid activation function to obtain the attention coefficient corresponding to the first convolution feature or the second convolution feature, where the coefficient value of the attention coefficient may be a value in the range [0,1].
The above second convolution operation can perform dimensionality reduction processing on the first connection feature and the second connection feature to obtain a single-channel attention coefficient.
In some possible embodiments, in the case of obtaining the first attention coefficient corresponding to the first convolution feature and the second attention coefficient corresponding to the second convolution feature, a product operation may be performed on the first convolution feature and the first attention coefficient, and the product result may be added to the first convolution feature to obtain the first fusion feature. A product operation is performed on the second convolution feature and the second attention coefficient, and the product result is added to the second convolution feature to obtain the second fusion feature. The product operation (mul) may be element-wise multiplication, and the feature addition (add) may be element-wise addition. By this method, effective fusion of the features under the three viewing angles can be realized.
Alternatively, in other embodiments, a feature obtained by multiplying the first convolution feature by the first attention coefficient may be added to the first convolution feature, and several convolution operations may be performed on the added feature to obtain the first fused feature; and adding the feature multiplied by the second convolution feature and the second attention coefficient to the second convolution feature, and performing a plurality of convolution operations on the added feature to obtain the second fusion feature. By the method, the accuracy of the fused features can be further improved, and the content of fused information can be improved.
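To make the spatial attention fusion described above concrete, the following PyTorch sketch follows the channel sizes and operations given in the text (concatenation to C channels, a 3 × 3 convolution back to C/2, a 1 × 1 convolution to C/8, a 1 × 1 convolution to a single-channel sigmoid attention map, then multiply and add). All class, layer and variable names, the choice of C, and the reuse of one module for both fusions are assumptions for illustration, not the patent's implementation.

```python
import torch
import torch.nn as nn

class SpatialAttentionFusion(nn.Module):
    """Sketch of the spatial attention fusion described above (layers A, B, C)."""

    def __init__(self, channels_c):
        super().__init__()
        half, eighth = channels_c // 2, channels_c // 8
        # first convolution operation: 3x3 conv + BN + ReLU, (C,H,W) -> (C/2,H,W)
        self.conv_a = nn.Sequential(
            nn.Conv2d(channels_c, half, kernel_size=3, padding=1),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True))
        # second convolution operation: 1x1 conv + BN + ReLU to C/8, then 1x1 conv to 1 channel
        self.conv_b = nn.Sequential(
            nn.Conv2d(half, eighth, kernel_size=1),
            nn.BatchNorm2d(eighth), nn.ReLU(inplace=True))
        self.conv_c = nn.Conv2d(eighth, 1, kernel_size=1)

    def forward(self, mapped_feat, third_feat):
        # connection (concatenation) in the channel direction: (C/2)+(C/2) -> C
        x = torch.cat([mapped_feat, third_feat], dim=1)
        conv_feat = self.conv_a(x)                                 # (N, C/2, H, W)
        att = torch.sigmoid(self.conv_c(self.conv_b(conv_feat)))   # (N, 1, H, W), values in [0, 1]
        # element-wise product with the attention coefficient, then add back
        return conv_feat * att + conv_feat


# usage sketch: fuse each mapped feature with the third fissure feature (C assumed to be 64)
fuse = SpatialAttentionFusion(channels_c=64)
first_fused = fuse(torch.randn(1, 32, 128, 128), torch.randn(1, 32, 128, 128))
second_fused = fuse(torch.randn(1, 32, 128, 128), torch.randn(1, 32, 128, 128))
```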
In the case of obtaining the first fused feature and the second fused feature, the corrected third lobe fissure feature may be obtained by using the first fused feature and the second fused feature.
In some possible embodiments, since the first fused feature and the second fused feature each contain feature information from the three viewing angles, the corrected third lung lobe fissure feature may be obtained directly by connecting the first fused feature and the second fused feature and performing a third convolution operation on the connected features. Alternatively, the first fused feature, the second fused feature and the third lung lobe fissure feature may be connected, and the third convolution operation performed on the connected features, to obtain the corrected third lung lobe fissure feature.
The third convolution operation may include grouped convolution processing. Further fusion of the feature information in each feature may be achieved by the third convolution operation. As shown in fig. 3, the third convolution operation of the embodiment of the present invention may include a grouped convolution layer D (depthwise conv), where the grouped convolution can speed up the convolution and improve the accuracy of the convolution features.
In the case where the corrected third lung lobe fissure feature is obtained by the third convolution operation, the lung image may be segmented using the corrected lung lobe fissure feature. According to the embodiment of the invention, the segmentation result corresponding to the corrected lung lobe fissure feature can be obtained by convolution. As shown in fig. 3, the embodiment of the present invention may input the corrected lung lobe fissure feature into convolution layer E and perform a standard convolution with a 1 × 1 convolution kernel to obtain the segmentation result of the lung image. As described in the above embodiment, the segmentation result may indicate the position regions where the five lung lobes in the lung image are located. As shown in fig. 3, each lung lobe region in the lung image is distinguished by fill color.
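Continuing the earlier sketch, the following PyTorch snippet illustrates the third convolution operation (depthwise/grouped layer D) and the final 1 × 1 segmentation convolution (layer E). Concatenating both fused features with the third fissure feature, and the six output classes (five lobes plus background), are assumptions made for this illustration.

```python
import torch
import torch.nn as nn

class CorrectionAndSegHead(nn.Module):
    """Sketch of the third convolution (layer D) and segmentation layer E described above."""

    def __init__(self, half_channels, num_classes=6):
        super().__init__()
        in_ch = half_channels * 3   # first fused + second fused + third fissure feature
        # layer D: depthwise (grouped) convolution over the connected features
        self.conv_d = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        # layer E: standard 1x1 convolution producing the segmentation result
        self.conv_e = nn.Conv2d(in_ch, num_classes, kernel_size=1)

    def forward(self, first_fused, second_fused, third_feat):
        x = torch.cat([first_fused, second_fused, third_feat], dim=1)
        corrected = self.conv_d(x)      # corrected third lung lobe fissure feature
        return self.conv_e(corrected)   # per-pixel lobe logits
```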
Based on the above configuration, the multi-view lung lobe segmentation method provided by the embodiment of the invention can solve the technical problem that, because information from other viewing angles is not fully used during lung lobe segmentation, information is lost and the lung lobes cannot be accurately segmented.
As described in the above embodiments, the embodiments of the present invention may be implemented by a neural network. As shown in fig. 3, the neural network for performing the multi-view lung lobe segmentation method according to the embodiments of the present invention may include a feature extraction neural network, a spatial attention neural network, and a segmentation network (including convolutional layers D and E).
The embodiment of the invention can comprise three feature extraction neural networks which are respectively used for extracting the characteristics of the lung lobe fissures under different visual angles. Among them, the three feature extraction networks may be referred to as a first branch network, a second branch network, and a third branch network. The three branch networks in the embodiment of the invention have the same structure, and the input images of the branch networks are different from one another. For example, a lung image sample of a sagittal plane is input to the first branch network, a lung image sample of a coronal plane is input to the second branch network, and a lung image sample of a transverse plane is input to the third branch network, so that feature extraction processing of the lung image sample at each view angle is performed respectively.
Specifically, in the embodiment of the present invention, the process of training the feature extraction neural network includes:
acquiring training samples in the sagittal plane, the coronal plane and the transverse plane, wherein the training samples are lung image samples with marked lung lobe fissure features; performing feature extraction on the lung image samples in the sagittal plane by using the first branch network to obtain a first predicted lung lobe fissure feature; performing feature extraction on the lung image samples in the coronal plane by using the second branch network to obtain a second predicted lung lobe fissure feature; performing feature extraction on the lung image samples in the transverse plane by using the third branch network to obtain a third predicted lung lobe fissure feature; obtaining network losses of the first branch network, the second branch network and the third branch network by using the first predicted lung lobe fissure feature, the second predicted lung lobe fissure feature and the third predicted lung lobe fissure feature together with the corresponding marked lung lobe fissure features, and adjusting parameters of the first branch network, the second branch network and the third branch network by using the network losses.
As described in the foregoing embodiment, the first branch network, the second branch network, and the third branch network are respectively used to perform feature extraction processing on lung image samples in a sagittal plane, a coronal plane, and a transverse plane, so that predicted features, that is, a first predicted lobe fissure feature, a second predicted lobe fissure feature, and a third predicted lobe fissure feature, can be obtained correspondingly.
Under the condition that the predicted lung lobe fissure features are obtained, the network losses of the first branch network, the second branch network and the third branch network can be obtained by respectively using the first predicted lung lobe fissure feature, the second predicted lung lobe fissure feature and the third predicted lung lobe fissure feature and the corresponding marked lung lobe fissure features. For example, the loss function of the embodiment of the present invention may be a logarithmic loss function, the network loss of the first branch network may be obtained by the first predicted lung lobe fissure characteristic and the marked real lung lobe fissure characteristic, the network loss of the second branch network may be obtained by the second predicted lung lobe fissure characteristic and the marked real lung lobe fissure characteristic, and the network loss of the third branch network may be obtained by the third predicted lung lobe fissure characteristic and the marked real lung lobe fissure characteristic.
In the case of obtaining the network loss of each of the branch networks, parameters of the first branch network, the second branch network, and the third branch network may be adjusted according to the network loss of each of the networks until a termination condition is satisfied. In this embodiment of the present invention, network parameters, such as convolution parameters, of the first branch network, the second branch network, and the third branch network may be adjusted simultaneously by using a network loss of any branch of the first branch network, the second branch network, and the third branch network. Therefore, the network parameters at any visual angle are related to the characteristics at the other two visual angles, the correlation between the extracted lung lobe fissure characteristics and the lung lobe fissure characteristics at the other two visual angles can be improved, and the primary fusion of the lung lobe fissure characteristics at each visual angle can be realized.
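The following PyTorch sketch illustrates this first-stage branch training only: one loss per branch and a joint parameter update so that each branch's parameters are influenced by the other views. Summing the three losses is one way to realize the joint adjustment described above; the branch network definitions, the optimizer, the data and the use of BCE as the logarithmic loss are assumptions for illustration.

```python
import torch
import torch.nn as nn

def train_step(branch_sag, branch_cor, branch_tra, optimizer,
               img_sag, img_cor, img_tra, gt_sag, gt_cor, gt_tra):
    """One joint training step for the three feature-extraction branch networks (sketch)."""
    criterion = nn.BCEWithLogitsLoss()  # a logarithmic loss, per the description

    pred_sag = branch_sag(img_sag)      # first predicted lobe fissure feature
    pred_cor = branch_cor(img_cor)      # second predicted lobe fissure feature
    pred_tra = branch_tra(img_tra)      # third predicted lobe fissure feature

    loss_sag = criterion(pred_sag, gt_sag)
    loss_cor = criterion(pred_cor, gt_cor)
    loss_tra = criterion(pred_tra, gt_tra)

    # Back-propagating the combined loss updates the parameters of all three
    # branches at once, so each branch is influenced by the other two views.
    optimizer.zero_grad()
    (loss_sag + loss_cor + loss_tra).backward()
    optimizer.step()
    return loss_sag.item(), loss_cor.item(), loss_tra.item()
```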
In addition, the training termination condition of the feature extraction neural network is that the network loss of each branch network is smaller than the first loss threshold, which indicates that each branch network of the feature extraction neural network can accurately extract the lung lobe fissure features of the lung image at the corresponding view angle.
After this training is finished, the feature extraction neural network, the spatial attention neural network and the segmentation network can be trained jointly, and the network loss of the whole neural network is determined by using the segmentation result output by the segmentation network and the corresponding marking result in the marked lung lobe fissure features. The network loss of the whole neural network is then fed back to adjust the network parameters of the feature extraction neural network, the spatial attention neural network and the segmentation network, until the network loss of the whole neural network is less than a second loss threshold. The first loss threshold is greater than or equal to the second loss threshold, so that the accuracy of the network can be improved.
When the neural network of the embodiment of the invention is applied to lung lobe segmentation based on multiple visual angles, lung images under different visual angles of the same lung can be respectively and correspondingly input into the three branch networks, and finally, a final segmentation result of the lung image is obtained through the neural network.
In summary, the lung lobe segmentation method and device provided by the embodiments of the present invention can fuse multi-view feature information to perform lung lobe segmentation of a lung image, and solve the problem that information is lost and lung lobes cannot be accurately segmented due to insufficient use of information from other views during segmentation.
In addition, the embodiment of the present invention may also perform lung lobe segmentation by a lung lobe segmentation apparatus, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored by the memory to perform the lung lobe segmentation method in the above embodiments.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present invention may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
In an embodiment of the present invention, in step S20, a first disease and a second disease of the focal region of each lung lobe in the lung lobe segmentation image are determined. The first disease of the focal region of each lung lobe in the lung lobe segmentation image is chronic obstructive pulmonary disease and the second disease is a small airway lesion; or the first disease of the focal region of each lung lobe in the lung lobe segmentation image is lung cancer and the second disease is a lung nodule. The detection of chronic obstructive pulmonary disease and of pulmonary nodules may be performed by conventional detection methods and is not described in detail in the embodiments of the present invention.
Specifically, in step S20, a first disease and a second disease associated with the first disease in the lesion regions of the upper left lobe and the lower left lobe of the left lung image need to be determined, respectively; and determining a first disease and a second disease associated with the first disease for lesion regions of an upper right lobe, a middle right lobe, and a lower right lobe, respectively, of the right lung image.
In an embodiment of the invention, if the first disease is chronic obstructive pulmonary disease, the second disease is a small airway lesion. Before the lung lobe segmentation image is acquired, a lung CT image is acquired, the CT image including a full inhalation phase lung image and a full exhalation phase lung image. Correspondingly, obtaining the lung lobe segmentation image may include segmenting the full inhalation phase lung image and the full exhalation phase lung image respectively to obtain a first lung lobe segmentation image and a second lung lobe segmentation image; the first lung lobe segmentation image is the lung lobe segmentation result of the full inhalation phase lung image, and the second lung lobe segmentation image is the lung lobe segmentation result of the full exhalation phase lung image. The method for determining the first disease and the second disease associated with the first disease of the lesion region of each lung lobe in the lung lobe segmentation image is as follows: determining the first disease from the first or second lung lobe segmentation image, and determining the second disease from the first and second lung lobe segmentation images; wherein the first disease is chronic obstructive pulmonary disease and the second disease is a small airway lesion.
In addition, if the first disease is lung cancer and the second disease is a lung nodule, before the lung lobe segmentation image is obtained, a lung CT image may be obtained, the CT image including a lung image obtained by a whole-lung scan performed while the breath is held after one inhalation. Correspondingly, a lung segmentation image of the lung image, that is, each lung lobe image, can be obtained by the lung image segmentation process. The method for determining the first disease and the second disease associated with the first disease of the lesion region of each lung lobe in the lung lobe segmentation image is as follows: a corresponding first disease and an associated second disease are determined from each lung lobe image. For example, each lung lobe image corresponding to the lung segmentation image may be input into a neural network trained to recognize lung nodules and lung cancer, the first disease (lung cancer) and the related second disease (lung nodule) of each lung lobe may be detected by the neural network, and preferably the lesion region corresponding to each of the first disease and the second disease may also be obtained. The neural network may include lung lobe segmentation image feature extraction, a region candidate network and a classification network: the lung lobe features are obtained by performing feature extraction processing on the lung lobe image through the feature extraction network, the lesion regions of the first disease and the second disease in the lung lobe image are obtained by performing convolution processing on the lung lobe features through the region candidate network, and the disease types of the lesion regions are classified and identified through the classification network, so that the disease types (the first disease or the second disease) corresponding to the lung lobe segmentation image are obtained.
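The patent does not name a specific detector, but the structure it describes (a feature extraction backbone, a region candidate network and a classification network) matches a standard two-stage detector. Purely as an analogy, the sketch below uses an off-the-shelf Faster R-CNN; the three-class mapping (0 background, 1 lung nodule, 2 lung cancer), the input size and all variable names are assumptions.

```python
import torch
import torchvision

# Standard two-stage detector used only as an analogy for the described
# backbone + region candidate network + classification network structure.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=3)
detector.eval()

lobe_image = [torch.rand(3, 512, 512)]          # one lung-lobe image (illustrative)
with torch.no_grad():
    predictions = detector(lobe_image)

boxes = predictions[0]["boxes"]    # candidate lesion regions
labels = predictions[0]["labels"]  # 1 = lung nodule, 2 = lung cancer (assumed mapping)
scores = predictions[0]["scores"]
```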
It should be noted that, in the process of detecting the first disease and the associated second disease on each lung lobe, a given lung lobe does not necessarily exhibit both diseases: there may be cases where only the second disease exists, or where neither the first disease nor the second disease exists. Embodiments of the invention may therefore use different disease marks to indicate which diseases, if any, are present.
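A toy illustration (not taken from the patent) of how such per-lobe disease marks could be recorded; the field names and lobe names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class LobeFindings:
    lobe: str                 # e.g. "upper left lobe"
    has_first_disease: bool   # e.g. lung cancer / chronic obstructive pulmonary disease
    has_second_disease: bool  # e.g. lung nodule / small airway lesion

findings = [
    LobeFindings("upper left lobe", True, True),
    LobeFindings("lower left lobe", False, True),    # only the second disease present
    LobeFindings("middle right lobe", False, False), # neither disease present
]
```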
In an embodiment of the present invention, before step S30 the method further includes determining a known grade of the first disease. The step of determining the known grade of the first disease comprises: inputting each lung lobe segmentation image into a preset neural network, extracting image features through a residual network of the preset neural network, and determining the known grade of the first disease according to a classification network of the preset neural network.
In the step of determining the known grade of the first disease, the known grade of the first disease needs to be determined for the lesion regions of the upper left lobe and the lower left lobe of the left lung image, respectively, and for the lesion regions of the upper right lobe, the middle right lobe and the lower right lobe of the right lung image, respectively. That is, the known grade of the first disease determined in this step is a per-lobe grade: each lung lobe has its own known grade of the first disease.
In the embodiment of the present invention, a neural network may be used to detect the known grade of the first disease for each lung lobe. For example, each segmented lung lobe image may be input into the neural network, which classifies it to obtain the known grade of the first disease for that lung lobe. The neural network may be a convolutional neural network: feature extraction of the lung lobe segmentation image may be implemented, for example, by a residual network, and after the lung lobe features are obtained, the first-disease grade corresponding to those features is detected by a classification network (a multi-class classifier). Different grading scales may be set for different first diseases.
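A minimal sketch of such a grading network, assuming a ResNet-18 backbone and five output grades; the patent does not fix the network depth, the number of grades or the preprocessing.

```python
import torch
import torchvision

NUM_GRADES = 5  # e.g. GOLD 0-4 for chronic obstructive pulmonary disease

grading_net = torchvision.models.resnet18(weights=None, num_classes=NUM_GRADES)
grading_net.eval()

def known_grade(lobe_image: torch.Tensor) -> int:
    """lobe_image: (1, 3, H, W) float tensor of one segmented lung lobe."""
    with torch.no_grad():
        logits = grading_net(lobe_image)     # residual network + classification head
    return int(logits.argmax(dim=1).item())  # index of the predicted known grade
```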
If the first disease is chronic obstructive pulmonary disease, then according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD), the disease is graded as GOLD 0 (absent), GOLD 1 (mild), GOLD 2 (moderate), GOLD 3 (severe) and GOLD 4 (very severe).
In an embodiment of the present invention, the method for determining the grade of the chronic obstructive pulmonary disease on each lung lobe includes: determining, for each lung lobe, a first volume of the chronic obstructive pulmonary region on that lobe and the total volume of that lobe, and determining the grade of the chronic obstructive pulmonary disease on that lobe from the ratio of the first volume to the total volume.
For example, four thresholds may be set: a first threshold, a second threshold, a third threshold and a fourth threshold. If the ratio of the first volume to the total volume is smaller than the first threshold, the chronic obstructive pulmonary disease is graded GOLD 0 (absent); if the ratio lies between the first and second thresholds, the grade is GOLD 1 (mild); between the second and third thresholds, GOLD 2 (moderate); between the third and fourth thresholds, GOLD 3 (severe); and if the ratio is greater than the fourth threshold, the grade is GOLD 4 (very severe). The first, second, third and fourth thresholds may be, for example, 20%, 30%, 50% and 80%, respectively.
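A minimal sketch of this threshold mapping; the threshold values follow the example figures above (20%, 30%, 50%, 80%) and are not mandatory.

```python
GOLD_THRESHOLDS = [0.20, 0.30, 0.50, 0.80]
GOLD_LABELS = ["GOLD 0 (absent)", "GOLD 1 (mild)", "GOLD 2 (moderate)",
               "GOLD 3 (severe)", "GOLD 4 (very severe)"]

def gold_grade(copd_volume: float, lobe_volume: float) -> str:
    """Grade one lobe from the ratio of its COPD-region volume to its total volume."""
    ratio = copd_volume / lobe_volume
    for grade, threshold in enumerate(GOLD_THRESHOLDS):
        if ratio < threshold:
            return GOLD_LABELS[grade]
    return GOLD_LABELS[-1]

# e.g. a lobe whose COPD region occupies 40% of its volume falls between the second
# and third thresholds and is graded GOLD 2 (moderate).
print(gold_grade(40.0, 100.0))
```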
For example, if the ratio of the first volume of the chronic obstructive pulmonary region of the upper left lobe to the total volume of the upper left lobe lies between the second threshold and the third threshold, the chronic obstructive pulmonary disease on the upper left lobe is graded GOLD 2 (moderate). If the first disease is lung cancer, the lung cancer is graded by stage: stage I, stage II, stage III and stage IV, with stages III and IV also described together below.
Stage I: the tumor is small, there is no lymph node metastasis, and the tumor can usually be removed completely by surgery. According to tumor size, stage I is divided into stage IA and stage IB, the smaller tumors being stage IA and the larger ones stage IB.
Stage II: it is divided into two subtypes, stage IIA and stage IIB. Stage IIA covers two cases: a slightly larger tumor without metastasis to adjacent lymph nodes, or a smaller tumor with metastasis to peripheral lymph nodes. Stage IIB refers to a larger tumor with lymph node metastasis, or a large tumor that may or may not involve the structures surrounding the lung but has no lymph node metastasis.
Stage III: it is divided into stages IIIA and IIIB. Many stage IIIA tumors and almost all stage IIIB tumors are difficult or impossible to remove by surgery, for example when the tumor involves the mediastinal lymph nodes or invades adjacent structures within the lung. There are also cases where, owing to various factors, the tumor cannot be excised completely in a single operation and must be removed over several operations, which makes complete removal difficult.
Stage IV: cancer cells have metastasized to multiple sites in the contralateral lung, to fluid accumulations around the lung or the heart, or via the bloodstream to other sites in the body. Once in the bloodstream, cancer cells can metastasize to any part of the body, but they most commonly spread to the brain, bones, liver and adrenal glands.
Stages III and IV: these correspond to intermediate and advanced lung cancer, which cannot be cured radically by surgery; radiotherapy and chemotherapy are generally the main treatment modes, and in the advanced stage treatment is mainly palliative and aimed at relieving the patient's suffering. Surgical resection is not possible when, for example, the tumor has metastasized to the supraclavicular lymph nodes or involves vital organs in the thoracic cavity such as the heart, the large vessels or the major airways.
With the above configuration, the grade of the first disease corresponding to each lung lobe in the lung lobe segmentation image can be obtained.
For the embodiment of the present invention, step S30, determining the trend grade of the first disease for each lung lobe based on the known grade of the first disease and the second disease, includes: locating a first region of the first disease and a second region of the second disease of each lung lobe, respectively; and determining the trend grade of the first disease of each lung lobe, based on the known grade of the first disease, from the ratio of the first region to the second region of that lung lobe. The known grade of the first disease is the disease grade represented by the current lung image, while the trend grade of the first disease is used to represent the future development of the disease; that is, the trend grade indicates whether the first disease is expected to improve or worsen, and the degree of improvement or worsening is represented by the level of the trend grade.
In an embodiment of the present invention, the determining a trend rating of the first disease for each lung lobe according to a ratio of the first region to the second region of each lung lobe based on the known rating of the first disease comprises: obtaining a first volume of the first disease on each lobe of the lung according to the first region, respectively, and obtaining a second volume of the second disease on each lobe of the lung according to the second region, respectively; calculating a ratio of the first volume to the second volume for each lung lobe separately; determining a trend grade of the first disease of the corresponding lung lobe according to a ratio of the first volume and the second volume of each lung lobe respectively based on the grade of the first disease.
Wherein the first volume is a total volume of the first disease over the first region of each lobe; the second volume is the total volume of the second disease over the second region of each lung lobe.
The first volume and the second volume are a first volume of a focal region of the first disease and a second volume of a focal region of the second disease, respectively, for each lung lobe. That is, the first volume, comprises: the first volume of the focal region of the first disease of the upper left lobe, the first volume of the focal region of the first disease of the lower left lobe, the first volume of the focal region of the first disease of the upper right lobe, the first volume of the focal region of the first disease of the middle right lobe, and the first volume of the focal region of the first disease of the lower right lobe. A second volume comprising: a second volume of a focal region of the second disease of the upper left lobe, a second volume of a focal region of the second disease of the lower left lobe, a second volume of a focal region of the second disease of the upper right lobe, a second volume of a focal region of the second disease of the middle right lobe, a second volume of a focal region of the second disease of the lower right lobe.
Alternatively, in an embodiment of the present invention, the determining of the trend grade of the first disease for each lung lobe according to the ratio of the first region to the second region of each lung lobe, based on the known grade of the first disease, includes: determining the next grade, lighter than the known grade of the first disease, as the trend grade of the first disease in response to the ratio of the first volume to the second volume being greater than a preset threshold; and determining the previous grade, more severe than the known grade of the first disease, as the trend grade of the first disease in response to the ratio of the first volume to the second volume being less than or equal to the preset threshold.
Further, in an embodiment of the present invention, the determining of the trend grade of the first disease for each lung lobe according to the ratio of the first region to the second region of each lung lobe, based on the known grade of the first disease, includes: determining the next grade, lighter than the known grade of the first disease, as the trend grade of the first disease in response to the ratio of the first region to the second region being greater than a preset threshold; and determining the previous grade, more severe than the known grade of the first disease, as the trend grade of the first disease in response to the ratio of the first region to the second region being less than or equal to the preset threshold.
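A minimal sketch of the trend-grade rule just described, with grades represented as indices in an ordered list from mild to severe; the clamping at the ends of the scale is an assumption, and the preset threshold depends on the disease (the examples later in the text use 0.2 for lung cancer and 0.4 for chronic obstructive pulmonary disease).

```python
def trend_grade(known_grade: int, first_volume: float, second_volume: float,
                threshold: float, num_grades: int) -> int:
    """known_grade: index into an ordered grade list (0 = mildest grade)."""
    ratio = first_volume / second_volume
    if ratio > threshold:
        # relatively small first-disease burden: predict the next, lighter grade
        return max(known_grade - 1, 0)
    # large second-disease burden likely to convert: predict the previous, more severe grade
    return min(known_grade + 1, num_grades - 1)
```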
In an embodiment of the invention, a first region of the first disease and a second region of the second disease may be located separately for each lung lobe. When the first disease of the lesion region of each lung lobe in the lung lobe segmentation image is lung cancer, the second disease of the lesion region of each lung lobe is a lung nodule; when the first disease of the lesion region of each lung lobe is chronic obstructive pulmonary disease, the second disease of the lesion region of each lung lobe is a small airway lesion.
Both lung cancer and lung nodules have very distinctive image characteristics. If the first disease is lung cancer, the lung cancer region (first region) can be located according to the morphological characteristics of lung cancer; if the second disease is a lung nodule, the lung nodule region (second region) can be located according to the morphological characteristics of lung nodules. As described in the above embodiments, the prediction of the first disease and the second disease and the determination of their focal regions may be achieved through a neural network capable of identifying the first disease and the second disease corresponding to the lung lobe segmentation image.
If the first disease is chronic obstructive pulmonary disease and the second disease is a small airway lesion, neither the chronic obstructive pulmonary region (first region) nor the small airway lesion region (second region) can be identified visually; the small airway lesion region in particular has no obvious visual appearance.
The method of locating the first region of the chronic obstructive pulmonary disease on each lung lobe is: judging, from the CT values on the lung lobe and a set threshold, whether each position on the lobe belongs to a chronic obstructive pulmonary region. The set threshold is a preset CT value: during deep inhalation, the CT value of the chronic obstructive pulmonary region remains basically unchanged while normal (non-diseased) regions fill with air; the CT value of air is about -1024 HU, and because the chronic obstructive pulmonary region is itself filled with trapped air, its CT value is also close to -1024 HU. In the medical field the set threshold is generally chosen as -950 HU: if the CT value at a point in the lung lobe is less than -950 HU, that point is judged to belong to a chronic obstructive pulmonary region; if it is greater than or equal to -950 HU, it is judged not to. In other published papers or in some possible embodiments the set threshold may vary; the invention does not specifically limit the set threshold, and a person skilled in the art can adjust it as appropriate.
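A minimal sketch of this thresholding, assuming the lobe is available as a NumPy array of CT values in Hounsfield units together with a boolean lobe mask; the -950 HU cutoff follows the text.

```python
import numpy as np

COPD_THRESHOLD_HU = -950.0

def copd_region_mask(ct_hu: np.ndarray, lobe_mask: np.ndarray) -> np.ndarray:
    """Voxels inside the lobe whose CT value is below the set threshold (first region)."""
    return lobe_mask & (ct_hu < COPD_THRESHOLD_HU)

def copd_volume_fraction(ct_hu: np.ndarray, lobe_mask: np.ndarray) -> float:
    """Ratio of the COPD-region volume to the total lobe volume, by voxel count."""
    region = copd_region_mask(ct_hu, lobe_mask)
    return region.sum() / lobe_mask.sum()
```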
As described above, locating the second region of the small airway lesion of each lung lobe requires both a full inhalation phase lung image and a full exhalation phase lung image. The method of locating the second region of the small airway lesion of each lung lobe is: acquire the first lung lobe segmentation image of the full inhalation phase lung image; acquire the second lung lobe segmentation image of the full exhalation phase lung image; extract the individual full inhalation phase single lung lobes with their CT values from the first lung lobe segmentation image; extract the individual full exhalation phase single lung lobes with their CT values from the second lung lobe segmentation image; register each full inhalation phase single lung lobe with the full exhalation phase single lung lobe at the corresponding position to obtain a registered full inhalation phase single lung lobe and a registered full exhalation phase single lung lobe; compare the CT values of the registered full inhalation phase single lung lobe and the registered full exhalation phase single lung lobe with an inhalation phase set threshold and an exhalation phase set threshold, respectively; if the CT value of the registered full inhalation phase single lung lobe is smaller than the inhalation phase set threshold and the CT value of the registered full exhalation phase single lung lobe is smaller than the exhalation phase set threshold, the region is considered to have a small airway lesion; otherwise, the region is considered to have no small airway lesion.
The method for segmenting the lung image comprises the steps of segmenting the full inhalation phase lung image to obtain a first lung lobe segmentation image, and segmenting the full exhalation phase lung image to obtain a second lung lobe segmentation image.
In the embodiment of the invention, the full inhalation phase lung image and the full exhalation phase lung image are both lung images of the same patient: the full inhalation phase lung image is taken with imaging equipment while the patient holds a deep breath so that the air volume of the lungs is at its maximum, and the full exhalation phase lung image is taken while the patient holds a deep exhalation so that the air volume of the lungs is at its minimum. Both images can be acquired by an imaging physician at a hospital using imaging equipment (for example, a CT scanner).
The full inspiratory phase single lung lobe and the full expiratory phase single lung lobe may be registered using an elastic registration algorithm, or using a VGG network (VGG-Net) in deep learning; the embodiment of the present invention does not limit the specific registration algorithm.
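As one concrete possibility (SimpleITK and all of its settings here are assumptions, not named in the patent), an elastic B-spline registration of a single lobe pair might be sketched as follows.

```python
import SimpleITK as sitk

def register_lobe(fixed_insp: sitk.Image, moving_exp: sitk.Image) -> sitk.Image:
    """Elastically register the expiratory lobe onto the inspiratory lobe and resample it."""
    fixed = sitk.Cast(fixed_insp, sitk.sitkFloat32)
    moving = sitk.Cast(moving_exp, sitk.sitkFloat32)

    # B-spline (elastic) transform over a coarse control-point grid
    tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
    reg.SetInitialTransform(tx, inPlace=True)
    reg.SetInterpolator(sitk.sitkLinear)
    final_tx = reg.Execute(fixed, moving)

    # resample the expiratory lobe into the inspiratory geometry; -1024 HU fills outside voxels
    return sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, -1024.0)
```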
In the embodiment of the present invention, the inhalation phase set threshold and the exhalation phase set threshold can be chosen by a person skilled in the art as needed. For example, the inhalation phase set threshold may be set to -950 HU and the exhalation phase set threshold to -856 HU; the CT values of the registered full inhalation phase single lung lobe are then compared with the inhalation phase set threshold of -950 HU, and the CT values of the registered full exhalation phase single lung lobe are compared with the exhalation phase set threshold of -856 HU.
In the embodiment of the invention, for example, if the inhalation phase set threshold is set to -950 HU and the exhalation phase set threshold to -856 HU, then a region whose CT value in the registered full inhalation phase single lung lobe is smaller than -950 HU and whose CT value in the registered full exhalation phase single lung lobe is smaller than -856 HU is considered to have a small airway lesion; otherwise, the region is considered to have no small airway lesion.
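A minimal sketch of this voxel-wise comparison, assuming the registered inspiratory and expiratory lobes are available as NumPy arrays of CT values (in HU) on the same grid; array names and the mask convention are illustrative.

```python
import numpy as np

INSP_THRESHOLD_HU = -950.0
EXP_THRESHOLD_HU = -856.0

def small_airway_lesion_mask(insp_hu: np.ndarray, exp_hu: np.ndarray,
                             lobe_mask: np.ndarray) -> np.ndarray:
    """Voxels below both thresholds are marked as small airway lesion (second region)."""
    return lobe_mask & (insp_hu < INSP_THRESHOLD_HU) & (exp_hu < EXP_THRESHOLD_HU)
```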
In the present invention, a first volume of the lung cancer on each lung lobe is derived from the first region of the lung cancer, and a second volume of the lung nodules on each lung lobe is derived from the second region of the lung nodules. On the image, the characteristics of lung cancer and lung nodules are very distinctive and their volumes are easy to determine: the lesions can be approximated as circles (spheres in three dimensions) or ellipses and their volumes calculated accordingly, giving the first volume of the lung cancer on each lung lobe and the second volume of the lung nodules on each lung lobe. The first volume is the total volume of all lung cancer lesions on one lobe, and the second volume is the total volume of all lung nodules on the same lobe. For example, following the above embodiments, the first region of lung cancer and the second region of lung nodules on each lung lobe may be obtained, each region may be fitted to a circle (a sphere when considered as a three-dimensional structure) by surface fitting, and the first volume corresponding to the lung cancer region and the second volume corresponding to the lung nodule region can then be conveniently calculated from the fitting result.
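A minimal sketch of one way to realize the sphere approximation, assuming each detected lesion is available as a voxel mask with known voxel spacing; estimating the sphere diameter from the bounding-box extent is an assumption, not the patent's prescribed fitting method.

```python
import numpy as np

def sphere_fit_volume(lesion_mask: np.ndarray,
                      voxel_spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Approximate one lesion as a sphere whose diameter is the mean bounding-box extent."""
    coords = np.argwhere(lesion_mask)          # assumes the mask contains at least one voxel
    extents_mm = (coords.max(axis=0) - coords.min(axis=0) + 1) * np.asarray(voxel_spacing_mm)
    radius = float(extents_mm.mean()) / 2.0
    return 4.0 / 3.0 * np.pi * radius ** 3

def total_disease_volume(lesion_masks, voxel_spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """First or second volume of a lobe: sum over all lesions of that disease on the lobe."""
    return sum(sphere_fit_volume(m, voxel_spacing_mm) for m in lesion_masks)
```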
In the lung cancer and lung nodule case, the trend grade of the first disease of the corresponding lung lobe is determined, based on the known grade of the lung cancer, from the ratio of the first volume to the second volume of each lung lobe and a preset threshold.
For example, on the upper left lobe, based on the known grade of the lung cancer, the trend grade of the lung cancer for the upper left lobe may be determined by comparing the ratio of the first volume of the upper left lobe to the second volume of the upper left lobe with a preset threshold.
The method for determining the trend grade of the first disease of the corresponding lung lobe, based on the known grade of the first disease, from the ratio of the first region to the second region of each lung lobe and a preset threshold is as follows: if the ratio is greater than the preset threshold, the trend grade is the grade immediately below (lighter than) the known grade of the first disease; otherwise, the trend grade is the grade immediately above (more severe than) the known grade of the first disease.
For example, suppose that on the upper left lobe the known grade of the lung cancer is stage IB and the ratio of the first volume of the upper-left-lobe lung cancer region to the second volume of the upper-left-lobe lung nodule region is smaller than or equal to the preset threshold. This indicates that although the lung cancer region of the upper left lobe is small, its lung nodule region is large and the probability of the nodule region transforming into a cancer region is high, so the lung cancer trend grade of the upper left lobe can be predicted as the previous, more severe level, stage II (stage IIA or stage IIB). If the ratio is greater than the preset threshold, the lung cancer trend grade of the upper left lobe can be predicted as the next, lighter grade, stage IA. In the case where the first disease is lung cancer, the preset threshold may be 0.2.
In an embodiment of the invention, a first volume of the slow obstructive lung over each lobe is derived from the first region of the slow obstructive lung, respectively, and a second volume of the small airway lesion over each lobe is derived from the second region of the small airway lesion, respectively. In the image, the characteristics of small airway lesions are not obvious, and the first volume and the second volume on each lung lobe are not easy to calculate, so that the lung lobe segmentation image can be subjected to gridding processing to respectively calculate the grid volumes of the first area and the second area after gridding, the sum of the grid volumes of the first area is the first volume, and the sum of the grid volumes of the second area is the second volume.
In an embodiment of the present invention, a specific method for performing gridding processing on the lung lobe segmentation image is as follows: determine the number of layers (slices) of the full inhalation phase lung image or the full exhalation phase lung image; perform gridding processing on the chronic obstructive pulmonary region of each layer of each lung lobe and on the small airway lesion region of each layer of each lung lobe, to obtain the grid area of the chronic obstructive pulmonary region and the grid area of the small airway lesion region of each layer of each lung lobe; and determine the volume of the chronic obstructive pulmonary region of each lung lobe (the first volume) and the volume of the small airway lesion region of each lung lobe (the second volume) from these grid areas, the number of layers, the scanning layer thickness and the layer spacing. The number of layers, the scanning layer thickness and the layer spacing of the full inhalation phase lung image and the full exhalation phase lung image are the same.
In an embodiment of the present invention, the method for determining the volume of the chronic obstructive pulmonary region of each lung lobe (the first volume) is: obtain the first sub-volume formed by two adjacent layers of the chronic obstructive pulmonary region using the corresponding grid areas of the two layers, the inter-layer spacing and the layer thickness, and obtain the first volume of the chronic obstructive pulmonary region of the lung lobe as the sum of the first sub-volumes formed by all pairs of adjacent layers on that lobe. The structure formed by two adjacent layers of the chronic obstructive pulmonary region can be regarded as a prism frustum whose upper and lower base areas are the grid areas of the region on the two layers and whose height is determined by the inter-layer spacing and the layer thickness; for example, with N layers, the height of the frustum formed by the first and second layers may be the sum of twice the layer thickness and the inter-layer spacing, and the height of the frustum formed by any other two adjacent layers may be the sum of the layer thickness and the inter-layer spacing. The first sub-volume of the chronic obstructive pulmonary region formed by two adjacent layers can then be determined from the upper and lower base areas and the height, and the first volume is obtained as the sum of the first sub-volumes.
The method for determining the volume of the small airway lesion region of each lung lobe (the second volume) is analogous: obtain the second sub-volume formed by two adjacent layers of the small airway lesion region using the corresponding grid areas of the two layers, the inter-layer spacing and the layer thickness, and obtain the second volume of the small airway lesion region of the lung lobe as the sum of the second sub-volumes formed by all pairs of adjacent layers on that lobe. The structure formed by two adjacent layers of the small airway lesion region can likewise be regarded as a prism frustum whose upper and lower base areas are the grid areas of the region on the two layers and whose height is determined by the inter-layer spacing and the layer thickness; for example, with N layers, the height of the frustum formed by the first and second layers may be the sum of twice the layer thickness and the inter-layer spacing, and the height of the frustum formed by any other two adjacent layers may be the sum of the layer thickness and the inter-layer spacing. The second sub-volume of the small airway lesion region formed by two adjacent layers can then be determined from the upper and lower base areas and the height, and the second volume is obtained as the sum of the second sub-volumes.
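A minimal sketch of the per-lobe volume computation from per-slice grid areas; the conical-frustum formula and the special handling of the first pair of layers are assumptions consistent with the description above.

```python
import math

def frustum_volume(a1: float, a2: float, height: float) -> float:
    """Volume of a frustum with base areas a1, a2 and the given height."""
    return height / 3.0 * (a1 + a2 + math.sqrt(a1 * a2))

def region_volume(slice_areas, layer_thickness: float, layer_spacing: float) -> float:
    """slice_areas: grid area of the region (COPD or small airway lesion) on each layer."""
    total = 0.0
    for i in range(len(slice_areas) - 1):
        height = layer_thickness + layer_spacing
        if i == 0:  # first pair of layers uses twice the layer thickness, per the text
            height = 2.0 * layer_thickness + layer_spacing
        total += frustum_volume(slice_areas[i], slice_areas[i + 1], height)
    return total
```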
In the embodiment of the present invention, the specific method for obtaining the grid area of the chronic obstructive pulmonary region of each layer of each lung lobe by gridding that region is as follows. Step 1: determine the edge of the chronic obstructive pulmonary region of each lung lobe on each layer, and determine a first specification shape inside that region. Step 2: extend the first specification shape towards the edge of the chronic obstructive pulmonary region of that lung lobe on that layer. Step 3: when the set points of the specification shape contact the edge of the region, the first specification shape stops extending and a second specification shape is generated outside it. Steps 2 and 3 are repeated until the number of specification shapes reaches a set number; the areas of all the specification shapes are then calculated to obtain the grid area of the chronic obstructive pulmonary region of that lung lobe on that layer. The areas of the successive specification shapes, from the first through the second up to the set number, decrease in sequence.
In an embodiment of the present invention, the specific method for obtaining the grid area of the small airway lesion region of each layer of each lung lobe by gridding that region is as follows. Step 1: determine the edge of the small airway lesion region of each lung lobe on each layer, and determine a first specification shape inside that region. Step 2: extend the first specification shape towards the edge of the small airway lesion region of that lung lobe on that layer. Step 3: when the set points of the specification shape contact the edge of the small airway lesion region, the first specification shape stops extending and a second specification shape is generated outside it. Steps 2 and 3 are repeated until the number of specification shapes reaches a set number; the areas of all the specification shapes are then calculated to obtain the grid area of the small airway lesion region of that lung lobe on that layer. The areas of the successive specification shapes, from the first through the second up to the set number, decrease in sequence.
The first specification shape is preferably located at the geometric center of the chronic obstructive pulmonary region or of the small airway lesion region of each lung lobe. The specification shape may be any one of a circle, an ellipse, a rectangle and a square. If a circle is chosen, several set points, for example 20, can be placed uniformly on its circumference; when any of these set points contacts the edge of the chronic obstructive pulmonary region or the edge of the small airway lesion region, the first specification shape stops extending and a second specification shape is generated outside it. The second specification shape may likewise be a circle, an ellipse, a rectangle or a square, but it is smaller than the first specification shape so as to achieve a finer subdivision. The value of the set number can be chosen by a person skilled in the art according to the required precision.
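The following sketch is a simplified way to realize the shrinking-shape filling, using inscribed circles found with a distance transform instead of the set-point contact test described above; the set number and pixel area are parameters of the illustration, not values fixed by the patent.

```python
import numpy as np
from scipy import ndimage

def grid_area(region_mask: np.ndarray, set_number: int, pixel_area: float = 1.0) -> float:
    """region_mask: 2D boolean mask of the region on one layer of one lobe."""
    uncovered = region_mask.copy()
    total_area = 0.0
    for _ in range(set_number):
        if not uncovered.any():
            break
        # the maximum of the distance transform gives the radius and centre of the
        # largest circle still fitting inside the uncovered part of the region
        dist = ndimage.distance_transform_edt(uncovered)
        radius = dist.max()
        if radius < 1.0:          # remaining shapes would be smaller than one pixel
            break
        cy, cx = np.unravel_index(np.argmax(dist), dist.shape)
        yy, xx = np.ogrid[:uncovered.shape[0], :uncovered.shape[1]]
        circle = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        uncovered &= ~circle                       # subsequent circles land outside this one
        total_area += np.pi * radius ** 2 * pixel_area
    return total_area
```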
In the chronic obstructive pulmonary disease and small airway lesion case, the trend grade of the first disease of the corresponding lung lobe is determined, based on the known grade of the chronic obstructive pulmonary disease, from the ratio of the first volume to the second volume of each lung lobe and a preset threshold.
For example, on the upper left lobe, based on the known grade of the chronic obstructive pulmonary disease, the trend grade for the upper left lobe may be determined by comparing the ratio of the first volume of the upper left lobe to the second volume of the upper left lobe with a preset threshold.
The method for determining the trend grade of the first disease of the corresponding lung lobe, based on the grade of the first disease, from the ratio of the first region to the second region of each lung lobe and a preset threshold is as follows: if the ratio is greater than the preset threshold, the trend grade is the grade immediately below (lighter than) the known grade of the first disease; otherwise, the trend grade is the grade immediately above (more severe than) the known grade of the first disease.
For example, suppose that on the upper left lobe the known grade of the chronic obstructive pulmonary disease is GOLD 2 (moderate) and the ratio of the first volume of the upper-left-lobe chronic obstructive pulmonary region to the second volume of the upper-left-lobe small airway lesion region is smaller than or equal to the preset threshold. This indicates that although the chronic obstructive pulmonary region of the upper left lobe is small, its small airway lesion region is large and the probability of the small airway lesion region converting into a chronic obstructive pulmonary region is high, so the trend grade of the upper left lobe can be predicted as GOLD 3 (severe). If the ratio is greater than the preset threshold, drug treatment is considered feasible and the trend grade of the upper left lobe can be predicted as the next, lighter grade, GOLD 1 (mild). In the case where the first disease is chronic obstructive pulmonary disease, the preset threshold may be 0.4.
In addition, an embodiment of the present invention further provides a disease trend grade determining apparatus, and referring to fig. 4, the disease trend grade determining apparatus includes:
an obtaining module 10, configured to obtain a lung lobe segmentation image;
a first determining module 20, configured to determine a first disease and a second disease of a focal region of each lung lobe in the lung lobe segmentation image, where the second disease may be unidirectionally converted into the first disease;
a second determination module 30 for determining a trend rating of the first disease for each lung lobe based on the known rating of the first disease and the second disease, respectively.
Optionally, the second determining module 30 includes:
a positioning unit for positioning a first region of the first disease and a second region of the second disease of each lung lobe, respectively;
a determining unit, configured to determine a trend grade of the first disease of each lung lobe according to a ratio of the first region to the second region of each lung lobe, respectively, based on a known grade of the first disease.
Optionally, the determining unit is specifically configured to obtain a first volume of the first disease on each lobe of the lung according to the first region, and obtain a second volume of the second disease on each lobe of the lung according to the second region;
calculating a ratio of the first volume to the second volume for each lung lobe separately;
determining a trend grade of the first disease of the corresponding lung lobe according to a ratio of the first volume and the second volume of each lung lobe respectively based on the grade of the first disease.
Optionally, the determining unit is further specifically configured to determine, as the trend level of the first disease, a next level that is lighter than a known level of the first disease in response to a ratio of the first region to the second region being greater than a preset threshold;
determining a previous grade more severe than the known grade of the first disease as a trend grade of the first disease in response to a ratio of the first area to the second area being less than or equal to a preset threshold.
Optionally, the first disease of the focal region of each lung lobe in the lung lobe segmentation image is chronic obstructive pulmonary disease, and the second disease of the focal region of each lung lobe in the lung lobe segmentation image is a small airway lesion; or
The first disease of the focal region of each lung lobe in the lung lobe segmentation image is lung cancer, and the second disease of the focal region of each lung lobe in the lung lobe segmentation image is a lung nodule.
Further, the first determining module is further configured to input each lung lobe segmentation image into a preset neural network, so as to extract image features through a residual network of the preset neural network, and to determine the known grade of the first disease according to a classification network of the preset neural network.
Optionally, the acquiring module is specifically configured to acquire a lung image, where the lung image includes a full inhalation phase lung image and a full exhalation phase lung image; and respectively carrying out lung lobe segmentation on the full inhalation phase lung image and the full exhalation phase lung image to obtain a first lung lobe segmentation image and a second lung lobe segmentation image.
Optionally, the first determining module 20 is specifically configured to determine the first disease according to the first lung lobe segmentation image or the second lung lobe segmentation image; and determining the second disease according to the first lung lobe segmentation image and the second lung lobe segmentation image.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium having a disease trend grade determination program stored thereon which, when executed by a processor, implements the steps of the disease trend grade determination method as described above.
The embodiments of the disease trend grade determining apparatus and the computer-readable storage medium of the present invention can refer to the embodiments of the disease trend grade determining method of the present invention, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A disease trend rating determination method, comprising:
obtaining a lung lobe segmentation image;
determining a first disease and a second disease of a focus area of each lung lobe in the lung lobe segmentation image, wherein the second disease can be unidirectionally converted into the first disease;
determining a trend rating for the first disease for each lung lobe based on the known rating for the first disease and the second disease, respectively.
2. The method of claim 1, wherein said determining a trend rating for said first disease for each lung lobe based on a known rating for said first disease and said second disease, respectively, comprises:
locating a first region of the first disease and a second region of the second disease of each lobe, respectively;
determining a trend rating of the first disease for each lung lobe based on a known rating of the first disease from a ratio of the first region to the second region of each lung lobe, respectively.
3. The method of claim 2, wherein determining a trend rating of the first disease for each lung lobe based on the known rating of the first disease from a ratio of the first region to the second region for each lung lobe, respectively, comprises:
obtaining a first volume of the first disease on each lobe of the lung according to the first region, respectively, and obtaining a second volume of the second disease on each lobe of the lung according to the second region, respectively;
calculating a ratio of the first volume to the second volume for each lung lobe separately;
determining a trend grade of the first disease of the corresponding lung lobe according to a ratio of the first volume and the second volume of each lung lobe respectively based on the grade of the first disease.
4. The method of claim 2, wherein determining a trend rating of the first disease for each lung lobe based on the known rating of the first disease from a ratio of the first region to the second region for each lung lobe, respectively, comprises:
determining a next grade lighter than a known grade of the first disease as a trend grade of the first disease in response to a ratio of the first region to the second region being greater than a preset threshold;
determining a previous grade more severe than the known grade of the first disease as a trend grade of the first disease in response to a ratio of the first region to the second region being less than or equal to a preset threshold.
5. The method according to any one of claims 1 to 4, wherein the first disease of the focal region of each lung lobe in the lung lobe segmentation image is chronic obstructive pulmonary disease, and the second disease of the focal region of each lung lobe in the lung lobe segmentation image is a small airway lesion; or
The first disease of the focal region of each lung lobe in the lung lobe segmentation image is lung cancer, and the second disease of the focal region of each lung lobe in the lung lobe segmentation image is a lung nodule.
6. The method of any one of claims 1-4, wherein prior to determining the trend rating of the first disease for each lung lobe based on the known rating of the first disease and the second disease, respectively, the method further comprises:
inputting each lung lobe segmentation image into a preset neural network to extract image features through a residual network of the preset neural network, and determining the known grade of the first disease according to a classification network of the preset neural network.
7. The method according to any one of claims 1-4, wherein said obtaining a lung lobe segmentation image comprises:
acquiring lung images, wherein the lung images comprise full inhalation phase lung images and full exhalation phase lung images;
respectively carrying out lung lobe segmentation on the full inhalation phase lung image and the full exhalation phase lung image to obtain a first lung lobe segmentation image and a second lung lobe segmentation image;
the determining the first disease and the second disease of the focal region of each lung lobe in the lung lobe segmentation image comprises:
determining the first disease according to the first lung lobe segmentation image or the second lung lobe segmentation image;
and determining the second disease according to the first lung lobe segmentation image and the second lung lobe segmentation image.
8. A disease tendency level determination device, characterized by comprising:
the acquisition module is used for acquiring a lung lobe segmentation image;
the first determining module is used for determining a first disease and a second disease of a focus area of each lung lobe in the lung lobe segmentation image, and the second disease can be unidirectionally converted into the first disease;
a second determination module to determine a trend rating of the first disease for each lung lobe based on the known rating of the first disease and the second disease, respectively.
9. A disease tendency level determination device characterized by comprising: a memory, a processor and a disease trend level determination program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the disease trend level determination method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a disease trend grade determination program is stored on the computer-readable storage medium, which when executed by a processor implements the steps of the disease trend grade determination method according to any one of claims 1 to 7.
CN202010260500.4A 2020-04-03 2020-04-03 Disease trend grade determining method, device, equipment and storage medium Pending CN111326259A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010260500.4A CN111326259A (en) 2020-04-03 2020-04-03 Disease trend grade determining method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010260500.4A CN111326259A (en) 2020-04-03 2020-04-03 Disease trend grade determining method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111326259A true CN111326259A (en) 2020-06-23

Family

ID=71173366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010260500.4A Pending CN111326259A (en) 2020-04-03 2020-04-03 Disease trend grade determining method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111326259A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927217A (en) * 2021-03-23 2021-06-08 内蒙古大学 Thyroid nodule invasiveness prediction method based on target detection
CN112927217B (en) * 2021-03-23 2022-05-03 内蒙古大学 Thyroid nodule invasiveness prediction method based on target detection

Similar Documents

Publication Publication Date Title
CN111640100B (en) Tumor image processing method and device, electronic equipment and storage medium
KR101874348B1 (en) Method for facilitating dignosis of subject based on chest posteroanterior view thereof, and apparatus using the same
EP3021753B1 (en) Systems and methods for determining hepatic function from liver scans
CN113610845B (en) Construction method and prediction method of tumor local control prediction model and electronic equipment
US8290568B2 (en) Method for determining a property map of an object, particularly of a living being, based on at least a first image, particularly a magnetic resonance image
CN111242931B (en) Method and device for judging small airway lesions of single lung lobes
EP3959684A1 (en) Systems and methods for automated and interactive analysis of bone scan images for detection of metastases
US20080107323A1 (en) Computer Diagnosis of Malignancies and False Positives
CN112561908B (en) Mammary gland image focus matching method, device and storage medium
Chang et al. Histopathological correlation of 11C-choline PET scans for target volume definition in radical prostate radiotherapy
CN111340825A (en) Method and system for generating mediastinal lymph node segmentation model
Papież et al. Liver motion estimation via locally adaptive over-segmentation regularization
CN111429447A (en) Focal region detection method, device, equipment and storage medium
CN111681205B (en) Image analysis method, computer device, and storage medium
CN114266729A (en) Chest tumor radiotherapy-based radiation pneumonitis prediction method and system based on machine learning
CN114758175A (en) Method, system, equipment and storage medium for classifying esophagus and stomach junction tumor images
US11935234B2 (en) Method for detecting abnormality, non-transitory computer-readable recording medium storing program for detecting abnormality, abnormality detection apparatus, server apparatus, and method for processing information
CN111326259A (en) Disease trend grade determining method, device, equipment and storage medium
CN111429446A (en) Lung image processing method, device, equipment and storage medium
KR20200057563A (en) Method, device and program for lung image registration for histogram analysis of lung movement and method and program for analysis of registered lung image
CN111477324A (en) Method, device, equipment and storage medium for emphysema disease prediction
CN103356218B (en) Method and system for X-ray computed tomography
CN114375461A (en) Inspiratory metrics for chest X-ray images
CN111415341A (en) Pneumonia stage evaluation method, pneumonia stage evaluation device, pneumonia stage evaluation medium and electronic equipment
CN111326227A (en) Case report generation method, case report generation device, case report generation equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination