WO2021155829A1 - Diagnostic information processing method and device based on medical images, and storage medium - Google Patents

Diagnostic information processing method and device based on medical images, and storage medium

Info

Publication number
WO2021155829A1
WO2021155829A1 (PCT/CN2021/075379)
Authority
WO
WIPO (PCT)
Prior art keywords
lung
volume
medical image
affected part
image
Prior art date
Application number
PCT/CN2021/075379
Other languages
English (en)
French (fr)
Inventor
石磊
臧璇
史晶
Original Assignee
杭州依图医疗技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202010081111.5A (CN111261284A)
Priority claimed from CN202010083597.6A (CN111261285A)
Priority claimed from CN202010096657.8A (CN111160812B)
Application filed by 杭州依图医疗技术有限公司 (Hangzhou Yitu Medical Technology Co., Ltd.)
Priority to US17/760,185 (published as US20230070249A1)
Priority to EP21751295.3A (published as EP4089688A4)
Publication of WO2021155829A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computed tomography [CT]
    • A61B6/032 Transmission computed tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G06T11/206 Drawing of charts or graphs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/0014 Biomedical image inspection using an image reference approach
    • G06T7/0016 Biomedical image inspection using an image reference approach involving temporal comparison
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung

Definitions

  • the present invention relates to the field of Internet technology, and in particular to a method, device and storage medium for processing diagnostic information based on medical images.
  • the invention provides a diagnostic information processing method based on medical images, which is used for realizing the classification of diseases based on the medical images.
  • the present invention provides a diagnostic information processing method based on medical imaging, including:
  • the disease level of the lung of the subject corresponding to the first lung medical image information is determined according to the image parameters of the affected part.
  • the beneficial effect of the present invention is that the image parameters of the affected part in the first lung medical image can be obtained, and the disease level of the subject's lungs corresponding to the first lung medical image information can then be determined according to these image parameters, so that diseases can be classified based on medical images.
  • the acquiring the image parameters of the affected part in the first lung medical image includes:
  • At least one first lung medical image is input into the neural network to determine the volume of the affected part in the first lung medical image.
  • the neural network includes:
  • Inputting at least one first lung medical image into the neural network to determine the volume of the affected part in the first lung medical image includes:
  • the patchy shadow information is passed through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
  • the beneficial effect of this embodiment is that the neural network formed by connecting multiple models can perform patchy shadow detection and volume calculation at the same time, which simplifies the determination of the volume of the affected part.
  • determining the disease level of the subject’s lungs corresponding to the first lung medical imaging information according to the imaging parameters of the affected part includes:
  • the disease level of the lung of the subject is determined.
  • determining the disease level of the subject’s lungs corresponding to the first lung medical imaging information according to the imaging parameters of the affected part includes:
  • the volume of the affected part and the proportion of the volume of the affected part in the lung are input into the disease grade calculation model, which comprehensively calculates the disease grade of the subject's lungs based on these two inputs.
  • the method further includes:
  • the development trend information of the lung disease of the subject is determined according to the change trend of the volume of the affected part.
  • the beneficial effect of this embodiment is that the volume change trend of the affected part can be judged based on different lung medical images of the same subject, so that the development trend information of the lung disease of the subject can be automatically determined by the volume change trend of the affected part.
  • determining the development trend of the lung disease of the subject according to the change trend of the volume of the affected part includes:
  • the second diagnosis result of the subject is determined.
  • the method further includes:
  • the disease development speed of the subject is calculated according to the generation time and the volume change trend of the affected part.
  • the method further includes:
  • the method further includes:
  • This application also provides a method for displaying a diagnostic information interface, including:
  • the forming a first pattern based on the first data includes:
  • the first data is determined.
  • the second data is reference data of the region of interest in the CT image.
  • the second data is CT value density data of the region of interest in the second target CT image acquired at a different time from the first target CT image.
  • the region of interest is included in at least one of the following regions:
  • a diagnostic information interaction method based on medical images including:
  • the disease level of the lung of the subject corresponding to the first lung medical image information is output.
  • This application also provides a diagnostic information evaluation method based on medical imaging, including:
  • the evaluation of the region of interest according to the score of each partition includes:
  • a corresponding score threshold is set, and then based on the score threshold, the severity of the disease of the subject corresponding to the medical image is determined.
  • partitioning the region of interest in the medical image includes:
  • the region of interest is a human lung
  • the N partitions are the right upper lobe, the right middle lobe, the right lower lobe, the left upper lobe, and the left lower lobe.
  • partitioning the region of interest in the medical image includes:
  • the region of interest is a human lung
  • the N partitions are obtained by dividing the left and right lungs of the human lung from top to bottom into six partitions.
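As a rough sketch of this partitioning scheme, the axial (top-to-bottom) slice range of each lung could be split into equal bands. The text does not fully specify whether the six partitions are three bands per lung or six per lung; this sketch assumes three per lung (six in total), and all names are hypothetical:

```python
def six_partitions(z_top, z_bottom):
    """Split the top-to-bottom slice range of each lung into three equal
    bands, yielding six partitions across the left and right lungs."""
    n_slices = z_bottom - z_top
    bounds = [z_top + round(i * n_slices / 3) for i in range(4)]
    bands = [(bounds[i], bounds[i + 1]) for i in range(3)]
    return {f"{side}_{name}": band
            for side in ("left", "right")
            for name, band in zip(("upper", "middle", "lower"), bands)}

# Lung spans axial slices 10..100 in this toy example.
parts = six_partitions(10, 100)
```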
  • the first sign is a patchy area
  • the second sign is a ground glass area
  • obtaining the corresponding scores of the volume proportions of the first sign and the second sign, and obtaining the scores of each zone based on the scores includes:
  • the sum of the first product and the second product is the score of the corresponding partition of the first sign and the second sign.
  • the evaluating the region of interest according to the score of each partition includes:
  • if the score is less than a first threshold, it is determined that the subject corresponding to the medical image has mild pneumonia;
  • if the score is greater than or equal to the first threshold and less than a second threshold, it is determined that the subject corresponding to the medical image has moderate pneumonia;
  • if the score is greater than or equal to the second threshold, it is determined that the subject corresponding to the medical image has severe pneumonia.
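A minimal sketch of the per-partition scoring (sum of the two sign products) and the threshold evaluation above; the weights for the two signs and the two thresholds are placeholder assumptions, since the application does not disclose concrete values:

```python
def partition_score(patchy_fraction, ground_glass_fraction,
                    w_patchy=2.0, w_ground_glass=1.0):
    """Score of one partition: each sign's volume proportion times its
    weight (first product + second product)."""
    return patchy_fraction * w_patchy + ground_glass_fraction * w_ground_glass

def evaluate(partition_scores, first_threshold=3.0, second_threshold=7.0):
    """Map the total score over all partitions onto mild / moderate /
    severe pneumonia."""
    total = sum(partition_scores)
    if total < first_threshold:
        return "mild"
    if total < second_threshold:
        return "moderate"
    return "severe"

# Six partitions, each with 30% patchy area and 10% ground glass area.
scores = [partition_score(0.30, 0.10) for _ in range(6)]
severity = evaluate(scores)
```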
  • This application also provides a diagnostic information evaluation method based on medical imaging, which is characterized in that it includes:
  • outputting the disease level of the subject's lungs corresponding to the first lung medical imaging information according to the imaging parameters of the affected part includes:
  • the volume of the affected part and the proportion of the volume of the affected part in the lung are input into the disease grade calculation model, which comprehensively calculates the disease grade of the subject's lungs based on these two inputs.
  • This application also provides a method for displaying diagnostic information based on medical imaging, including:
  • the diagnostic information includes at least one of the following:
  • the proportion of the volume of the first sign and the second sign, the score obtained based on the volume of the first sign and the second sign, and the evaluation result of the medical image obtained based on the score.
  • the present invention also provides a diagnostic information processing device based on medical images, including:
  • the first acquisition module is used to acquire the first lung medical image of the subject
  • the second acquisition module is used to acquire the image parameters of the affected part in the first lung medical image
  • the determining module is configured to determine the disease level of the lung of the subject corresponding to the first lung medical image information according to the image parameters of the affected part.
  • the second acquisition module includes:
  • the input sub-module is used to input at least one first lung medical image into the neural network to determine the volume of the affected part in the first lung medical image.
  • the neural network includes:
  • the input sub-module is configured to:
  • pass the patchy shadow information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
  • the determining module includes:
  • the comparison sub-module is used to compare the volume of the affected part with a target relationship table, wherein the corresponding relationship between the volume of the affected part and the disease level is stored in the target relationship table;
  • the first determining sub-module is used to determine the disease level of the lung of the subject according to the comparison result.
  • the determining module includes:
  • the calculation sub-module is used to calculate the proportion of the volume of the affected part in the lung
  • the input sub-module is used to input the volume of the affected part and the proportion of the volume of the affected part in the lung into the disease grade calculation model, which comprehensively calculates the disease grade of the subject's lungs based on these two inputs.
  • the device further includes:
  • the third acquisition module is used to acquire the second lung medical image of the subject
  • the fourth acquisition module is used to acquire the volume of the affected part in the second lung medical image
  • the comparison module is used to compare the volume of the affected part in the second lung medical image with the volume of the affected part in the first lung medical image to determine the volume change trend of the affected part;
  • the change trend determination module is used to determine the development trend information of the lung disease of the subject according to the change trend of the volume of the affected part.
  • the change trend determination module includes:
  • the second determining sub-module is used to determine the first diagnosis result of the subject when the volume of the affected part meets the first development trend
  • the third determining sub-module is used to determine the second diagnosis result of the subject when the volume of the affected part meets the second development trend.
  • the device further includes:
  • the fifth acquisition module is used to acquire the generation time of the first lung medical image and the second lung medical image
  • the calculation module is used to calculate the disease development speed of the subject according to the generation time and the volume change trend of the affected part.
  • the device further includes:
  • the first rendering module is configured to render the first lung medical image based on a single color to generate a third lung medical image, wherein the rendered color depth is positively correlated with the CT value;
  • the second rendering module is configured to render the first lung medical image based on multiple colors to generate a fourth lung medical image, wherein different CT values are rendered by different types of colors;
  • the first output module is used to output the first lung medical image, the third lung medical image and/or the fourth lung medical image.
  • the device further includes:
  • the third rendering module is used to render multiple lung medical images with multiple colors, and the parts with different CT values and/or CT value ranges in the rendered lung medical images correspond to different colors;
  • the second output module is used to output multiple rendered lung medical images.
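The two rendering modules above (single-colour depth proportional to CT value, and multi-colour mapping of CT ranges) can be illustrated with a toy implementation; the CT window (-1000 to 400 HU) and the colour palette are assumptions chosen only for illustration:

```python
def render_single_color(ct_slice, ct_min=-1000, ct_max=400):
    """Single-colour rendering: map each CT value onto a depth in
    [0, 255]; a higher CT value gives a deeper colour (positive
    correlation between colour depth and CT value)."""
    span = ct_max - ct_min
    return [[max(0, min(255, round((v - ct_min) * 255 / span)))
             for v in row] for row in ct_slice]

def render_multi_color(ct_slice):
    """Multi-colour rendering: different CT value ranges are rendered
    with different colours (palette is an illustrative assumption)."""
    palette = [(-1000, (0, 0, 128)),    # air: dark blue
               (-500, (0, 128, 255)),   # ground glass range: light blue
               (0, (255, 165, 0)),      # soft tissue: orange
               (200, (255, 0, 0))]      # consolidation: red
    def colour(v):
        chosen = palette[0][1]
        for lower, rgb in palette:
            if v >= lower:
                chosen = rgb
        return chosen
    return [[colour(v) for v in row] for row in ct_slice]

rendered = render_single_color([[-1000, -300, 400]])
```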
  • the present application also provides a non-transitory readable storage medium.
  • when the instructions in the storage medium are executed by the processor in the device, the device can execute the method involved in any of the foregoing embodiments.
  • FIG. 1A is a flowchart of a method for processing diagnostic information based on medical imaging in an embodiment of the present invention
  • FIG. 1B is a schematic diagram of marking the lung area in a medical image by dividing lines.
  • 2A is a flowchart of a method for processing diagnostic information based on medical imaging in another embodiment of the present invention.
  • Fig. 2B is a schematic diagram of the interface of the system implementing the solution provided by the present invention.
  • 3A is a flowchart of a method for processing diagnostic information based on medical imaging in another embodiment of the present invention.
  • Figure 3B is a schematic diagram of the development trend assessment of different courses of new coronavirus pneumonia
  • Figure 3C is a comparison diagram of the first lung medical image and the lung medical image rendered in different ways
  • Figure 3D is a schematic diagram of the comparison between the CT value of normal lungs and the distribution of CT values of lungs with specific diseases
  • FIG. 4 is a block diagram of a diagnostic information processing device based on medical images in an embodiment of the present invention.
  • FIG. 5 is a flowchart of a method for displaying a diagnostic information interface in an embodiment of the present invention
  • Fig. 6 is a schematic diagram of comparison between the first graph and the second graph
  • FIG. 7 is a flowchart of a diagnostic information interaction method based on medical imaging in an embodiment of the present invention.
  • FIG. 8 is a flowchart of a method for evaluating diagnostic information based on medical images in an embodiment of the present invention.
  • Figure 9 is a schematic diagram of the distribution of human lung segments in medical images
  • Figure 10 is a schematic diagram of dividing the human lung into six partitions by dividing lines
  • FIG. 11 is a flowchart of a method for evaluating diagnostic information based on medical images in an embodiment of the present invention.
  • FIG. 12 is a flowchart of a method for evaluating diagnostic information based on medical images in an embodiment of the present invention.
  • FIG. 13 is a flowchart of a method for evaluating diagnostic information based on medical imaging in an embodiment of the present invention.
  • FIG. 14 is a flowchart of a method for displaying diagnostic information based on medical imaging in an embodiment of the present invention.
  • Figure 15 is the new coronavirus pneumonia evaluation interface
  • FIG. 16 is a flowchart corresponding to a general embodiment in an embodiment of the present invention.
  • FIG. 1A is a flowchart of a method for processing diagnostic information based on medical imaging in an embodiment of the present invention. As shown in FIG. 1A, the method can be implemented as the following steps S11-S13:
  • step S11 the first lung medical image of the subject is acquired
  • step S12 image parameters of the affected part in the first lung medical image are acquired
  • step S13 the disease level of the lung of the subject corresponding to the first lung medical image information is determined according to the image parameters of the affected part.
  • the first lung medical image of the subject is acquired; the first lung medical image may be a CT image of the subject’s chest.
  • the lung area has been marked. Specifically, this can be achieved by manual labeling.
  • a step of segmenting the lung region can also be included.
  • the chest medical image is input into a pre-trained neural network for segmenting the lung region, so that the chest region can be analyzed through the neural network.
  • the lung regions in medical images are identified and labeled. Specifically, after the lungs are identified through the neural network, the lungs are labeled with segmentation lines. As shown in Figure 1B, the lungs are labeled with black segmentation lines.
  • segmentation line can also be in other colors.
  • the lung area in the chest image can be labeled, so as to obtain the first lung medical image.
  • this segmentation step also allows the user to verify the accuracy of the segmentation result.
  • the CT value of the affected area in this medical image is different from the CT value of the normal lung area.
  • involvement refers to the function or organic change of a certain organ or tissue caused by disease
  • the affected part refers to the part of the function or organic change caused by the disease.
  • CT chest imaging can display and characterize the corresponding lesions through the images of the affected parts, such as lungs infected by a coronavirus, for example the novel coronavirus (2019-nCoV).
  • the image parameters of the affected part in the first lung medical image are then acquired. Specifically, at least one first lung medical image can be input into the neural network to determine the image parameters of the affected part in the first lung medical image; usually, the image parameters include the volume of the affected part.
  • the disease level of the subject’s lungs corresponding to the first lung medical imaging information can be determined by the following methods:
  • a relationship table is created in advance, and the relationship table contains the corresponding relationship between the volume of the affected part and the disease level.
  • the volume of the affected part can be compared with the target relationship table, where the corresponding relationship between the volume of the affected part and the disease level is stored in the target relationship table; the disease level of the lung of the subject is determined according to the comparison result.
  • the beneficial effect of the present invention is that the image parameters of the affected part in the first lung medical image can be obtained, and then the disease level of the subject’s lungs corresponding to the first lung medical image information can be determined according to these image parameters, so that diseases can be classified based on medical images.
  • step S12 may be implemented as the following steps:
  • At least one first lung medical image is input into the neural network to determine the volume of the affected part in the first lung medical image.
  • the neural network includes:
  • the normal CT value distribution interval in the lung, the CT value distribution interval of the affected part, and at least one first lung medical image are input into the neural network to determine the volume of the affected part in the first lung medical image. This can be implemented as the following steps A1-A6:
  • step A1 pass at least one first lung medical image through the N continuous convolution feature extraction modules in the first detection model, so that the N continuous convolution feature extraction modules obtain the image features of the affected part in the first lung medical image, where N is a positive integer;
  • step A2 the image features of the affected part in the first lung medical image are input to the fully connected layer in the first detection model, so that the fully connected layer outputs candidate patchy shadows based on the image features;
  • step A3 pass the candidate patchy shadows through the cutting model, so that the cutting model cuts them multiple times in different spatial directions to obtain multiple slice images of the candidate patchy shadows in multiple spatial directions;
  • step A4 the multiple continuous slice images are passed through the M continuous convolution feature extraction modules in the second detection model, so that the M continuous convolution feature extraction modules obtain the image features of the slice images, where M is a positive integer;
  • step A5 the image features of the slice images are input to the fully connected layer in the second detection model, so that the fully connected layer outputs patchy shadow information based on the image features;
  • step A6 the patchy shadow information is passed through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
  • the neural network is formed by connecting multiple models, and includes a first detection model for detecting candidate patchy shadows, a cutting model, a second detection model for detecting patchy shadow information, and a volume calculation model used to calculate the volume of the affected part.
  • the first detection model includes an input layer, N consecutive convolution feature extraction modules, a fully connected layer, and an output layer.
  • the convolution feature extraction module includes multiple convolution modules.
  • the convolution module includes a convolution layer, a batch normalization (BN) layer, and an activation layer.
  • the second detection model has the same structure as the first detection model, and will not be repeated here.
  • the image features output by the first convolution feature extraction module and the second convolution feature extraction module are added together as the input of the third convolution feature extraction module.
  • the number M of convolution feature extraction modules in the second detection model in the above steps may be equal to the number N of convolution feature extraction modules in the first detection model, or may not be equal to N.
  • the beneficial effect of this embodiment is that the neural network formed by connecting multiple models can perform patchy shadow detection and volume calculation at the same time, which simplifies the determination of the volume of the affected part.
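Steps A1-A6 chain several trained models together. The following sketch wires hypothetical stand-in stub functions in the same order so the data flow is visible; every stub merely mimics the interfaces implied by the text, and the final volume is computed as a confirmed-voxel count times an assumed per-voxel volume:

```python
def pipeline(images, voxel_volume_mm3=1.0):
    """Sketch of steps A1-A6: detect candidate patchy shadows, slice
    them in several spatial directions, confirm them with a second
    detector, then convert confirmed voxels into a volume."""
    candidates = first_detection_model(images)                    # A1-A2
    slices = cutting_model(candidates)                            # A3
    confirmed = second_detection_model(slices)                    # A4-A5
    return volume_calculation_model(confirmed, voxel_volume_mm3)  # A6

# Hypothetical stand-ins for the trained models described in the text.
def first_detection_model(images):
    # pretend every non-zero voxel is a candidate patchy-shadow voxel
    return [v for img in images for row in img for v in row if v]

def cutting_model(candidates):
    # a single "slice" stands in for the multi-direction cuts
    return [candidates]

def second_detection_model(slices):
    # confirm everything the first detector proposed
    return [v for s in slices for v in s]

def volume_calculation_model(voxels, voxel_volume_mm3):
    return len(voxels) * voxel_volume_mm3

# One 2x3 "image" with three non-zero voxels, 2 mm^3 per voxel.
volume = pipeline([[[0, 1, 1], [1, 0, 0]]], voxel_volume_mm3=2.0)
```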
  • step S13 can be implemented as the following steps S21-S22:
  • step S21 the volume of the affected part is compared with the target relationship table, where the corresponding relationship between the volume of the affected part and the disease level is stored in the target relationship table;
  • step S22 the disease level of the lungs of the subject is determined according to the comparison result.
  • a relationship table is created in advance, and the relationship table contains the corresponding relationship between the volume of the affected part and the disease level.
  • the volume of the affected part can be compared with the target relationship table, where the corresponding relationship between the volume of the affected part and the disease level is stored in the target relationship table; the disease level of the lung of the subject is determined according to the comparison result.
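The table comparison of steps S21-S22 amounts to a banded lookup. A minimal sketch follows; the volume bands and level names in this hypothetical target relationship table are invented for illustration:

```python
# Illustrative target relationship table: each row maps a lower bound on
# the affected-part volume (ml) to a disease level. The thresholds are
# assumptions, not values disclosed in the application.
TARGET_RELATIONSHIP_TABLE = [(0.0, "level 1"), (50.0, "level 2"),
                             (200.0, "level 3"), (500.0, "level 4")]

def disease_level(affected_volume_ml):
    """Compare the affected-part volume with the target relationship
    table and return the corresponding disease level."""
    level = TARGET_RELATIONSHIP_TABLE[0][1]
    for lower_bound, name in TARGET_RELATIONSHIP_TABLE:
        if affected_volume_ml >= lower_bound:
            level = name
    return level

grade = disease_level(260.0)  # falls in the 200-500 ml band
```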
  • step S13 can be implemented as the following steps B1-B2:
  • step B1 calculate the proportion of the volume of the affected part in the lung
  • step B2 the volume of the affected part and the proportion of that volume in the lung are input into the disease grade calculation model, so that the disease grade calculation model comprehensively calculates the disease level of the subject's lungs from both values.
  • the volume percentage of the affected part in the lung is calculated; the volume of the affected part and this volume percentage are input into the disease grade calculation model, so that the disease grade of the subject's lungs is obtained by a comprehensive calculation based on both values.
  • the volume percentage of the specific affected part in the lung can also be calculated by the pre-trained volume percentage calculation model. After the medical image is input into the volume percentage calculation model, the model can automatically give the volume percentage of each CT interval.
  • Figure 2B is a schematic diagram of the interface of the system implementing the solution provided by the present invention. As shown in Figure 2B, the volume of the affected area calculated by the volume percentage calculation model is presented in the two-lung volume analysis column of the interface.
  • the method may also be implemented as the following steps S31-S34:
  • step S31 a second lung medical image of the subject is acquired
  • step S32 the volume of the affected part in the second lung medical image is obtained
  • step S33 the volume of the affected part in the second lung medical image is compared with the volume of the affected part in the first lung medical image to determine the volume change trend of the affected part;
  • step S34 the development trend information of the lung disease of the subject is determined according to the change trend of the volume of the affected part.
  • a second lung medical image of the subject is acquired.
  • the second lung medical image and the first lung medical image in the previous embodiment are lung medical images of the same subject in different periods. Compare the volume of the affected part in the second lung medical image with the volume of the affected part in the first lung medical image to determine the volume change trend of the affected part; determine the development of the subject's lung disease according to the volume change trend of the affected part Trend information.
  • the subject's disease will worsen or ease over time, so the development trend of the subject's lung disease can be determined from lung medical images of different periods. Specifically, the ID of the subject is first obtained, and the second lung medical image of the subject is retrieved by that ID. The second lung medical image may be generated either earlier or later than the first lung medical image, as long as the two images are generated at different times. In addition, because the disease does not change significantly over too small a time span, the interval between the generation times of the first and second lung medical images should be no less than a certain value, such as 48 hours.
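The minimum-interval requirement described above can be checked directly, regardless of which image was generated first. The 48-hour default comes from the example in the text:

```python
from datetime import datetime, timedelta

MIN_INTERVAL = timedelta(hours=48)  # example minimum gap named in the text

def is_valid_comparison_pair(time_first, time_second, min_interval=MIN_INTERVAL):
    """Return True when two lung medical images were generated far enough
    apart for a meaningful volume comparison. Either image may be the
    earlier one; only the absolute interval matters."""
    return abs(time_first - time_second) >= min_interval

t1 = datetime(2020, 2, 1, 9, 0)
t2 = datetime(2020, 2, 4, 9, 0)   # 72 hours later
```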
  • Figure 3B is a schematic diagram of the evaluation of novel coronavirus pneumonia.
  • the schematic diagram contains the comparison results of the first lung medical image and the second lung medical image.
  • after the second lung medical image of the subject is obtained, the volume of the affected part in the second lung medical image is acquired and compared with the volume of the affected part in the first lung medical image to determine the volume change trend of the affected part, and the development trend information of the subject's lung disease is determined according to that trend.
  • in FIG. 3B, the new pneumonia assessment interface on the right side of the figure shows that the volume of the affected part of the right lung has decreased from 20% to 10%, and that of the left lung from 30% to 20%; that is, the volume of the affected part decreases over time, so it is determined that the subject's lung disease is easing. It is understandable that if the volume of the affected part increases over time, it is determined that the subject's lung disease is worsening.
  • the volume change trend of the affected part can be expressed in a more intuitive way.
  • the arrow indicates the change trend of the volume of the affected part
  • an arrow combined with a specific value indicates the change trend of the volume of the affected part; of course, other representations are also possible and are not repeated here.
  • the beneficial effect of this embodiment is that the volume change trend of the affected part can be judged based on different lung medical images of the same subject, so that the development trend information of the lung disease of the subject can be automatically determined by the volume change trend of the affected part.
  • step S34 can be implemented as the following steps C1-C2:
  • step C1 when the volume of the affected part meets the first development trend, the first diagnosis result of the subject is determined
  • step C2 when the volume of the affected part meets the second development trend, the second diagnosis result of the subject is determined.
  • the generation time of the first lung medical image is later than the second lung medical image
  • the volume of the affected part in the first lung medical image is smaller than the volume of the affected part in the second lung medical image
  • the volume of the affected part decreases.
  • the generation time of the first lung medical image is earlier than the second lung medical image
  • the volume of the affected part in the first lung medical image is greater than the volume of the affected part in the second lung medical image
  • the volume of the affected part decreases.
  • when the volume of the affected part decreases, the first diagnosis result of the subject is determined, that is, the subject's disease state is easing.
  • assuming that the generation time of the first lung medical image is later than that of the second, then when the volume of the affected part in the first lung medical image is greater than that in the second, the volume of the affected part increases. Assuming that the generation time of the first lung medical image is earlier than that of the second, then when the volume of the affected part in the first lung medical image is smaller than that in the second, the volume of the affected part increases. When the volume of the affected part increases, the second diagnosis result of the subject is determined, that is, the subject's condition is worsening.
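The four cases enumerated above reduce to ordering the two measurements by generation time and comparing the volumes. A minimal sketch (function and label names are illustrative, not from the patent):

```python
def volume_change_trend(volume_first, time_first, volume_second, time_second):
    """Order the two affected-part volume measurements by generation time
    and report the volume change trend, covering all four cases in the
    text regardless of which image was generated first."""
    if time_first < time_second:
        v_early, v_late = volume_first, volume_second
    else:
        v_early, v_late = volume_second, volume_first
    if v_late < v_early:
        return "reducing"       # first diagnosis result: disease is easing
    if v_late > v_early:
        return "worsening"      # second diagnosis result: disease is worsening
    return "stable"
```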
  • the method can also be implemented as the following steps D1-D2:
  • step D1 obtain the generation time of the first lung medical image and the second lung medical image
  • step D2 the disease development speed of the subject is calculated according to the generation time and the volume change trend of the affected part.
  • the generation times of the first and second lung medical images can be obtained, the interval between them determined from those generation times, and the volume change of the affected part per unit time then calculated from that interval and the magnitude of the volume change, so as to obtain the disease development speed of the subject.
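Steps D1-D2 are a simple rate computation: volume change divided by the generation-time interval. A sketch, with time expressed in hours for illustration:

```python
def disease_development_speed(volume_first, time_first, volume_second, time_second):
    """Step D2 sketch: volume change of the affected part per unit time.

    Positive values mean the affected volume is growing over time,
    negative values mean it is shrinking. Times are in hours here.
    """
    interval = time_second - time_first
    if interval == 0:
        raise ValueError("images must be generated at different times")
    return (volume_second - volume_first) / interval

# 300 ml shrinking to 200 ml over 48 hours
speed = disease_development_speed(300.0, 0.0, 200.0, 48.0)
```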
  • the method can also be implemented as the following steps E1 and/or E2-E3:
  • step E1 the first lung medical image is rendered based on a single color to generate a third lung medical image, wherein the rendered color depth is positively correlated with the CT value;
  • step E2 the first lung medical image is rendered based on multiple colors to generate a fourth lung medical image, wherein different CT values are rendered by different types of colors;
  • step E3 the first lung medical image, the third lung medical image and/or the fourth lung medical image are output.
  • the volume of the lesion can be calculated according to the CT value interval selected by the user and displayed in the form of "rendering".
  • the first lung medical image is rendered based on a single color to generate the third lung medical image, where the rendered color depth is positively correlated with the CT value; the first lung medical image is then rendered based on multiple colors to generate the fourth lung medical image, where different CT values are rendered with different types of colors; finally, the first lung medical image, the third lung medical image, and the fourth lung medical image are output.
  • the specific output picture format can be shown in Figure 3C.
  • the left side is the first lung medical image of the subject.
  • the first lung medical image is a chest CT image containing the lungs, and the middle image is the first lung medical image rendered in a single color, with different CT values shown at different color depths.
  • for example, the higher the CT value, the darker the color; alternatively, the higher the CT value, the lighter the color.
  • the cross-sectional view on the right is marked with changing colors. For example, a plurality of CT value intervals may be set, the area falling in the low CT value interval is rendered in blue, and the area falling in the high CT value interval is rendered in red.
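The two rendering modes (steps E1 and E2) can be sketched as value-to-color mappings. The window bounds, interval cut-offs, and color choices below are illustrative assumptions; the patent only requires that color depth correlate positively with the CT value and that different CT intervals get different color types:

```python
def single_color_depth(ct_value, ct_min=-1000.0, ct_max=400.0):
    """Step E1 sketch: map a CT value to a color depth in [0, 1], where a
    higher CT value yields a deeper rendering. Window bounds are
    hypothetical, not taken from the patent."""
    clamped = min(max(ct_value, ct_min), ct_max)
    return (clamped - ct_min) / (ct_max - ct_min)

def multi_color(ct_value):
    """Step E2 sketch: render different CT value intervals with different
    color types, e.g. low values blue and high values red."""
    if ct_value < -500.0:
        return "blue"
    if ct_value < 0.0:
        return "green"
    return "red"
```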
  • in step E3, only the first lung medical image and the third lung medical image may be output, only the first lung medical image and the fourth lung medical image may be output, or the first lung medical image, the third lung medical image, and the fourth lung medical image may all be output at the same time.
  • the method can also be implemented as the following steps F1-F2:
  • step F1 multiple lung medical images are rendered using multiple colors, and parts of the rendered lung medical images with different CT values and/or CT value ranges correspond to different colors;
  • step F2 multiple rendered lung medical images are output.
  • the lung medical images of the same patient with different courses can be rendered to enhance the comparison effect.
  • the lung medical images of the same subject for three consecutive days can be rendered through multiple colors.
  • the parts with different CT values and/or CT value ranges in the lung medical image correspond to different colors, and then multiple rendered lung medical images are output. Therefore, the black and white CT image is rendered into a color image, thereby enhancing the effect of the image, and obtaining rendered medical images of the lungs of the same subject with different courses of disease, which facilitates the comparison of the disease states of different courses.
  • a comparison diagram of the distribution of normal lung CT values and specific disease lung CT values can be given.
  • for novel coronavirus pneumonia, the chest CT images of a large number of healthy people can be analyzed; the lung CT value data of the normal population is given as a baseline reference and drawn as a histogram, and metrics such as the intersection of the CT value distributions of the healthy population and the patient, the Hellinger coefficient, etc., are provided for doctors to compare.
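The two comparison metrics named above can be computed directly from normalized CT-value histograms. A self-contained sketch (the example histograms are fabricated for illustration):

```python
import math

def histogram_intersection(p, q):
    """Overlap of two normalized CT-value histograms (1.0 = identical)."""
    return sum(min(pi, qi) for pi, qi in zip(p, q))

def hellinger_distance(p, q):
    """Hellinger distance between two normalized histograms: 0 means the
    distributions coincide, 1 means they are completely disjoint."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))  # Bhattacharyya coefficient
    return math.sqrt(max(0.0, 1.0 - bc))

healthy = [0.5, 0.3, 0.2]   # illustrative normalized bin weights
patient = [0.2, 0.3, 0.5]
```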
  • the specific comparison diagram is shown in Figure 3D.
  • the CT histogram with large changes is the histogram corresponding to the new type of coronavirus pneumonia, which can accurately and quickly assess the severity of the current new type of coronavirus pneumonia.
  • Fig. 4 is a block diagram of a medical image-based diagnostic information processing device, or display device, or interactive device in an embodiment of the present invention. As shown in Fig. 4, the device includes:
  • the first acquiring module 41 is used to acquire the first lung medical image of the subject
  • the second acquiring module 42 is used to acquire the image parameters of the affected part in the first lung medical image
  • the determining module 43 is configured to determine the disease level of the lung of the subject corresponding to the first lung medical image information according to the image parameters of the affected part.
  • the second acquisition module includes:
  • the input sub-module is used to input at least one first lung medical image into the neural network to determine the volume of the affected part in the first lung medical image.
  • the neural network includes a volume calculation model, and the input submodule is further used to: pass the patch shadow information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
  • the determining module includes:
  • the comparison sub-module is used to compare the volume of the affected part with a target relationship table, wherein the corresponding relationship between the volume of the affected part and the disease level is stored in the target relationship table;
  • the first determining sub-module is used to determine the disease level of the lung of the subject according to the comparison result.
  • the determining module includes:
  • the calculation sub-module is used to calculate the proportion of the volume of the affected part in the lung
  • the input sub-module is used to input the volume of the affected part and the proportion of that volume in the lung into the disease grade calculation model, so that the disease grade of the subject's lungs is obtained by a comprehensive calculation based on both values.
  • the device further includes:
  • the third acquisition module is used to acquire the second lung medical image of the subject
  • the fourth acquisition module is used to acquire the volume of the affected part in the second lung medical image
  • the comparison module is used to compare the volume of the affected part in the second lung medical image with the volume of the affected part in the first lung medical image to determine the volume change trend of the affected part;
  • the change trend determination module is used to determine the development trend information of the lung disease of the subject according to the change trend of the volume of the affected part.
  • the change trend determination module includes:
  • the second determining sub-module is used to determine the first diagnosis result of the subject when the volume of the affected part meets the first development trend
  • the third determining sub-module is used to determine the second diagnosis result of the subject when the volume of the affected part meets the second development trend.
  • the device further includes:
  • the fifth acquisition module is used to acquire the generation time of the first lung medical image and the second lung medical image
  • the calculation module is used to calculate the disease development speed of the subject according to the generation time and the volume change trend of the affected part.
  • the device further includes:
  • the first rendering module is configured to render the first lung medical image based on a single color to generate a third lung medical image, wherein the rendered color depth is positively correlated with the CT value;
  • the second rendering module is configured to render the first lung medical image based on multiple colors to generate a fourth lung medical image, wherein different CT values are rendered by different types of colors;
  • the first output module is used to output the first lung medical image, the third lung medical image and/or the fourth lung medical image.
  • the device further includes:
  • the third rendering module is used to render multiple lung medical images with multiple colors, and the parts with different CT values and/or CT value ranges in the rendered lung medical images correspond to different colors;
  • the second output module is used to output multiple rendered lung medical images.
  • Fig. 5 is a flowchart of a method for displaying a diagnostic information interface in an embodiment of the present invention. As shown in Fig. 5, the method can be implemented as the following steps S51-S53:
  • step S51 a first graph is formed based on the first data, where the first graph is represented by a first color, and the first data is CT value density data of the region of interest in the first target CT image;
  • step S52 a second graphic is formed based on the second data; wherein the second graphic is represented by a second color;
  • step S53 the overlapping part of the first figure and the second figure is determined, and the overlapping part is indicated by the third color.
  • the first pattern is formed based on the first data.
  • the first data can be obtained in the following manner: in response to acquiring the CT value density data of the region of interest in the first target CT image, the first data is determined. Then, a second pattern is formed based on the second data.
  • the first graph and the second graph are histograms corresponding to the CT value density data, as shown in Figure 3D or Figure 6. After the first graph and the second graph are formed, they can be placed in the same coordinate system to form a comparison graph of the CT value probability distribution of the first target CT image against other data, so that the two graphs represent the disease severity in the first target CT image more intuitively.
  • the histograms in the embodiments involved in the present disclosure can be constructed based on 3D medical images, for example, based on the mass points of corresponding parts in the three-dimensional CT chest image. Therefore, the histograms involved in the present disclosure can be defined as 3D-CT value histograms.
  • the beneficial effects of the present application are: a first pattern is formed based on the first data, and a second pattern is formed based on the second data, where the first data is the CT value density data of the region of interest in the first target CT image, so that the first data can be compared with the second data. When the second data is the normal CT value density data of the corresponding part of a CT image, it is convenient for the user to judge the severity of the disease by comparing the first graph with the second graph; therefore, the scheme makes the manifestation of disease severity more intuitive.
  • forming the first pattern based on the first data includes:
  • the first data is determined.
  • the second data is the reference (benchmark) data of the region of interest in the CT image. The benchmark data can be data defined by the doctor as the benchmark, the standard data in the industry, or the average data of normal people. For example, if the first data is the lung CT density data of a patient with a lung disease (such as a patient with novel coronavirus pneumonia), then the second data can be user-defined data, industry standard data, the average data of a normal person, or the lung CT value density data of the same patient in another period (such as before contracting the disease or after recovery).
  • when the second data is the average data of a normal person, the lower the similarity between the first graph and the second graph, the higher the severity of the disease of the subject corresponding to the first target CT image, and the higher the similarity, the lower the severity; when the similarity between the first pattern and the second pattern is greater than a certain value (for example, 95%), the subject can be considered not ill or healed.
  • the second data is CT value density data of the region of interest in the second target CT image acquired at a different time from the first target CT image.
  • the second data is the CT value density data of the region of interest in a second target CT image acquired at a different time from the first target CT image; for example, for the same subject, acquiring the subject's CT value density data in different periods can more intuitively express the development trend of the subject's lung disease.
  • the region to be processed is outlined from the processed image in the form of a box, circle, ellipse, irregular polygon, etc., and is called the region of interest. The region of interest may be included in at least one of the following areas:
  • the human lung organ can be outlined by a shape that fits it completely. The human lung organ outlined by the black irregular polygon is the region of interest, so that the subsequent algorithm can focus on the region of interest, reducing the amount of calculation in subsequent processing steps.
  • FIG. 7 is a flowchart of a method for interactive diagnosis information based on medical imaging in an embodiment of the present invention. As shown in FIG. 7, the method can be implemented as the following steps S71-S73:
  • step S71 the first lung medical image of the subject is acquired
  • step S72 image parameters of the affected part in the first lung medical image are acquired
  • step S73 the disease level of the lung of the subject corresponding to the first lung medical image information is output according to the image parameters of the affected part.
  • the interactive method of the embodiment involved in the present disclosure may be based on the aforementioned diagnostic information processing method, including determining the disease level of the subject's lung corresponding to the first lung medical image information.
  • the first lung medical image involved in this embodiment may be the first target CT image involved in the foregoing embodiment.
  • FIG. 8 is a flowchart of a method for evaluating diagnostic information based on medical images in an embodiment of the present invention. As shown in FIG. 8, the method can be implemented as the following steps S81-S84:
  • step S81 the region of interest in the medical image is partitioned to obtain at least N partitions, where N is a natural number greater than or equal to 2;
  • step S82 at least the volume proportions of the first sign and the second sign in each partition are calculated;
  • step S83 the scores corresponding to the volumes of the first sign and the second sign are obtained, and the score of each partition is obtained based on those scores;
  • step S84 the region of interest is evaluated according to the score of each partition.
  • the region of interest in the medical image is partitioned to obtain at least N partitions, where N is a natural number greater than or equal to 2;
  • in the fields of machine vision and image processing, the area that needs to be processed is outlined from the processed image in the form of a box, circle, ellipse, irregular polygon, etc., and is called the region of interest.
  • the region of interest of the medical image may be a certain human organ in the medical image.
  • the region of interest may be a human lung organ.
  • the outline of the region of interest is shown in FIG. 1B. There are two ways to partition the region of interest in the medical image:
  • the region of interest is the human lung
  • the N partitions are the right upper lobe, the right middle lobe, the right lower lobe, the left upper lobe, and the left lower lobe.
  • the region of interest is the human lung, and the N partitions are divided into six partitions from top to bottom for the left and right lungs of the human lung.
  • the disease to be detected is pneumonia
  • pneumonia appears in the form of patches and/or ground glass in CT images, that is, patch shadows and ground glass shadows can exist in the lung CT images at the same time.
  • the first sign may refer to the patch shadow area of the CT image of the human lung
  • the second sign may refer to the ground glass area of the CT image of the human lung.
  • the signs to be calculated differ by disease; that is, besides calculating the volume proportions of the first sign and the second sign, the solution disclosed in this application can also calculate the volume proportions of other signs, for example nodules, cavities, tree-in-bud signs, orbit signs, etc., which have been used in clinical diagnostic practice to reflect disease.
  • the region of interest is evaluated. Specifically, a corresponding score threshold can be set, and then based on the score threshold, the disease severity of the subject corresponding to the medical image is determined.
  • the beneficial effect of the present application is that the region of interest in the medical image can be partitioned and the score of each partition calculated, so as to quantify the severity of the disease corresponding to the region of interest; the disease severity of the region of interest can then be evaluated based on the score obtained by this quantification, realizing the evaluation of disease severity based on the diseased region of the medical image.
  • step S81 may be implemented as the following steps:
  • the region of interest is the human lung
  • the N partitions are the right upper lobe, the right middle lobe, the right lower lobe, the left upper lobe, and the left lower lobe.
  • the structure of the human lung can be divided into five regions, namely the right upper lobe, the right middle lobe, the right lower lobe, the left upper lobe, and the left lower lobe. Therefore, in this embodiment, partitioning can be based on this anatomical structure, that is, the N partitions are the right upper lobe, right middle lobe, right lower lobe, left upper lobe, and left lower lobe.
  • N partitions can also be determined based on lung segments.
  • Figure 9 is a schematic diagram of the distribution of human lung segments in medical images; as shown in Figure 9, the right upper lobe includes the apical segment, posterior segment, and anterior segment; the right middle lobe includes the lateral segment and the medial segment; the right lower lobe includes the medial basal segment, anterior basal segment, and lateral basal segment; the left upper lobe includes the apicoposterior segment, anterior segment, superior lingular segment, and inferior lingular segment; and the left lower lobe includes the anterior basal segment, lateral basal segment, and medial basal segment. When partitioning based on lung segments, each lung segment can be regarded as one partition.
  • this partitioning method is based on the lung segments that can be displayed in lung medical images; some lung segment areas that are not displayed, such as the dorsal segment, are not marked in Figure 9.
  • step S81 can also be implemented as the following steps:
  • the region of interest is the human lung, and the N partitions are divided into six partitions from top to bottom for the left and right lungs of the human lung.
  • the left lung and the right lung are divided into three parts respectively, thereby forming six partitions.
  • the lung image is partitioned by two cutting lines, so that it is divided into six partitions, which are upper right, middle right, lower right, upper left, middle left, and lower left.
  • the first sign is a patchy area
  • the second sign is a ground glass area
  • the first sign may refer to the patch area of the CT image of the human lung
  • the second sign may refer to the ground glass area of the CT image of the human lung.
  • step S83 can also be implemented as the following steps S111-S113:
  • step S111 a first product is obtained by multiplying the volume fraction of the first sign by the first parameter
  • step S112 the volume fraction of the second sign is multiplied by the second parameter to obtain a second product;
  • step S113 it is determined that the sum of the first product and the second product is the score of the partition corresponding to the first sign and the second sign.
  • the first product is obtained by multiplying the volume fraction of the first sign by the first parameter, and the second product is obtained by multiplying the volume fraction of the second sign by the second parameter; here, the volume fraction value of the first sign can be the value obtained by multiplying the volume percentage of the first sign by a specific coefficient. It can be understood that when the specific coefficient is 1, the volume fraction value of the first sign is the volume percentage of the first sign itself. In the same way, the volume fraction value of the second sign may be the value obtained by multiplying the volume percentage of the second sign by the specific coefficient.
  • the first parameter can be determined based on the relationship between the first sign and the probability of the target disease; the second parameter can be determined based on the relationship between the second sign and the probability of the target disease.
  • the score of the partition may be the first sign volume fraction value ⁇ 3+the second sign volume fraction value ⁇ 2.
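The partition score of steps S111-S113 is then a weighted sum. A direct sketch, using the example parameters 3 and 2 given in the text:

```python
def partition_score(first_sign_fraction, second_sign_fraction,
                    first_parameter=3.0, second_parameter=2.0):
    """Steps S111-S113: score of one partition as the sum of the two
    products. Default parameters 3 and 2 follow the example in the text
    (patch shadow weighted more heavily than ground-glass shadow)."""
    first_product = first_sign_fraction * first_parameter     # step S111
    second_product = second_sign_fraction * second_parameter  # step S112
    return first_product + second_product                     # step S113
```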
  • step S84 can be implemented as the following steps S121-S125:
  • step S121 the first and second thresholds are set, where the second threshold is greater than the first threshold
  • step S122 the score is compared with the first and second thresholds respectively;
  • step S123 when the score is less than the first threshold, it is determined that the subject corresponding to the medical image is mild pneumonia
  • step S124 when the score is greater than or equal to the first threshold and less than the second threshold, it is determined that the subject corresponding to the medical image is moderate pneumonia;
  • step S125 when the score is greater than or equal to the second threshold, it is determined that the subject corresponding to the medical image is severe pneumonia.
  • the first and second thresholds are set, where the second threshold is greater than the first threshold; the score is compared with the first and second thresholds respectively; when the score is less than the first threshold, the subject corresponding to the medical image is determined to have mild pneumonia; when the score is greater than or equal to the first threshold and less than the second threshold, the subject is determined to have moderate pneumonia; when the score is greater than or equal to the second threshold, the subject is determined to have severe pneumonia.
  • the beneficial effect of this embodiment is that, by setting threshold intervals on the score, the severity of the pneumonia the subject currently suffers from can be evaluated.
  • pneumonia is divided into mild, moderate, and severe pneumonia according to severity; when the score belongs to the first score interval, it is determined that the subject corresponding to the medical image has mild pneumonia; when the score belongs to the second score interval, moderate pneumonia; and when the score belongs to the third score interval, severe pneumonia.
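The threshold-based grading described above can be sketched as a small function. The concrete threshold values here are illustrative assumptions, since the text does not fix them:

```python
def classify_severity(score, first_threshold=10.0, second_threshold=20.0):
    """Map a partition score to a pneumonia severity label.

    The threshold values are illustrative placeholders; the second
    threshold must be greater than the first.
    """
    if second_threshold <= first_threshold:
        raise ValueError("second threshold must exceed the first")
    if score < first_threshold:
        return "mild pneumonia"
    if score < second_threshold:
        return "moderate pneumonia"
    return "severe pneumonia"
```

The score-interval variant described above behaves identically when the first interval is [0, first threshold), the second is [first threshold, second threshold), and the third is everything at or above the second threshold.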
  • FIG. 13 is a flowchart of a method for evaluating diagnostic information based on medical imaging in an embodiment of the present invention. As shown in FIG. 13, the method can be implemented as the following steps S131-S133:
  • step S131: a first lung medical image of the subject is acquired;
  • step S132: image parameters of the affected part in the first lung medical image are acquired;
  • step S133: the disease grade of the subject's lung corresponding to the first lung medical image information is output according to the image parameters of the affected part.
  • the interaction method of the embodiments of the present disclosure may be based on the necessary diagnostic information processing method, including determining the disease grade of the subject's lung corresponding to the first lung medical image information.
  • the first lung medical image involved in this embodiment may be the medical image involved in the foregoing embodiments.
  • this application also discloses a method for displaying diagnostic information based on medical images.
  • FIG. 14 is a flowchart of a method for displaying diagnostic information based on medical images in an embodiment of the present invention. As shown in FIG. 14, the method can be implemented as the following steps S141-S142:
  • step S141: the partitions of the medical image are displayed through a display interface;
  • step S142: in response to the calculation of the image parameters of the first sign and the second sign in each partition, diagnostic information is output on the display interface;
  • the diagnostic information includes at least one of the following:
  • the partitions of the lung medical image are displayed through the display interface.
  • the lung is divided into five zones.
  • at least one of the following items of diagnostic information is output on the display interface: the volume ratios of the first sign and the second sign, a score obtained from their volumes, and an evaluation result of the medical image obtained from the score.
  • this embodiment is a method for displaying diagnostic information based on medical images, disclosed in combination with the aforementioned method for evaluating diagnostic information based on medical images. It is therefore not difficult to understand that the medical image involved in this embodiment can be the medical image involved in the foregoing embodiments, and that the partitions involved in this embodiment can be determined by the partitioning method described in the embodiments corresponding to the aforementioned evaluation method.
  • the first sign involved in this embodiment may be a patchy region, and the second sign may be a ground-glass region.
  • the volume ratios of the first sign and the second sign, the score obtained from their volumes, and the evaluation result of the medical image obtained from the score can all follow the solutions described in the embodiments of the aforementioned method for evaluating diagnostic information based on medical images.
  • the first lung medical image in the foregoing embodiments may be the first target CT image involved in the foregoing embodiments.
  • from "the second lung medical image of the subject is acquired, the second lung medical image and the first lung medical image being lung medical images of the same subject at different times" and "the second data is the CT-value density data of the region of interest in a second target CT image acquired at a different time from the first target CT image", it is not difficult to see that the second lung medical image may also be the second target CT image.
  • the difference between the first lung medical image, the second lung medical image, the first target CT image, and the second target CT image lies only in their generation times, and the first lung medical image may also be the medical image. It is therefore not difficult to understand that the medical image may also be the first lung medical image, the second lung medical image, the first target CT image, or the second target CT image.
  • "multiple lung medical images are rendered in multiple colors, parts with different CT values and/or CT-value ranges in the rendered lung medical images corresponding to different colors; the multiple rendered lung medical images are output" corresponds to the embodiment of "a first graphic is formed based on first data, the first graphic being represented in a first color, the first data being the CT-value density data of the region of interest in the first target CT image; a second graphic is formed based on second data, the second graphic being represented in a second color; the overlapping part of the first graphic and the second graphic is determined and represented in a third color". For example, the CT value may include CT-value density data, and "parts with different CT values and/or CT-value ranges in the rendered lung medical images corresponding to different colors" corresponds to "the first graphic is represented in a first color, the second graphic in a second color, and the overlapping part in a third color".
  • FIG. 16 is a flowchart corresponding to a general embodiment in an embodiment of the present invention, and the method can be implemented as the following steps S161-S162:
  • step S161: a first lung medical image and a second lung medical image of the subject are acquired, where the first lung medical image and the second lung medical image are lung medical images acquired for the same subject at different times;
  • step S162 for the first lung medical image and/or the second lung medical image, at least one of the following target information is determined:
  • the disease grade of the subject's lungs, the probability distribution of CT values of lung medical images at different times, the disease evaluation result of the region of interest in the lung medical images, disease development trend information, and disease development speed.
  • the disease level of the lungs of the subject is determined in the following way:
  • the disease level of the lung of the subject corresponding to the first lung medical image information is determined according to the image parameters of the affected part.
  • the probability distribution of CT values of lung medical images at different times is determined in the following manner:
  • the first graph and the second graph are placed in the same coordinate system to form a comparison graph of the probability distribution of CT values in lung medical images at different times, wherein the first graph is represented by a first color, The second graphic is represented by a second color, and the overlapping part of the first graphic and the second graphic is represented by a third color.
  • the disease assessment result of the region of interest is determined according to the following method:
  • the evaluation of the region of interest according to the score of each partition includes:
  • a corresponding score threshold is set, and then based on the score threshold, the severity of the disease of the subject corresponding to the medical image is determined.
  • disease development trend information of the subject: different development trends of the disease (a first development trend or a second development trend) correspond to different diagnosis results; the disease development speed of the subject is calculated based on the generation times of the first lung medical image and the second lung medical image and the volume change trend of the affected part, the volumes of the affected part in the first and second lung medical images being obtained through the neural network described above.
  • judging only by the disease grade, it cannot be determined whether the disease is developing in a better or a worse direction, nor when, external influences excluded, the disease may be cured or progress to the next grade. A variety of information is therefore needed to make more accurate predictions about questions such as "is the disease developing in a better or worse direction" and "when can the disease be cured, or when will it progress to the next grade".
  • various types of target information can be obtained, such as the disease grade, the probability distribution of CT values of lung medical images at different times, the disease evaluation result of the region of interest in the lung medical images, and disease development trend information. These determine the conditions of the subject's lung disease from multiple angles, which is conducive to the diagnosis of various lung diseases (such as novel coronavirus pneumonia), provides richer material and diagnostic evidence, and helps physicians or diagnostic equipment diagnose diseases more accurately.
  • when there are multiple types of target information, the condition of the disease can be judged comprehensively from all of them.
  • the disease grade is divided into grade 1, grade 2, grade 3, and grade 4 according to severity, from mild to severe.
  • if the target information is the disease grade and the disease development trend, the current grade is determined to be 2 from the current medical images and, according to the development trend, the volume of the affected part is decreasing, it can be predicted that the disease will develop to grade 1 in the future;
  • if the target information is the disease grade, the disease development trend, and the disease development speed, the current grade is determined to be 2, and the volume of the affected part is decreasing according to the development trend, then the disease development speed can further be used to predict when the disease will return to grade 1, giving a more comprehensive judgment result.
  • the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, etc.) containing computer-usable program code.
  • These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device.
  • the device implements the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physiology (AREA)
  • Geometry (AREA)
  • Pulmonology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A diagnostic information processing method, apparatus, and storage medium based on medical images, for grading diseases on the basis of medical images. The method includes: acquiring a first lung medical image of a subject (S11); acquiring image parameters of the affected part in the first lung medical image (S12); and determining, according to the image parameters of the affected part, the disease grade of the subject's lung corresponding to the first lung medical image information (S13). With the above scheme, the image parameters of the affected part in the first lung medical image can be acquired, and the disease grade of the subject's lung corresponding to the first lung medical image information can then be determined from those image parameters, so that diseases can be graded on the basis of medical images.

Description

Diagnostic information processing method, apparatus, and storage medium based on medical images

Technical Field

The present invention relates to the field of Internet technology, and in particular to a diagnostic information processing method, apparatus, and storage medium based on medical images.

Background

At present, many lung diseases can be detected from CT images. However, CT-based detection currently yields only a positive diagnosis and cannot judge the severity of the disease.

Some diseases, however, require the severity to be determined quickly, so that treatment plans can be formulated promptly for each disease grade. For example, novel coronavirus pneumonia spreads rapidly, and early detection, early diagnosis, early isolation, and early treatment are required. For such diseases, the severity must be judged quickly so that the disease can be graded. How to provide a method for grading diseases on the basis of medical images is therefore a technical problem urgently to be solved.
Summary of the Invention

The present invention provides a diagnostic information processing method based on medical images, for grading diseases on the basis of medical images.

The present invention provides a diagnostic information processing method based on medical images, including:
acquiring a first lung medical image of a subject;
acquiring image parameters of the affected part in the first lung medical image;
determining, according to the image parameters of the affected part, the disease grade of the subject's lung corresponding to the first lung medical image information.

The beneficial effect of the present invention is that the image parameters of the affected part in the first lung medical image can be acquired, and the disease grade of the subject's lung corresponding to the first lung medical image information can then be determined from those image parameters, so that diseases can be graded on the basis of medical images.

In one embodiment, acquiring the image parameters of the affected part in the first lung medical image includes:
acquiring the distribution interval of normal CT values in the lung and the distribution interval of CT values of the affected part;
inputting at least one first lung medical image into a neural network to determine the volume of the affected part in the first lung medical image.

In one embodiment, the neural network includes:
a first detection model for detecting candidate patch shadows, a slicing model, a second detection model for detecting patch-shadow intervals, and a volume calculation model for calculating the volume of the affected part;
inputting at least one first lung medical image into the neural network to determine the volume of the affected part in the first lung medical image includes:
passing the at least one first lung medical image through N consecutive convolutional feature-extraction modules of the first detection model, so that the N consecutive convolutional feature-extraction modules obtain image features of patch shadows in the first lung medical image, where N is a positive integer;
inputting the image features of the affected part in the first lung medical image into the fully connected layer of the first detection model, so that the fully connected layer outputs candidate patch shadows based on the image features;
passing the candidate patch shadows through the slicing model, so that the slicing model slices the candidate patch shadows multiple times in different spatial directions, obtaining multiple section images of the candidate patch shadows in multiple spatial directions;
passing multiple consecutive section images through M consecutive convolutional feature-extraction modules of the second detection model, so that the M consecutive convolutional feature-extraction modules obtain image features of the section images, where M is a positive integer;
inputting the image features of the section images into the fully connected layer of the second detection model, so that the fully connected layer outputs patch-shadow information based on the image features;
passing the patch-shadow information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.

The beneficial effect of this embodiment is that a neural network formed by connecting multiple models can perform patch-shadow detection and volume calculation at the same time, simplifying the method of determining the volume of the affected part.

In one embodiment, determining, according to the image parameters of the affected part, the disease grade of the subject's lung corresponding to the first lung medical image information includes:
comparing the volume of the affected part with a target relation table, where the target relation table stores the correspondence between affected-part volume and disease grade;
determining the disease grade of the subject's lung according to the comparison result.

In one embodiment, determining, according to the image parameters of the affected part, the disease grade of the subject's lung corresponding to the first lung medical image information includes:
calculating the volume ratio of the affected part in the lung;
inputting the volume of the affected part and its volume ratio in the lung into a disease-grade calculation model, to obtain the disease grade of the subject's lung calculated comprehensively by the model from the affected-part volume and its volume ratio in the lung.

In one embodiment, the method further includes:
acquiring a second lung medical image of the subject;
acquiring the volume of the affected part in the second lung medical image;
comparing the volume of the affected part in the second lung medical image with the volume of the affected part in the first lung medical image to determine the volume change trend of the affected part;
determining development-trend information of the subject's lung disease according to the volume change trend of the affected part.

The beneficial effect of this embodiment is that the volume change trend of the affected part can be judged from different lung medical images of the same subject, so that the development-trend information of the subject's lung disease is determined automatically from that trend.

In one embodiment, determining the development trend of the subject's lung disease according to the volume change trend of the affected part includes:
when the volume of the affected part conforms to a first development trend, determining a first diagnosis result for the subject;
when the volume of the affected part conforms to a second development trend, determining a second diagnosis result for the subject.

In one embodiment, the method further includes:
acquiring the generation times of the first lung medical image and the second lung medical image;
calculating the disease development speed of the subject according to the generation times and the volume change trend of the affected part.

In one embodiment, the method further includes:
rendering the first lung medical image in a single color to generate a third lung medical image, where the rendered color depth is positively correlated with the CT value; and/or
rendering the first lung medical image in multiple colors to generate a fourth lung medical image, where different CT values are rendered in different types of color;
outputting the first lung medical image, the third lung medical image, and/or the fourth lung medical image.

In one embodiment, the method further includes:
rendering multiple lung medical images in multiple colors, where parts with different CT values and/or CT-value ranges in the rendered lung medical images correspond to different colors;
outputting the multiple rendered lung medical images.
The present application further provides a method for displaying a diagnostic information interface, including:
forming a first graphic based on first data, where the first graphic is represented in a first color, and the first data is CT-value density data of a region of interest in a first target CT image;
forming a second graphic based on second data, where the second graphic is represented in a second color;
determining the overlapping part of the first graphic and the second graphic, and representing the overlapping part in a third color, where the first graphic and the second graphic are used to characterize the severity of the disease in the first target CT image.

In one embodiment, forming the first graphic based on the first data includes:
determining the first data in response to acquiring the CT-value density data of the region of interest in the first target CT image.

In one embodiment, the second data is reference data of the region of interest in a CT image.

In one embodiment, the second data is CT-value density data of the region of interest in a second target CT image acquired at a different time from the first target CT image.

In one embodiment, the region of interest is included in at least one of the following regions:
the human lung organ, the left lung, the right lung, the upper lobe of the right lung, the middle lobe of the right lung, the lower lobe of the right lung, the upper lobe of the left lung, and the lower lobe of the left lung.

This embodiment further provides a diagnostic information interaction method based on medical images, including:
acquiring a first lung medical image of a subject;
acquiring image parameters of the affected part in the first lung medical image;
outputting, according to the image parameters of the affected part, the disease grade of the subject's lung corresponding to the first lung medical image information.

The present application further provides a diagnostic information evaluation method based on medical images, including:
partitioning a region of interest in a medical image to obtain at least N partitions, where N is a natural number greater than or equal to 2;
calculating at least the volume of a first sign and the volume ratio of a second sign in each partition;
obtaining the scores corresponding to the volume ratios of the first sign and the second sign, and obtaining the score of each partition based on those scores;
evaluating the region of interest according to the score of each partition;
where evaluating the region of interest according to the score of each partition includes:
setting a corresponding score threshold, and then determining, based on the score threshold, the disease severity of the subject corresponding to the medical image.

In one embodiment, partitioning the region of interest in the medical image includes:
obtaining at least N partitions of the region of interest, the region of interest being the human lung, and the N partitions being the upper lobe of the right lung, the middle lobe of the right lung, the lower lobe of the right lung, the upper lobe of the left lung, and the lower lobe of the left lung.

In one embodiment, partitioning the region of interest in the medical image includes:
obtaining at least N partitions of the region of interest, the region of interest being the human lung, and the N partitions being the six partitions obtained by dividing each of the left and right lungs into three parts from top to bottom.

In one embodiment, the first sign is a patchy region, and the second sign is a ground-glass region.

In one embodiment, obtaining the scores corresponding to the volume ratios of the first sign and the second sign, and obtaining the score of each partition based on those scores, includes:
multiplying the volume-ratio score of the first sign by a first parameter to obtain a first product;
multiplying the volume-ratio score of the second sign by a second parameter to obtain a second product;
determining the sum of the first product and the second product as the score of the partition corresponding to the first sign and the second sign.

In one embodiment, evaluating the region of interest according to the score of each partition includes:
setting a first threshold and a second threshold, where the second threshold is greater than the first threshold;
comparing the score with the first and second thresholds respectively;
determining that the subject corresponding to the medical image has mild pneumonia when the score is less than the first threshold;
determining that the subject corresponding to the medical image has moderate pneumonia when the score is greater than or equal to the first threshold and less than the second threshold;
determining that the subject corresponding to the medical image has severe pneumonia when the score is greater than or equal to the second threshold.

The present application further provides a diagnostic information evaluation method based on medical images, including:
acquiring a first lung medical image of a subject;
acquiring image parameters of the affected part in the first lung medical image;
outputting, according to the image parameters of the affected part, the disease grade of the subject's lung corresponding to the first lung medical image information;
where outputting, according to the image parameters of the affected part, the disease grade of the subject's lung corresponding to the first lung medical image information includes:
comparing the volume of the affected part with a target relation table, where the target relation table stores the correspondence between affected-part volume and disease grade;
determining and outputting the disease grade of the subject's lung according to the comparison result;
or
calculating the volume ratio of the affected part in the lung;
inputting the volume of the affected part and its volume ratio in the lung into a disease-grade calculation model, to obtain the disease grade of the subject's lung calculated comprehensively by the model from the affected-part volume and its volume ratio in the lung.

The present application further provides a diagnostic information display method based on medical images, including:
displaying partitions of the medical image through a display interface;
outputting diagnostic information on the display interface in response to the calculation of the image parameters of the first sign and the second sign in each partition;
the diagnostic information including at least one of the following:
the volume ratios of the first sign and the second sign, a score obtained from the volumes of the first sign and the second sign, and an evaluation result of the medical image obtained from the score.
The present invention further provides a diagnostic information processing apparatus based on medical images, including:
a first acquisition module, configured to acquire a first lung medical image of a subject;
a second acquisition module, configured to acquire image parameters of the affected part in the first lung medical image;
a determination module, configured to determine, according to the image parameters of the affected part, the disease grade of the subject's lung corresponding to the first lung medical image information.

In one embodiment, the second acquisition module includes:
an input sub-module, configured to input at least one first lung medical image into a neural network to determine the volume of the affected part in the first lung medical image.

In one embodiment, the neural network includes:
a first detection model for detecting candidate patch shadows, a slicing model, a second detection model for detecting patch-shadow intervals, and a volume calculation model for calculating the volume of the affected part;
the input sub-module is configured to:
pass the at least one first lung medical image through N consecutive convolutional feature-extraction modules of the first detection model, so that the N consecutive convolutional feature-extraction modules obtain image features of patch shadows in the first lung medical image, where N is a positive integer;
input the image features of the affected part in the first lung medical image into the fully connected layer of the first detection model, so that the fully connected layer outputs candidate patch shadows based on the image features;
pass the candidate patch shadows through the slicing model, so that the slicing model slices the candidate patch shadows multiple times in different spatial directions, obtaining multiple section images of the candidate patch shadows in multiple spatial directions;
pass multiple consecutive section images through M consecutive convolutional feature-extraction modules of the second detection model, so that the M consecutive convolutional feature-extraction modules obtain image features of the section images, where M is a positive integer;
input the image features of the section images into the fully connected layer of the second detection model, so that the fully connected layer outputs patch-shadow information based on the image features;
pass the patch-shadow information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.

In one embodiment, the determination module includes:
a comparison sub-module, configured to compare the volume of the affected part with a target relation table, where the target relation table stores the correspondence between affected-part volume and disease grade;
a first determination sub-module, configured to determine the disease grade of the subject's lung according to the comparison result.

In one embodiment, the determination module includes:
a calculation sub-module, configured to calculate the volume ratio of the affected part in the lung;
an input sub-module, configured to input the volume of the affected part and its volume ratio in the lung into a disease-grade calculation model, to obtain the disease grade of the subject's lung calculated comprehensively by the model from the affected-part volume and its volume ratio in the lung.

In one embodiment, the apparatus further includes:
a third acquisition module, configured to acquire a second lung medical image of the subject;
a fourth acquisition module, configured to acquire the volume of the affected part in the second lung medical image;
a comparison module, configured to compare the volume of the affected part in the second lung medical image with the volume of the affected part in the first lung medical image to determine the volume change trend of the affected part;
a change-trend determination module, configured to determine development-trend information of the subject's lung disease according to the volume change trend of the affected part.

In one embodiment, the change-trend determination module includes:
a second determination sub-module, configured to determine a first diagnosis result for the subject when the volume of the affected part conforms to a first development trend;
a third determination sub-module, configured to determine a second diagnosis result for the subject when the volume of the affected part conforms to a second development trend.

In one embodiment, the apparatus further includes:
a fifth acquisition module, configured to acquire the generation times of the first lung medical image and the second lung medical image;
a calculation module, configured to calculate the disease development speed of the subject according to the generation times and the volume change trend of the affected part.

In one embodiment, the apparatus further includes:
a first rendering module, configured to render the first lung medical image in a single color to generate a third lung medical image, where the rendered color depth is positively correlated with the CT value;
a second rendering module, configured to render the first lung medical image in multiple colors to generate a fourth lung medical image, where different CT values are rendered in different types of color;
a first output module, configured to output the first lung medical image, the third lung medical image, and/or the fourth lung medical image.

In one embodiment, the apparatus further includes:
a third rendering module, configured to render multiple lung medical images in multiple colors, where parts with different CT values and/or CT-value ranges in the rendered lung medical images correspond to different colors;
a second output module, configured to output the multiple rendered lung medical images.

The present application further provides a non-transitory readable storage medium; when the instructions in the storage medium are executed by a processor in a device, the device is enabled to perform the method of any of the above embodiments.

Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practising the invention. The objects and other advantages of the invention can be realized and obtained through the structures particularly pointed out in the written description, the claims, and the drawings.

The technical solution of the present invention is described in further detail below through the drawings and embodiments.
Brief Description of the Drawings

The drawings are provided for a further understanding of the invention and constitute a part of the specification; together with the embodiments of the invention they serve to explain the invention, and they do not limit it. In the drawings:
FIG. 1A is a flowchart of a diagnostic information processing method based on medical images in an embodiment of the present invention;
FIG. 1B is a schematic diagram of marking the lung region in a medical image with a dividing line;
FIG. 2A is a flowchart of a diagnostic information processing method based on medical images in another embodiment of the present invention;
FIG. 2B is a schematic interface diagram of a system executing the solution provided by the present invention;
FIG. 3A is a flowchart of a diagnostic information processing method based on medical images in yet another embodiment of the present invention;
FIG. 3B is a schematic diagram of evaluating the development trend of novel coronavirus pneumonia over different disease courses;
FIG. 3C is a comparison of a first lung medical image and lung medical images rendered in different ways;
FIG. 3D is a schematic comparison of the CT-value distribution of a normal lung and of a lung with a specific disease;
FIG. 4 is a block diagram of a diagnostic information processing apparatus based on medical images in an embodiment of the present invention;
FIG. 5 is a flowchart of a method for displaying a diagnostic information interface in an embodiment of the present invention;
FIG. 6 is a schematic comparison diagram containing a first graphic and a second graphic;
FIG. 7 is a flowchart of a diagnostic information interaction method based on medical images in an embodiment of the present invention;
FIG. 8 is a flowchart of a diagnostic information evaluation method based on medical images in an embodiment of the present invention;
FIG. 9 is a schematic diagram of the distribution of lung segments in a medical image;
FIG. 10 is a schematic diagram of dividing the human lung into six partitions with dividing lines;
FIG. 11 is a flowchart of a diagnostic information evaluation method based on medical images in an embodiment of the present invention;
FIG. 12 is a flowchart of a diagnostic information evaluation method based on medical images in an embodiment of the present invention;
FIG. 13 is a flowchart of a diagnostic information evaluation method based on medical images in an embodiment of the present invention;
FIG. 14 is a flowchart of a diagnostic information display method based on medical images in an embodiment of the present invention;
FIG. 15 is a novel coronavirus pneumonia evaluation interface;
FIG. 16 is a flowchart corresponding to a general embodiment of the present invention.
Detailed Description

Various aspects and features of the present application are described herein with reference to the drawings.

It should be understood that various modifications may be made to the embodiments applied for herein. Accordingly, the above description should not be regarded as limiting, but merely as exemplifying embodiments. Those skilled in the art will envisage other modifications within the scope and spirit of the present application.

The drawings, which are included in and constitute a part of the specification, illustrate embodiments of the present application and, together with the general description of the present application given above and the detailed description of the embodiments given below, serve to explain the principles of the present application.

These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the drawings.

It should also be understood that, although the present application has been described with reference to some specific examples, those skilled in the art can certainly realize many other equivalent forms of the present application, which have the features set forth in the claims and therefore all fall within the scope of protection defined thereby.

The above and other aspects, features, and advantages of the present application will become more apparent in view of the following detailed description when taken in conjunction with the drawings.

Specific embodiments of the present application are described hereinafter with reference to the drawings; however, it should be understood that the embodiments applied for are merely examples of the present application, which may be implemented in various ways. Well-known and/or repeated functions and structures are not described in detail, to avoid obscuring the present application with unnecessary or redundant detail. Therefore, the specific structural and functional details applied for herein are not intended to be limiting, but merely serve as a basis for the claims and a representative basis for teaching those skilled in the art to variously employ the present application in virtually any suitably detailed structure.

This specification may use the phrases "in one embodiment", "in another embodiment", "in yet another embodiment", or "in other embodiments", each of which may refer to one or more of the same or different embodiments according to the present application.
FIG. 1A is a flowchart of a diagnostic information processing method based on medical images in an embodiment of the present invention. As shown in FIG. 1A, the method can be implemented as the following steps S11-S13:

In step S11, a first lung medical image of the subject is acquired;

In step S12, image parameters of the affected part in the first lung medical image are acquired;

In step S13, the disease grade of the subject's lung corresponding to the first lung medical image information is determined according to the image parameters of the affected part.

In this embodiment, a first lung medical image of the subject is acquired. The first lung medical image may be a CT image of the subject's chest in which the lung region has already been marked, for example by manual annotation. Of course, before step S11 there may also be a lung-region segmentation step: the chest medical image is input into a pre-trained neural network for segmenting the lung region, so that the network identifies and marks the lung region in the chest medical image. Specifically, after the lung is identified by the neural network, it is marked with a dividing line. As shown in FIG. 1B, the lung is marked with a black dividing line; it can be understood that the dividing line may also be another color. Through this segmentation step, the lung region in the chest image is marked, yielding the first lung medical image; the segmentation step also allows the user to verify the accuracy of the segmentation result.

The CT value of the affected-part region in the medical image differs from that of the normal lung region. In the medical field, "involvement" refers to a functional or organic change of an organ or tissue caused by disease, and the "affected part" is the part in which such a change occurs. In clinical practice, a chest CT image can display and characterize the corresponding lesion site through the image of the affected part, such as a lung infected by a coronavirus, e.g. the novel coronavirus, the 2019-nCoV virus, and so on. From the detailed description below it should be appreciated that the present application can be refined down to lesion-information processing, lesion-image display, and output of corresponding diagnostic information for all lobes of the lung.

Image parameters of the affected part in the first lung medical image are acquired. Specifically, at least one first lung medical image can be input into a neural network to determine the image parameters of the affected part in the first lung medical image; typically, the image parameters include the volume of the affected part.

The disease grade of the subject's lung corresponding to the first lung medical image information is determined according to the image parameters of the affected part. Specifically, the grade can be determined in the following ways:

Way one

A relation table is created in advance, containing the correspondence between affected-part volume and disease grade. The volume of the affected part can be compared with the target relation table, where the target relation table stores the correspondence between affected-part volume and disease grade, and the disease grade of the subject's lung is determined according to the comparison result.

Way two

The volume ratio of the affected part in the lung is calculated; the volume of the affected part and its volume ratio in the lung are input into a disease-grade calculation model, to obtain the disease grade of the subject's lung calculated comprehensively by the model from the affected-part volume and its volume ratio in the lung.
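Way one above amounts to a lookup in a volume-to-grade table. A minimal sketch, in which the table entries are hypothetical values rather than clinically validated boundaries:

```python
# Hypothetical mapping: lower bound of affected-part volume (mL) -> grade.
GRADE_TABLE = [
    (0.0, 1),
    (50.0, 2),
    (150.0, 3),
    (300.0, 4),
]

def grade_from_volume(volume_ml):
    """Return the highest disease grade whose lower volume bound is reached."""
    grade = GRADE_TABLE[0][1]
    for lower_bound, g in GRADE_TABLE:
        if volume_ml >= lower_bound:
            grade = g
    return grade
```

A real deployment would populate the table from the correspondence stored in the target relation table rather than from these placeholder numbers.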
The beneficial effect of the present invention is that the image parameters of the affected part in the first lung medical image can be acquired, and the disease grade of the subject's lung corresponding to the first lung medical image information can then be determined from those image parameters, so that diseases can be graded on the basis of medical images.
In one embodiment, step S12 above can be implemented as the following step:

At least one first lung medical image is input into a neural network to determine the volume of the affected part in the first lung medical image.

In one embodiment, the neural network includes:

a first detection model for detecting candidate patch shadows, a slicing model, a second detection model for detecting patch-shadow intervals, and a volume calculation model for calculating the volume of the affected part;

the above step of inputting the normal in-lung CT-value distribution interval, the CT-value distribution interval of the affected part, and at least one first lung medical image into the neural network to determine the volume of the affected part in the first lung medical image can be implemented as the following steps A1-A6:

In step A1, the at least one first lung medical image is passed through N consecutive convolutional feature-extraction modules of the first detection model, so that the N consecutive convolutional feature-extraction modules obtain image features of patch shadows in the first lung medical image, where N is a positive integer;

In step A2, the image features of the affected part in the first lung medical image are input into the fully connected layer of the first detection model, so that the fully connected layer outputs candidate patch shadows based on the image features;

In step A3, the candidate patch shadows are passed through the slicing model, so that the slicing model slices the candidate patch shadows multiple times in different spatial directions, obtaining multiple section images of the candidate patch shadows in multiple spatial directions;

In step A4, multiple consecutive section images are passed through M consecutive convolutional feature-extraction modules of the second detection model, so that the M consecutive convolutional feature-extraction modules obtain image features of the section images, where M is a positive integer;

In step A5, the image features of the section images are input into the fully connected layer of the second detection model, so that the fully connected layer outputs patch-shadow information based on the image features;

In step A6, the patch-shadow information is passed through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.

In this embodiment, the neural network is formed by connecting multiple models, including a first detection model for detecting candidate patch shadows, a slicing model, a second detection model for detecting patch-shadow intervals, and a volume calculation model for calculating the volume of the affected part.

The first detection model contains an input layer, N consecutive convolutional feature-extraction modules, a fully connected layer, and an output layer. A convolutional feature-extraction module includes multiple convolution blocks, each containing a convolution layer, a BN layer, and an activation layer.

The second detection model has the same structure as the first detection model, which is not repeated here.

When the at least one first lung medical image is passed through the N consecutive convolutional feature-extraction modules of the first detection model, for any three consecutive modules among the N, the image features output by the first and second modules are summed and used as the input of the third module. Similarly, when multiple consecutive section images are passed through the M consecutive convolutional feature-extraction modules of the second detection model, for any three consecutive modules among the M, the image features output by the first and second modules are summed and used as the input of the third module.

In addition, the number M of convolutional feature-extraction modules in the second detection model may or may not be equal to the number N of convolutional feature-extraction modules in the first detection model.

The beneficial effect of this embodiment is that a neural network formed by connecting multiple models can perform patch-shadow detection and volume calculation at the same time, simplifying the method of determining the volume of the affected part.
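The cascade of first detection, slicing, second detection, and volume calculation can be illustrated with stand-in functions. This is only a structural sketch of the data flow, not the patent's actual convolutional models; all field names here are invented for the example:

```python
def detect_candidates(volume_image):
    # Stand-in for the first detection model: pick out candidate patch regions.
    return [region for region in volume_image if region["is_candidate"]]

def slice_directions(region, n_directions=3):
    # Stand-in for the slicing model: section images in several spatial directions.
    return [{"region": region, "direction": d} for d in range(n_directions)]

def confirm_patch(sections):
    # Stand-in for the second detection model: confirm the candidate from its sections.
    return all(s["region"]["is_candidate"] for s in sections)

def affected_volume(volume_image):
    # Stand-in for the volume calculation model: sum confirmed patch volumes.
    total = 0.0
    for region in detect_candidates(volume_image):
        if confirm_patch(slice_directions(region)):
            total += region["volume_ml"]
    return total
```

In the real system each stage is a trained model; the point of the sketch is only that the four stages compose into a single volume estimate.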
In one embodiment, as shown in FIG. 2A, step S13 above can be implemented as the following steps S21-S22:

In step S21, the volume of the affected part is compared with a target relation table, where the target relation table stores the correspondence between affected-part volume and disease grade;

In step S22, the disease grade of the subject's lung is determined according to the comparison result.

In this embodiment, a relation table containing the correspondence between affected-part volume and disease grade is created in advance. The volume of the affected part can be compared with the target relation table, and the disease grade of the subject's lung is determined according to the comparison result.

In one embodiment, step S13 above can be implemented as the following steps B1-B2:

In step B1, the volume ratio of the affected part in the lung is calculated;

In step B2, the volume of the affected part and its volume ratio in the lung are input into a disease-grade calculation model, to obtain the disease grade of the subject's lung calculated comprehensively by the model from the affected-part volume and its volume ratio in the lung.

In this embodiment, the volume ratio of the affected part in the lung is calculated; the volume of the affected part and its volume ratio in the lung are input into the disease-grade calculation model, to obtain the disease grade of the subject's lung calculated comprehensively by the model from the affected-part volume and its volume ratio in the lung.

In this embodiment, the volume ratio of the affected part in the lung can also be calculated by a pre-trained volume-ratio calculation model: after the medical image is input into the model, it automatically gives the volume ratio of each CT interval. FIG. 2B is a schematic interface diagram of a system executing the solution provided by the present invention; as shown in FIG. 2B, the volume of the affected region calculated by the volume-ratio calculation model is displayed in the both-lung volume analysis column of the interface.
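Computing the volume ratio of the affected part in the lung from segmentation masks reduces to a voxel count. A minimal sketch, where the voxel counts and the voxel size are assumed inputs:

```python
def affected_volume_ratio(affected_voxels, lung_voxels, voxel_volume_mm3=1.0):
    """Fraction of the lung volume occupied by the affected part.

    The voxel volume cancels out of the ratio, but is kept to show where
    physical units would enter if absolute volumes were needed.
    """
    if lung_voxels == 0:
        raise ValueError("lung mask is empty")
    return (affected_voxels * voxel_volume_mm3) / (lung_voxels * voxel_volume_mm3)
```

The same counting can be done per CT interval to reproduce the per-interval ratios shown in the interface of FIG. 2B.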
In one embodiment, as shown in FIG. 3A, the method can further be implemented as the following steps S31-S34:

In step S31, a second lung medical image of the subject is acquired;

In step S32, the volume of the affected part in the second lung medical image is acquired;

In step S33, the volume of the affected part in the second lung medical image is compared with the volume of the affected part in the first lung medical image to determine the volume change trend of the affected part;

In step S34, development-trend information of the subject's lung disease is determined according to the volume change trend of the affected part.

In this embodiment, a second lung medical image of the subject is acquired; the second lung medical image and the first lung medical image of the foregoing embodiments are lung medical images of the same subject at different periods. The volume of the affected part in the second lung medical image is compared with that in the first lung medical image to determine the volume change trend of the affected part, and the development-trend information of the subject's lung disease is determined according to that trend.

For example, the subject's condition may worsen or improve over time, so the development-trend information of the subject's lung disease can be determined from lung medical images of different periods. Specifically, the subject's ID is first obtained, and the second lung medical image of the subject is obtained through that ID. The generation time of the second lung medical image may be earlier or later than that of the first lung medical image, as long as the two generation times differ. In addition, considering that too small a time span makes changes in the condition inconspicuous, the interval between the generation times of the two images should be no less than a specific value, such as 48 hours. FIG. 3B is a schematic evaluation diagram of novel coronavirus pneumonia containing the comparison result of the first and second lung medical images. As shown in FIG. 3B, after acquiring the second lung medical image of the subject, the volume of the affected part in the second image is acquired and compared with that in the first image to determine the volume change trend, from which the development-trend information of the subject's lung disease is determined. For example, in FIG. 3B, the novel-pneumonia evaluation interface on the right shows that the volume of the affected part of the right lung dropped from 20% to 10% and that of the left lung from 30% to 20%; that is, the affected-part volume decreases over time, so it is determined that the subject's lung disease is improving. It can be understood that if the affected-part volume increases over time, it is determined that the disease is worsening. Further, the volume change trend can be represented more intuitively, for example with arrows, or with arrows combined with concrete values; of course, other representations are also possible and are not enumerated here.

The beneficial effect of this embodiment is that the volume change trend of the affected part can be judged from different lung medical images of the same subject, so that the development-trend information of the subject's lung disease is determined automatically from that trend.

In one embodiment, step S34 above can be implemented as the following steps C1-C2:

In step C1, when the volume of the affected part conforms to a first development trend, a first diagnosis result for the subject is determined;

In step C2, when the volume of the affected part conforms to a second development trend, a second diagnosis result for the subject is determined.

When the volume of the affected part conforms to the first development trend, the first diagnosis result for the subject is determined. For example, suppose the first lung medical image was generated later than the second; then, when the affected-part volume in the first image is smaller than in the second, the volume of the affected part has decreased. Suppose the first image was generated earlier than the second; then, when the affected-part volume in the first image is larger than in the second, the volume has likewise decreased. When the volume of the affected part decreases, the first diagnosis result is determined, i.e. the subject's condition is improving.

When the volume of the affected part conforms to the second development trend, the second diagnosis result for the subject is determined. Suppose the first image was generated later than the second; then, when the affected-part volume in the first image is larger than in the second, the volume has increased. Suppose the first image was generated earlier than the second; then, when the affected-part volume in the first image is smaller than in the second, the volume has likewise increased. When the volume of the affected part increases, the second diagnosis result is determined, i.e. the subject's condition is worsening.

In one embodiment, the method can further be implemented as the following steps D1-D2:

In step D1, the generation times of the first lung medical image and the second lung medical image are acquired;

In step D2, the disease development speed of the subject is calculated according to the generation times and the volume change trend of the affected part.

In this embodiment, the generation times of the first and second lung medical images can be acquired and the interval between them determined; the volume change of the affected part per unit time is then calculated from this interval and the magnitude of the volume change, yielding the subject's disease development speed.
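The development speed described in step D2 is simply the affected-part volume change divided by the time between the two images. A minimal sketch; the unit (mL per day) is an assumption for illustration:

```python
from datetime import datetime

def development_speed(volume1_ml, time1, volume2_ml, time2):
    """Affected-part volume change per day between two lung images.

    Positive values mean the affected part is growing (worsening);
    negative values mean it is shrinking (improving).
    """
    days = (time2 - time1).total_seconds() / 86400.0
    if days <= 0:
        raise ValueError("the second image must be generated after the first")
    return (volume2_ml - volume1_ml) / days
```

For example, a drop from 120 mL to 90 mL over three days gives a speed of -10 mL/day, i.e. an improving condition.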
In one embodiment, the method can further be implemented as the following step E1 and/or steps E2-E3:

In step E1, the first lung medical image is rendered in a single color to generate a third lung medical image, where the rendered color depth is positively correlated with the CT value;

In step E2, the first lung medical image is rendered in multiple colors to generate a fourth lung medical image, where different CT values are rendered in different types of color;

In step E3, the first lung medical image, the third lung medical image, and/or the fourth lung medical image are output.

In this embodiment, to verify the accuracy of the CT-value interval segmentation, the lesion volume can be displayed according to the CT-value interval selected by the user and shown vividly in "rendered" form. Specifically, the first lung medical image is rendered in a single color to generate a third lung medical image, where the rendered color depth is positively correlated with the CT value; the first lung medical image is then rendered in multiple colors to generate a fourth lung medical image, where different CT values are rendered in different types of color; the first, third, and fourth lung medical images are then output. The output may take the form shown in FIG. 3C: on the left is the subject's first lung medical image, in this example a chest CT image containing the lung; the section view in the middle renders the first lung medical image in one color, with different CT values at different depths, e.g. the higher the CT value, the darker the color (it can of course also be set so that the higher the CT value, the lighter the color); the section view on the right is marked with varying colors. For example, multiple CT-value intervals can be set, with regions falling in a low-CT-value interval rendered in blue and regions falling in a high-CT-value interval rendered in red.

It can be understood that, in step E3, only the first and third lung medical images may be output, or only the first and fourth, or the first, third, and fourth lung medical images may all be output at the same time.

In one embodiment, the method can further be implemented as the following steps F1-F2:

In step F1, multiple lung medical images are rendered in multiple colors, where parts with different CT values and/or CT-value ranges in the rendered lung medical images correspond to different colors;

In step F2, the multiple rendered lung medical images are output.

In this embodiment, lung medical images of the same patient over different disease courses can be rendered to enhance the comparison effect. For example, the lung medical images of the same subject on three consecutive days are rendered in multiple colors, with parts of different CT values and/or CT-value ranges corresponding to different colors, and the multiple rendered images are then output. The mainly black-and-white CT images are thus rendered as color images, enhancing the image effect and yielding rendered lung medical images of the same subject over different disease courses, which facilitates comparison of the condition across courses.
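The multi-color rendering by CT-value interval can be sketched as a simple banded color map. The interval boundaries and color choices here are illustrative assumptions, not values fixed by the text:

```python
# Illustrative Hounsfield-unit bands and their rendering colors.
CT_BANDS = [
    (-1000.0, -500.0, "blue"),
    (-500.0, 0.0, "green"),
    (0.0, 1000.0, "red"),
]

def render_ct(values, bands=CT_BANDS):
    """Map each CT value to the color of the interval it falls in."""
    colors = []
    for v in values:
        color = "black"  # values outside every band are left unrendered
        for lower, upper, band_color in bands:
            if lower <= v < upper:
                color = band_color
                break
        colors.append(color)
    return colors
```

Applying the same bands to images from different days gives directly comparable colorized views, which is the comparison effect described above.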
In addition, it should be noted that, for different diseases, a comparison diagram of the CT-value distribution of a normal lung against that of a lung with the specific disease can be given. For example, for novel coronavirus pneumonia, by analysing a large number of chest CT images of healthy people, in-lung CT-value data of the normal population can be given as a baseline reference and plotted as a histogram, providing the joint intersection of the CT-value distributions of healthy people and patients, the Hellinger coefficient, and the like for physicians to compare; a specific comparison diagram is shown in FIG. 3D, in which the CT histogram with large variation corresponds to novel coronavirus pneumonia. From this diagram, the current severity of novel coronavirus pneumonia can be evaluated accurately and quickly.
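The Hellinger coefficient mentioned above (also known as the Bhattacharyya coefficient) measures the affinity of two discrete CT-value distributions. A minimal sketch over normalized histogram bins:

```python
import math

def hellinger_coefficient(p, q):
    """Affinity of two discrete probability distributions.

    Both inputs are histogram bins that each sum to 1; the result is 1.0
    for identical distributions and 0.0 for non-overlapping ones.
    """
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
```

When a distance is needed instead, the Hellinger distance can be obtained as the square root of one minus this coefficient.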
FIG. 4 is a block diagram of a diagnostic information processing apparatus, display apparatus, or interaction apparatus based on medical images in an embodiment of the present invention. As shown in FIG. 4, the apparatus includes:

a first acquisition module 41, configured to acquire a first lung medical image of a subject;

a second acquisition module 42, configured to acquire image parameters of the affected part in the first lung medical image;

a determination module 43, configured to determine, according to the image parameters of the affected part, the disease grade of the subject's lung corresponding to the first lung medical image information.

In one embodiment, the second acquisition module includes:
an input sub-module, configured to input at least one first lung medical image into a neural network to determine the volume of the affected part in the first lung medical image.

In one embodiment, the neural network includes:
a first detection model for detecting candidate patch shadows, a slicing model, a second detection model for detecting patch-shadow intervals, and a volume calculation model for calculating the volume of the affected part;
the input sub-module is configured to:
pass the at least one first lung medical image through N consecutive convolutional feature-extraction modules of the first detection model, so that the N consecutive convolutional feature-extraction modules obtain image features of patch shadows in the first lung medical image, where N is a positive integer;
input the image features of the affected part in the first lung medical image into the fully connected layer of the first detection model, so that the fully connected layer outputs candidate patch shadows based on the image features;
pass the candidate patch shadows through the slicing model, so that the slicing model slices the candidate patch shadows multiple times in different spatial directions, obtaining multiple section images of the candidate patch shadows in multiple spatial directions;
pass multiple consecutive section images through M consecutive convolutional feature-extraction modules of the second detection model, so that the M consecutive convolutional feature-extraction modules obtain image features of the section images, where M is a positive integer;
input the image features of the section images into the fully connected layer of the second detection model, so that the fully connected layer outputs patch-shadow information based on the image features;
pass the patch-shadow information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.

In one embodiment, the determination module includes:
a comparison sub-module, configured to compare the volume of the affected part with a target relation table, where the target relation table stores the correspondence between affected-part volume and disease grade;
a first determination sub-module, configured to determine the disease grade of the subject's lung according to the comparison result.

In one embodiment, the determination module includes:
a calculation sub-module, configured to calculate the volume ratio of the affected part in the lung;
an input sub-module, configured to input the volume of the affected part and its volume ratio in the lung into a disease-grade calculation model, to obtain the disease grade of the subject's lung calculated comprehensively by the model from the affected-part volume and its volume ratio in the lung.

In one embodiment, the apparatus further includes:
a third acquisition module, configured to acquire a second lung medical image of the subject;
a fourth acquisition module, configured to acquire the volume of the affected part in the second lung medical image;
a comparison module, configured to compare the volume of the affected part in the second lung medical image with the volume of the affected part in the first lung medical image to determine the volume change trend of the affected part;
a change-trend determination module, configured to determine development-trend information of the subject's lung disease according to the volume change trend of the affected part.

In one embodiment, the change-trend determination module includes:
a second determination sub-module, configured to determine a first diagnosis result for the subject when the volume of the affected part conforms to a first development trend;
a third determination sub-module, configured to determine a second diagnosis result for the subject when the volume of the affected part conforms to a second development trend.

In one embodiment, the apparatus further includes:
a fifth acquisition module, configured to acquire the generation times of the first lung medical image and the second lung medical image;
a calculation module, configured to calculate the disease development speed of the subject according to the generation times and the volume change trend of the affected part.

In one embodiment, the apparatus further includes:
a first rendering module, configured to render the first lung medical image in a single color to generate a third lung medical image, where the rendered color depth is positively correlated with the CT value;
a second rendering module, configured to render the first lung medical image in multiple colors to generate a fourth lung medical image, where different CT values are rendered in different types of color;
a first output module, configured to output the first lung medical image, the third lung medical image, and/or the fourth lung medical image.

In one embodiment, the apparatus further includes:
a third rendering module, configured to render multiple lung medical images in multiple colors, where parts with different CT values and/or CT-value ranges in the rendered lung medical images correspond to different colors;
a second output module, configured to output the multiple rendered lung medical images.
FIG. 5 is a flowchart of a method for displaying a diagnostic information interface in an embodiment of the present invention. As shown in FIG. 5, the method can be implemented as the following steps S51-S53:

In step S51, a first graphic is formed based on first data, where the first graphic is represented in a first color, and the first data is CT-value density data of a region of interest in a first target CT image;

In step S52, a second graphic is formed based on second data, where the second graphic is represented in a second color;

In step S53, the overlapping part of the first graphic and the second graphic is determined and represented in a third color.

In this embodiment, the first graphic is formed based on the first data; specifically, the first data can be obtained as follows: the first data is determined in response to acquiring the CT-value density data of the region of interest in the first target CT image. The second graphic is then formed based on the second data.

When the first graphic and the second graphic are histograms corresponding to CT-value density data, as shown in FIG. 3D or FIG. 6, after the two graphics are formed they can be placed in the same coordinate system to form a comparison diagram of the CT-value probability distributions of the first target CT image and the other data, so that the severity of the disease in the first target CT image can be represented more intuitively through the first and second graphics. The histograms in the embodiments of the present disclosure can be constructed from 3D medical images, for example from the mass points of the corresponding parts in a three-dimensional chest CT image, so the histograms of the present disclosure may be designated 3D-CT-value histograms.

The beneficial effect of the present application is that a first graphic is formed based on first data and a second graphic based on second data, where the first data is the CT-value density data of the region of interest in the first target CT image, so that the first data can be compared with the second data. When the second data is the normal CT-value density data of the corresponding part of the CT image, the user can conveniently judge the severity of the disease from the comparison of the first and second graphics; such a scheme therefore makes the representation of disease severity more intuitive.

In one embodiment, forming the first graphic based on the first data includes:

determining the first data in response to acquiring the CT-value density data of the region of interest in the first target CT image.

In one embodiment, the second data is reference data of the region of interest in a CT image.

In this embodiment, the reference data may be data defined by the physician, standard data within the industry, or average data of normal people. For example, if the first data is the lung CT-value density data of a patient with a lung disease (such as a novel coronavirus pneumonia patient), the second data may be user-defined data, industry-standard data, average data of normal people, or the patient's own lung CT-value density data from another period (e.g. before falling ill or after recovery).

Suppose the second data is the average data of normal people; then the lower the similarity of the first graphic and the second graphic, the more severe the disease of the subject corresponding to the first target CT image, and the higher the similarity, the less severe the subject's disease; when the similarity of the two graphics exceeds a certain value (e.g. 95%), the subject may be considered not ill or already recovered.

In one embodiment, the second data is CT-value density data of the region of interest in a second target CT image acquired at a different time from the first target CT image.

In this embodiment, for example, for the same subject, CT-value density data of different periods are acquired, so that the development trend of the subject's lung disease can be represented more intuitively.

In one embodiment, the region of interest is included in at least one of the following regions:

the human lung organ, the left lung, the right lung, the upper lobe of the right lung, the middle lobe of the right lung, the lower lobe of the right lung, the upper lobe of the left lung, and the lower lobe of the left lung.

In the fields of machine vision and image processing, a region to be processed, outlined in the processed image by a box, circle, ellipse, irregular polygon, etc., is called a region of interest. In this embodiment, the region of interest may be included in at least one of the regions listed above. For example, the human lung organ can be outlined by a shape fitting it exactly; in FIG. 1B, the lung organ outlined by a black irregular polygon is the region of interest, enabling subsequent algorithms to concentrate on this region of interest and reducing the amount of computation in subsequent processing steps.
FIG. 7 is a flowchart of a diagnostic information interaction method based on medical images in an embodiment of the present invention. As shown in FIG. 7, the method can be implemented as the following steps S71-S73:

In step S71, a first lung medical image of the subject is acquired;

In step S72, image parameters of the affected part in the first lung medical image are acquired;

In step S73, the disease grade of the subject's lung corresponding to the first lung medical image information is output according to the image parameters of the affected part.

It should be understood that the interaction method of the embodiments of the present disclosure may be based on the necessary diagnostic information processing method, including determining the disease grade of the subject's lung corresponding to the first lung medical image information.

It should be noted here that the first lung medical image involved in this embodiment may be the first target CT image involved in the foregoing embodiments.
图8为本发明一实施例中一种基于医疗图像的诊断信息评估方法的流程图,如图8所示,该方法可被实施为以下步骤S81-S84:
在步骤S81中,对医疗图像中的感兴趣区域进行分区,获得至少N个分区,其中N为大于等于2的自然数;
在步骤S82中,至少计算每个分区中的第一征象的体积和第二征象的体积占比;
在步骤S83中,获取第一征象和第二征象体积占比对应分值,并基于分值获取每个分区的分数;
在步骤S84中,根据每个分区的分数,对感兴趣区域进行评估。
本实施例中,对医疗图像中的感兴趣区域进行分区,获得至少N个分区,其中N为大于等于2的自然数;
在机器视觉、图像处理领域中,从被处理的图像以方框、圆、椭圆、不规则多边形等方式勾勒出需要处理的区域,称为感兴趣区域,本实施例中,医疗图像中的感兴趣区域可以是该医学图像中的某个人体器官,例如,当医学图像是胸部CT图像时,感兴趣区域可以是人体肺部器官,勾勒后的感兴趣区域如图1B所示。对医疗图像中的感兴趣区域进行分区可以包括如下两种方式:
方式一
获得感兴趣区域的至少N个分区,感兴趣区域为人体肺部,N个分区为右肺上叶、右肺中叶、右肺下叶、左肺上叶和左肺下叶。
方式二
获得感兴趣区域的至少N个分区,感兴趣区域为人体肺部,N个分区对人体肺部的左右肺,由上至下分为三份后的六个分区。
在获得N分区之后,至少计算每个分区中的第一征象的体积和第二征象的体积占比;
具体的,当要检测的疾病为肺炎时,肺炎在CT图中表现为斑片形式和/或磨玻璃形式,即斑片影和磨玻璃影可以同时存在于肺部CT图中,因此,第一征象可以是指人体肺部CT图的斑片区域,而第二征象可以是指人体肺部CT图的磨玻璃区域。可以理解的是,不同的疾病具有不同的征象,因此,针对不同疾病,要计算的征象不同,即应用本申请所公开的方案,除了可以计算第一征象的体积占比和第二征象的体积占比之外,当反映疾病的征象包括其他征象时,还可以计算其他征象的体积占比,例如,结节、空洞、树芽征、轨道征等等各类已被用于临床诊断实践中用于反映疾病的征象。
Scores corresponding to the volume ratios of the first sign and the second sign are acquired, and the score of each partition is obtained based on these scores.
The region of interest is evaluated according to the score of each partition. Specifically, a corresponding score threshold may be set, and the disease severity of the subject corresponding to the medical image is then determined based on that threshold.
The beneficial effect of this application is that the region of interest in a medical image can be partitioned and the score of each partition calculated, thereby quantifying the severity of the disease corresponding to the region of interest. The severity of the disease in the region of interest can then be evaluated based on the quantified scores, achieving the effect of evaluating disease severity based on the diseased region of a medical image.
In one embodiment, step S81 above may be implemented as the following step:
At least N partitions of the region of interest are obtained; the region of interest is the human lung, and the N partitions are the upper lobe of the right lung, the middle lobe of the right lung, the lower lobe of the right lung, the upper lobe of the left lung, and the lower lobe of the left lung.
Structurally, the human lung can be divided into five regions: the upper lobe of the right lung, the middle lobe of the right lung, the lower lobe of the right lung, the upper lobe of the left lung, and the lower lobe of the left lung. Therefore, in this embodiment, partitioning can be based on this anatomical division, i.e., the N partitions are these five lobes.
In addition, it should be understood that the N partitions may also be determined based on lung segments. FIG. 9 is a schematic diagram of the distribution of human lung segments in a medical image. As shown in FIG. 9, the upper lobe of the right lung includes the apical, posterior, and anterior segments; the middle lobe of the right lung includes the lateral and medial segments; the lower lobe of the right lung includes the medial basal, anterior basal, and lateral basal segments; the upper lobe of the left lung includes the apicoposterior, anterior, superior lingular, and inferior lingular segments; and the lower lobe of the left lung includes the anterior basal, lateral basal, and medial basal segments. When partitioning by lung segments, each lung segment may serve as one partition.
Of course, it should be understood that this way of partitioning is based on the lung segments that can be displayed in the lung medical image; some regions that are not displayed are not labeled in FIG. 9, for example the superior (dorsal) segment and other undisplayed lung segment regions.
In one embodiment, step S81 above may also be implemented as the following step:
At least N partitions of the region of interest are obtained; the region of interest is the human lung, and the N partitions are the six partitions obtained by dividing each of the left and right lungs into three parts from top to bottom.
In this embodiment, the left lung and the right lung are each divided into three parts, forming six partitions. Specifically, as shown in FIG. 10, the lung image is partitioned by two cutting lines into six partitions: upper right, middle right, lower right, upper left, middle left, and lower left.
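The two-cutting-line partition above can be sketched as a simple slice-to-zone assignment; treating the axial slice index as the top-to-bottom coordinate is an illustrative assumption:

```python
def six_zone(slice_index, n_slices, side):
    """Assign an axial slice to one of six zones.

    The left and right lungs are each cut into thirds from top to
    bottom by two horizontal cutting lines, as in the embodiment
    above. `side` is "left" or "right"; slice 0 is the topmost.
    """
    if not 0 <= slice_index < n_slices:
        raise ValueError("slice index out of range")
    third = ("upper", "middle", "lower")[min(3 * slice_index // n_slices, 2)]
    return f"{third} {side}"
```

With nine slices per lung, slices 0-2 fall in the upper zone, 3-5 in the middle zone, and 6-8 in the lower zone.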
In one embodiment, the first sign is a patch region and the second sign is a ground-glass region.
When the disease to be detected is pneumonia, pneumonia appears in a CT image in the form of patches and/or ground glass; that is, patch shadows and ground-glass shadows may coexist in a lung CT image. Therefore, in this embodiment, the first sign may refer to the patch region of the lung CT image, and the second sign may refer to its ground-glass region.
In one embodiment, as shown in FIG. 11, step S83 above may also be implemented as the following steps S111-S113:
In step S111, a first product is obtained by multiplying the volume-ratio score of the first sign by a first parameter;
In step S112, a second product is obtained by multiplying the volume-ratio score of the second sign by a second parameter;
In step S113, the sum of the first product and the second product is determined as the score of the partition corresponding to the first sign and the second sign.
In this embodiment, when the score of each partition is acquired, a first product is obtained by multiplying the volume-ratio score of the first sign by a first parameter, and a second product is obtained by multiplying the volume-ratio score of the second sign by a second parameter. The volume-ratio score of the first sign may be the volume ratio of the first sign multiplied by a specific coefficient; it can be understood that when the specific coefficient is 1, the volume-ratio score of the first sign is the volume ratio itself. Likewise, the volume-ratio score of the second sign may be the volume ratio of the second sign multiplied by that specific coefficient. In addition, the first parameter may be determined based on the relationship between the first sign and the probability of having the target disease, and the second parameter may be determined based on the relationship between the second sign and that probability.
For example, if the first parameter is 3 and the second parameter is 2, the score of a partition may be (volume-ratio score of the first sign) × 3 + (volume-ratio score of the second sign) × 2.
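Steps S111-S113 reduce to one weighted sum; a sketch using the example weights 3 and 2 from the text above (the default coefficient of 1 uses the raw ratios as scores):

```python
def partition_score(patch_ratio, ggo_ratio,
                    first_param=3, second_param=2, coefficient=1):
    """Weighted partition score from the two sign ratios.

    The coefficient turns a raw volume ratio into a volume-ratio
    score (coefficient 1 uses the ratio itself); the two parameters
    weight the signs by their relation to the disease probability.
    The defaults 3 and 2 are the example values given above.
    """
    return (patch_ratio * coefficient * first_param
            + ggo_ratio * coefficient * second_param)
```

For a partition with a patch ratio of 0.1 and a ground-glass ratio of 0.2, the score is 0.1 × 3 + 0.2 × 2 = 0.7.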
In one embodiment, as shown in FIG. 12, step S84 above may be implemented as the following steps S121-S125:
In step S121, a first threshold and a second threshold are set, where the second threshold is greater than the first threshold;
In step S122, the score is compared with the first and second thresholds respectively;
In step S123, when the score is less than the first threshold, the subject corresponding to the medical image is determined to have mild pneumonia;
In step S124, when the score is greater than or equal to the first threshold and less than the second threshold, the subject corresponding to the medical image is determined to have moderate pneumonia;
In step S125, when the score is greater than or equal to the second threshold, the subject corresponding to the medical image is determined to have severe pneumonia.
In this embodiment, a first threshold and a second threshold are set, with the second threshold greater than the first; the score is compared with each threshold. A score below the first threshold indicates that the subject corresponding to the medical image has mild pneumonia; a score at or above the first threshold but below the second indicates moderate pneumonia; and a score at or above the second threshold indicates severe pneumonia.
The beneficial effect of this embodiment is that by setting threshold intervals related to the score, the severity of the pneumonia a patient currently has can be evaluated.
It should be noted that, in this application, the evaluation of pneumonia severity may also be implemented in other ways, for example:
A first, a second, and a third score interval are set, where the maximum of the first interval is less than or equal to the minimum of the second interval, and the maximum of the second interval is less than or equal to the minimum of the third interval. The interval to which the score belongs is determined, and the severity of the pneumonia of the subject corresponding to the medical image is determined accordingly, pneumonia being classified by severity into mild, moderate, and severe. When the score falls in the first interval, the subject corresponding to the medical image is determined to have mild pneumonia; in the second interval, moderate pneumonia; in the third interval, severe pneumonia.
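The threshold variant (steps S121-S125) can be sketched as follows; the threshold values 10 and 20 are illustrative assumptions, not values from this application:

```python
def severity(score, first_threshold=10, second_threshold=20):
    """Map a partition-based score to a pneumonia severity grade.

    Follows steps S121-S125 above: below the first threshold is
    mild, between the thresholds is moderate, at or above the
    second threshold is severe.
    """
    if second_threshold <= first_threshold:
        raise ValueError("the second threshold must exceed the first")
    if score < first_threshold:
        return "mild"
    if score < second_threshold:
        return "moderate"
    return "severe"
```

The interval-based alternative described above is equivalent when the intervals are chosen as [0, first_threshold), [first_threshold, second_threshold), and [second_threshold, ∞).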
FIG. 13 is a flowchart of a medical imaging-based diagnostic information evaluation method in an embodiment of the present invention. As shown in FIG. 13, the method may be implemented as the following steps S131-S133:
In step S131, a first lung medical image of a subject is acquired;
In step S132, image parameters of the affected part in the first lung medical image are acquired;
In step S133, the disease level of the subject's lungs corresponding to the first lung medical image information is output according to the image parameters of the affected part.
It should be understood that the interaction method of the embodiments of the present disclosure may be based on the necessary diagnostic information processing method, including determining the disease level of the subject's lungs corresponding to the first lung medical image information.
It should be noted that the first lung medical image in this embodiment may be the medical image in the foregoing embodiments.
In combination with the foregoing medical imaging-based diagnostic information evaluation method, this application also discloses a medical imaging-based diagnostic information display method. FIG. 14 is a flowchart of such a method in an embodiment of the present invention; as shown in FIG. 14, the method may be implemented as the following steps S141-S142:
In step S141, the partitions of the medical image are displayed on a display interface;
In step S142, in response to the calculation of the image parameters of the first sign and the second sign in each partition, diagnostic information is output on the display interface;
the diagnostic information includes at least one of the following:
the volume ratios of the first sign and the second sign, a score obtained based on the volumes of the first sign and the second sign, and an evaluation result of the medical image obtained based on the score.
When the medical image is a lung medical image, as shown in FIG. 15, the partitions of the lung medical image are displayed on the display interface. FIG. 15 applies to the case mentioned in the foregoing embodiments in which the region of interest (i.e., the lungs) in the medical image is divided into five partitions. In response to the calculation of the image parameters of the first and second signs in each partition, at least one of the following items of diagnostic information is output on the display interface: the volume ratios of the first and second signs, a score obtained based on their volumes, and an evaluation result of the medical image obtained based on the score.
This embodiment is a medical imaging-based diagnostic information display method disclosed in combination with the foregoing evaluation method. It is therefore easy to understand that the medical image in this embodiment may be the medical image of the foregoing embodiments, that the partitions in this embodiment may be determined by the partitioning methods described in the embodiments of the foregoing evaluation method, that the first sign may be a patch region, and that the second sign may be a ground-glass region.
Furthermore, the volume ratios of the first and second signs, the score obtained based on their volumes, and the evaluation result of the medical image obtained based on the score can all be acquired by the schemes described in the embodiments of the foregoing evaluation method.
From the descriptions of the foregoing embodiments, the first lung medical image of the foregoing embodiments may be the first target CT image of the foregoing embodiments. In addition, from the passages "acquiring a second lung medical image of the subject, the second lung medical image and the first lung medical image of the foregoing embodiments being lung medical images of the same subject from different periods" and "the second data are CT-value density data of the region of interest in a second target CT image acquired at a different time from the first target CT image", it is also apparent that the second lung medical image may be the second target CT image.
Moreover, the first lung medical image, the second lung medical image, the first target CT image, and the second target CT image differ only in generation time, and the first lung medical image may also be the medical image; it is therefore easy to understand that the medical image may also be the first lung medical image, the second lung medical image, the first target CT image, or the second target CT image.
In addition, FIG. 3D and FIG. 6 also show that an embodiment such as "rendering multiple lung medical images with multiple colors, where portions with different CT values and/or CT-value ranges in the rendered lung medical images correspond to different colors; and outputting the rendered lung medical images" corresponds to an embodiment such as "forming a first graph based on first data, where the first graph is represented in a first color and the first data are CT-value density data of the region of interest in the first target CT image; forming a second graph based on second data, where the second graph is represented in a second color; and determining the overlapping portion of the first and second graphs and representing it in a third color". For example, the CT values may include CT-value density data, and "portions with different CT values and/or CT-value ranges in the rendered lung medical images correspond to different colors" corresponds to "the first graph is represented in a first color, the second graph is represented in a second color, and the overlapping portion is represented in a third color".
On that basis, it is apparent that the embodiments corresponding to the multiple methods involved in the above schemes (such as the medical imaging-based diagnostic information processing method, the diagnostic information interface display method, the medical imaging-based diagnostic information interaction method, the medical imaging-based diagnostic information evaluation method, and the medical imaging-based diagnostic information display method) may reference, draw on, and be combined with one another. On this basis, this application provides a scheme combining the above embodiments, specifically as follows:
FIG. 16 is a flowchart corresponding to an overall embodiment of the present invention; the method may be implemented as the following steps S161-S162:
In step S161, a first lung medical image and a second lung medical image of a subject are acquired, the first and second lung medical images being lung medical images acquired for the same subject at different times;
In step S162, for the first lung medical image and/or the second lung medical image, at least one of the following items of target information is determined:
the disease level of the subject's lungs, the CT-value probability distributions of the lung medical images at different times, the disease evaluation result of the region of interest in the lung medical images, disease progression trend information, and disease progression speed.
In one embodiment, the disease level of the subject's lungs is determined as follows:
acquiring image parameters of the affected part in the first lung medical image;
determining, according to the image parameters of the affected part, the disease level of the subject's lungs corresponding to the first lung medical image information.
In one embodiment, the CT-value probability distributions of the lung medical images at different times are determined as follows:
acquiring CT density data of the regions of interest in the first lung medical image and the second lung medical image;
generating, based on the CT density data, a first graph corresponding to the first lung medical image and a second graph corresponding to the second lung medical image;
placing the first graph and the second graph in the same coordinate system to form a comparison chart of the CT-value probability distributions of the lung medical images at different times, where the first graph is represented in a first color, the second graph in a second color, and the overlapping portion of the two graphs in a third color.
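One way to prepare such a three-color comparison chart (a sketch under the assumption that the two graphs are density histograms over shared CT-value bins, not the patent's specified implementation) is to split each bin into three layers: the bin-wise minimum is the overlap drawn in the third color, and each histogram's excess over that minimum is drawn in its own color:

```python
def comparison_layers(first, second):
    """Split two CT-value density histograms into three plot layers.

    Returns (first_only, second_only, overlap) per bin: the overlap
    (bin-wise minimum) would be drawn in the third color, and each
    histogram's excess over the overlap in its own color.
    """
    if len(first) != len(second):
        raise ValueError("histograms must share the same bins")
    overlap = [min(a, b) for a, b in zip(first, second)]
    first_only = [a - o for a, o in zip(first, overlap)]
    second_only = [b - o for b, o in zip(second, overlap)]
    return first_only, second_only, overlap
```

The three returned series can then be stacked or overlaid by any plotting library; the layer decomposition itself is what determines which color each portion of the chart receives.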
In one embodiment, the disease evaluation result of the region of interest is determined as follows:
partitioning the region of interest in the first lung medical image and/or the second lung medical image to obtain at least N partitions, where N is a natural number greater than or equal to 2;
calculating at least the volume ratios of the first sign and the second sign in each partition;
acquiring scores corresponding to the volume ratios of the first sign and the second sign, and obtaining the score of each partition based on those scores;
evaluating the region of interest according to the score of each partition;
where evaluating the region of interest according to the score of each partition includes:
setting a corresponding score threshold, and then determining, based on the score threshold, the disease severity of the subject corresponding to the medical image.
The specific implementations of the following steps of this overall embodiment have all been described in the foregoing embodiments, to which reference may be made; they are not repeated here:
the subject's disease progression trend information; different progression trends (a first trend or a second trend) corresponding to different diagnostic results; calculating the subject's disease progression speed based on the generation times of the first and second lung medical images and the volume change trend of the affected part; acquiring the volumes of the affected part in the first and second lung medical images through a neural network; and the structure of the neural network.
It is easy to understand that the disease level alone can only indicate how severe the disease is; it cannot indicate whether the disease is improving or worsening, nor, excluding external influences, when the disease will be cured or when it will progress to the next level. Multiple kinds of information must therefore be combined to predict fairly accurately whether the disease is improving or worsening, and when it will be cured or progress to the next level.
Therefore, with the above scheme, various types of target information can be acquired, such as the disease level, the CT-value probability distributions of the lung medical images at different times, the disease evaluation result of the region of interest in the lung medical images, and disease progression trend information, so that various aspects of the subject's lung disease can be determined from multiple angles. This benefits the diagnosis of various lung diseases (such as COVID-19 pneumonia), provides richer material and diagnostic bases, and helps physicians or diagnostic devices diagnose diseases more accurately.
In this application, when there are multiple items of target information, the disease condition can be judged comprehensively from the multiple items.
For example, suppose the disease is classified by severity from mild to severe into levels 1, 2, 3, and 4, and the target information is the disease level and the progression trend. If the current medical image indicates that the disease is at level 2 while the progression trend indicates that the volume of the affected part is shrinking, it can be predicted that the disease will progress to level 1 in the near future.
As another example, suppose the target information is the disease level, the progression trend, and the progression speed. If the current medical image indicates that the disease is at level 2 while the progression trend indicates that the volume of the affected part is shrinking, the progression speed can be combined to predict when the disease will return to level 1, yielding a more comprehensive judgment.
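The speed-based prediction in the example above reduces to simple arithmetic, sketched below as a hedged illustration (the idea that each level corresponds to a target affected-part volume is an assumption for the example, not from this application): the progression speed is the volume change per day between the two images, and the time to reach the next level's volume follows by division.

```python
def progression_speed(volume_first, volume_second, days_between):
    """Affected-part volume change per day between two images."""
    if days_between <= 0:
        raise ValueError("the two images must be from different times")
    return (volume_second - volume_first) / days_between


def days_to_volume(volume_now, volume_target, speed):
    """Days until the affected part reaches a target volume.

    Assumes the current trend continues; returns None when the
    trend does not move toward the target.
    """
    if speed == 0 or (volume_target - volume_now) * speed < 0:
        return None
    return (volume_target - volume_now) / speed
```

For example, if the affected part shrank from 200 ml to 150 ml over ten days (5 ml per day) and level 1 is assumed to correspond to 100 ml, the subject would be predicted to reach level 1 in roughly ten more days.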
It should be understood that the above are merely examples used to describe the content of the present invention more clearly, and the schemes for comprehensively judging the disease condition are not limited to the above two examples.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (26)

  1. A medical imaging-based diagnostic information processing method, comprising:
    acquiring a first lung medical image of a subject;
    acquiring image parameters of the affected part in the first lung medical image;
    determining, according to the image parameters of the affected part, the disease level of the subject's lungs corresponding to the first lung medical image information.
  2. The method according to claim 1, wherein acquiring image parameters of the affected part in the first lung medical image comprises:
    inputting at least one first lung medical image into a neural network to determine the volume of the affected part in the first lung medical image.
  3. The method according to claim 2, wherein the neural network comprises:
    a first detection model for detecting candidate patch shadows, a slicing model, a second detection model for detecting patch-shadow intervals, and a volume calculation model for calculating the volume of the affected part;
    inputting at least one first lung medical image into the neural network to determine the volume of the affected part in the first lung medical image comprises:
    passing the at least one first lung medical image through N consecutive convolutional feature extraction modules of the first detection model, so that the N consecutive convolutional feature extraction modules obtain image features of patch shadows in the first lung medical image, where N is a positive integer;
    inputting the image features of the affected part in the first lung medical image into a fully connected layer of the first detection model, so that the fully connected layer outputs candidate patch shadows based on the image features;
    passing the candidate patch shadows through the slicing model, so that the slicing model slices the candidate patch shadows spatially multiple times in different directions, obtaining multiple sectional images of the candidate patch shadows in multiple spatial directions;
    passing multiple consecutive sectional images through M consecutive convolutional feature extraction modules of the second detection model, so that the M consecutive convolutional feature extraction modules obtain image features of the sectional images, where M is a positive integer;
    inputting the image features of the sectional images into a fully connected layer of the second detection model, so that the fully connected layer outputs patch-shadow information based on the image features;
    passing the patch-shadow information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
  4. The method according to claim 1, wherein determining, according to the image parameters of the affected part, the disease level of the subject's lungs corresponding to the first lung medical image information comprises:
    comparing the volume of the affected part with a target relationship table, where the target relationship table stores correspondences between affected-part volumes and disease levels;
    determining the disease level of the subject's lungs according to the comparison result.
  5. The method according to claim 1, wherein determining, according to the image parameters of the affected part, the disease level of the subject's lungs corresponding to the first lung medical image information comprises:
    calculating the volume proportion of the affected part within the lung;
    inputting the volume of the affected part and the volume proportion of the affected part within the lung into a disease-level calculation model, to obtain the disease level of the subject's lungs calculated comprehensively by the disease-level calculation model based on the affected-part volume and the volume proportion of the affected part within the lung.
  6. The method according to any one of claims 2-5, further comprising:
    acquiring a second lung medical image of the subject;
    acquiring the volume of the affected part in the second lung medical image;
    comparing the volume of the affected part in the second lung medical image with the volume of the affected part in the first lung medical image to determine the volume change trend of the affected part;
    determining, according to the volume change trend of the affected part, progression trend information of the subject's lung disease.
  7. The method according to claim 6, wherein determining, according to the volume change trend of the affected part, the progression trend of the subject's lung disease comprises:
    when the volume of the affected part conforms to a first progression trend, determining a first diagnostic result for the subject;
    when the volume of the affected part conforms to a second progression trend, determining a second diagnostic result for the subject.
  8. The method according to claim 6, further comprising:
    acquiring the generation times of the first lung medical image and the second lung medical image;
    calculating the subject's disease progression speed according to the generation times and the volume change trend of the affected part.
  9. The method according to claim 1, further comprising:
    rendering the first lung medical image based on a single color to generate a third lung medical image, where the rendered color depth is positively correlated with the CT value; and/or
    rendering the first lung medical image based on multiple colors to generate a fourth lung medical image, where different CT values are rendered in different types of colors;
    outputting the first lung medical image, the third lung medical image, and/or the fourth lung medical image.
  10. The method according to claim 1, further comprising:
    rendering multiple lung medical images with multiple colors, where portions with different CT values and/or CT-value ranges in the rendered lung medical images correspond to different colors;
    outputting the multiple rendered lung medical images.
  11. A diagnostic information interface display method, comprising:
    forming a first graph based on first data, where the first graph is represented in a first color and the first data are CT-value density data of the region of interest in a first target CT image;
    forming a second graph based on second data, where the second graph is represented in a second color;
    determining the overlapping portion of the first graph and the second graph and representing the overlapping portion in a third color, where the first graph and the second graph are used to characterize the severity of the disease in the first target CT image.
  12. The method according to claim 11, wherein forming a first graph based on first data comprises:
    determining the first data in response to acquiring the CT-value density data of the region of interest in the first target CT image.
  13. The method according to claim 11, wherein the second data are reference data for a region of interest in a CT image.
  14. The method according to claim 11, wherein the second data are CT-value density data of the region of interest in a second target CT image acquired at a different time from the first target CT image.
  15. The method according to claim 13 or 14, wherein the region of interest includes at least one of the following regions:
    the human lung, the left lung, the right lung, the upper lobe of the right lung, the middle lobe of the right lung, the lower lobe of the right lung, the upper lobe of the left lung, and the lower lobe of the left lung.
  16. A medical imaging-based diagnostic information interaction method, comprising:
    acquiring a first lung medical image of a subject;
    acquiring image parameters of the affected part in the first lung medical image;
    outputting, according to the image parameters of the affected part, the disease level of the subject's lungs corresponding to the first lung medical image information.
  17. A medical imaging-based diagnostic information evaluation method, comprising:
    partitioning the region of interest in a medical image to obtain at least N partitions, where N is a natural number greater than or equal to 2;
    calculating at least the volume ratios of the first sign and the second sign in each partition;
    acquiring scores corresponding to the volume ratios of the first sign and the second sign, and obtaining the score of each partition based on those scores;
    evaluating the region of interest according to the score of each partition;
    where evaluating the region of interest according to the score of each partition comprises:
    setting a corresponding score threshold, and then determining, based on the score threshold, the disease severity of the subject corresponding to the medical image.
  18. The method according to claim 17, wherein partitioning the region of interest in the medical image comprises:
    obtaining at least N partitions of the region of interest, where the region of interest is the human lung and the N partitions are the upper lobe of the right lung, the middle lobe of the right lung, the lower lobe of the right lung, the upper lobe of the left lung, and the lower lobe of the left lung.
  19. The method according to claim 17, wherein partitioning the region of interest in the medical image comprises:
    obtaining at least N partitions of the region of interest, where the region of interest is the human lung and the N partitions are the six partitions obtained by dividing each of the left and right lungs into three parts from top to bottom.
  20. The method according to any one of claims 16 to 18, wherein the first sign is a patch region and the second sign is a ground-glass region.
  21. The method according to any one of claims 17 to 19, wherein acquiring scores corresponding to the volume ratios of the first sign and the second sign and obtaining the score of each partition based on those scores comprises:
    obtaining a first product by multiplying the volume-ratio score of the first sign by a first parameter;
    obtaining a second product by multiplying the volume-ratio score of the second sign by a second parameter;
    determining the sum of the first product and the second product as the score of the partition corresponding to the first sign and the second sign.
  22. The method according to claim 17, wherein evaluating the region of interest according to the score of each partition comprises:
    setting a first threshold and a second threshold, where the second threshold is greater than the first threshold;
    comparing the score with the first and second thresholds respectively;
    when the score is less than the first threshold, determining that the subject corresponding to the medical image has mild pneumonia;
    when the score is greater than or equal to the first threshold and less than the second threshold, determining that the subject corresponding to the medical image has moderate pneumonia;
    when the score is greater than or equal to the second threshold, determining that the subject corresponding to the medical image has severe pneumonia.
  23. A medical imaging-based diagnostic information evaluation method, comprising:
    acquiring a first lung medical image of a subject;
    acquiring image parameters of the affected part in the first lung medical image;
    outputting, according to the image parameters of the affected part, the disease level of the subject's lungs corresponding to the first lung medical image information;
    where outputting, according to the image parameters of the affected part, the disease level of the subject's lungs corresponding to the first lung medical image information comprises:
    comparing the volume of the affected part with a target relationship table, where the target relationship table stores correspondences between affected-part volumes and disease levels;
    determining and outputting the disease level of the subject's lungs according to the comparison result;
    or
    calculating the volume proportion of the affected part within the lung;
    inputting the volume of the affected part and the volume proportion of the affected part within the lung into a disease-level calculation model, to obtain the disease level of the subject's lungs calculated comprehensively by the disease-level calculation model based on the affected-part volume and the volume proportion of the affected part within the lung.
  24. A medical imaging-based diagnostic information display method, comprising:
    displaying the partitions of a medical image on a display interface;
    outputting diagnostic information on the display interface in response to the calculation of the image parameters of the first sign and the second sign in each partition;
    the diagnostic information including at least one of the following:
    the volume ratios of the first sign and the second sign, a score obtained based on the volumes of the first sign and the second sign, and an evaluation result of the medical image obtained based on the score.
  25. A medical imaging-based diagnostic information processing device, comprising:
    a first acquisition module, configured to acquire a first lung medical image of a subject;
    a second acquisition module, configured to acquire image parameters of the affected part in the first lung medical image;
    a determination module, configured to determine, according to the image parameters of the affected part, the disease level of the subject's lungs corresponding to the first lung medical image information.
  26. A non-transitory readable storage medium, wherein when instructions in the storage medium are executed by a processor in a device, the device is enabled to perform the method according to any one of claims 1-10;
    or, when instructions in the storage medium are executed by a processor in a device, the device is enabled to perform the method according to any one of claims 11-15;
    or, when instructions in the storage medium are executed by a processor in a device, the device is enabled to perform the method according to claim 16;
    or, when instructions in the storage medium are executed by a processor in a device, the device is enabled to perform the method according to any one of claims 17-22;
    or, when instructions in the storage medium are executed by a processor in a device, the device is enabled to perform the method according to claim 23;
    or, when instructions in the storage medium are executed by a processor in a device, the device is enabled to perform the method according to claim 24.
PCT/CN2021/075379 2020-02-05 2021-02-05 一种基于医学影像的诊断信息处理方法、装置及存储介质 WO2021155829A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/760,185 US20230070249A1 (en) 2020-02-05 2021-02-05 Medical imaging-based method and device for diagnostic information processing, and storage medium
EP21751295.3A EP4089688A4 (en) 2020-02-05 2021-02-05 MEDICAL IMAGING-BASED METHOD AND DEVICE FOR PROCESSING DIAGNOSTIC INFORMATION, AND STORAGE MEDIA

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202010081111.5 2020-02-05
CN202010081111.5A CN111261284A (zh) 2020-02-05 2020-02-05 一种基于医学影像的诊断信息处理方法、装置及存储介质
CN202010083597.6 2020-02-07
CN202010083597.6A CN111261285A (zh) 2020-02-07 2020-02-07 诊断信息界面的显示方法、交互方法及存储介质
CN202010096657.8 2020-02-17
CN202010096657.8A CN111160812B (zh) 2020-02-17 2020-02-17 诊断信息评估方法、显示方法及存储介质

Publications (1)

Publication Number Publication Date
WO2021155829A1 true WO2021155829A1 (zh) 2021-08-12

Family

ID=77199186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/075379 WO2021155829A1 (zh) 2020-02-05 2021-02-05 一种基于医学影像的诊断信息处理方法、装置及存储介质

Country Status (3)

Country Link
US (1) US20230070249A1 (zh)
EP (1) EP4089688A4 (zh)
WO (1) WO2021155829A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635616A (zh) * 2024-01-26 2024-03-01 江西科技学院 用于医学检查结果互认的影像诊断系统

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220327665A1 (en) * 2021-04-08 2022-10-13 Canon Medical Systems Corporation Neural network for improved performance of medical imaging systems

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106943192A (zh) * 2017-03-14 2017-07-14 上海交通大学医学院附属第九人民医院 肺癌细胞ki‑67表达指数的术前预测模型的建立方法
CN107818821A (zh) * 2016-09-09 2018-03-20 西门子保健有限责任公司 在医学成像中的基于机器学习的组织定征
CN108615237A (zh) * 2018-05-08 2018-10-02 上海商汤智能科技有限公司 一种肺部图像处理方法及图像处理设备
CN109600578A (zh) * 2017-09-29 2019-04-09 株式会社理光 图像处理装置、图像处理系统、图像处理方法
CN109712122A (zh) * 2018-12-14 2019-05-03 强联智创(北京)科技有限公司 一种基于头颅ct影像的评分方法及系统
CN110211672A (zh) * 2019-06-14 2019-09-06 杭州依图医疗技术有限公司 用于影像分析的信息显示方法、设备和存储介质
CN111160812A (zh) * 2020-02-17 2020-05-15 杭州依图医疗技术有限公司 诊断信息评估方法、显示方法及存储介质
CN111261285A (zh) * 2020-02-07 2020-06-09 杭州依图医疗技术有限公司 诊断信息界面的显示方法、交互方法及存储介质
CN111261284A (zh) * 2020-02-05 2020-06-09 杭州依图医疗技术有限公司 一种基于医学影像的诊断信息处理方法、装置及存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200085382A1 (en) * 2017-05-30 2020-03-19 Arterys Inc. Automated lesion detection, segmentation, and longitudinal identification


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4089688A4 *


Also Published As

Publication number Publication date
EP4089688A1 (en) 2022-11-16
EP4089688A4 (en) 2023-07-19
US20230070249A1 (en) 2023-03-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21751295; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021751295; Country of ref document: EP; Effective date: 20220810)
NENP Non-entry into the national phase (Ref country code: DE)