CN111261285A - Display method, interaction method and storage medium of diagnostic information interface


Info

Publication number
CN111261285A
Authority
CN
China
Prior art keywords
lung
image
affected part
medical image
volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010083597.6A
Other languages
Chinese (zh)
Inventor
石磊 (Shi Lei)
臧璇 (Zang Xuan)
史晶 (Shi Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yitu Medical Technology Co ltd
Original Assignee
Hangzhou Yitu Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yitu Medical Technology Co ltd filed Critical Hangzhou Yitu Medical Technology Co ltd
Priority to CN202010083597.6A priority Critical patent/CN111261285A/en
Publication of CN111261285A publication Critical patent/CN111261285A/en
Priority to EP21751295.3A priority patent/EP4089688A4/en
Priority to PCT/CN2021/075379 priority patent/WO2021155829A1/en
Priority to US17/760,185 priority patent/US20230070249A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung


Abstract

The invention discloses a display method for a diagnostic information interface, which makes the expression of the severity of a disease more intuitive. The method comprises the following steps: forming a first graph based on first data, wherein the first graph is represented in a first color and the first data is CT value density data of a region of interest in a first target CT image; forming a second graph based on second data, wherein the second graph is represented in a second color; and determining the overlapping portion of the first graph and the second graph, and representing the overlapping portion in a third color. With the scheme provided by the invention, a user can conveniently judge the severity of a disease by comparing the first graph with the second graph, so the scheme makes the expression of the severity of the disease more intuitive.

Description

Display method, interaction method and storage medium of diagnostic information interface
Technical Field
The invention relates to the field of computers, in particular to a display method of a diagnosis information interface, a diagnosis information interaction method based on medical images and a storage medium.
Background
At present, many diseases can be identified from CT (Computed Tomography) images: a CT machine generates a CT image of the part to be examined of the examined object, and a doctor then determines whether a disease exists by observing the CT image.
However, a CT image can only indicate the location of a lesion and the distribution of the affected part through differences in CT values; since no other data (for example, the normal CT values of the imaged part) are compared with the CT image, it is not intuitive in expressing the severity of a disease. Therefore, how to provide a display method for a diagnostic information interface that compares parameters in a CT image with other parameters, and thereby makes the expression of the severity of a disease more intuitive, is an urgent technical problem to be solved.
Disclosure of Invention
The invention provides a display method of a diagnostic information interface, which is used for making the expression of the severity of a disease more intuitive.
The invention provides a display method of a diagnostic information interface, which comprises the following steps:
forming a first graph based on first data, wherein the first graph is represented by a first color, and the first data is CT value density data of a region of interest in a first target CT image;
forming a second graph based on the second data; wherein the second graphic is represented in a second color;
and determining the overlapping part of the first graph and the second graph, and representing the overlapping part by a third color.
The beneficial effect of this application lies in: a first graph is formed based on the first data and a second graph is formed based on the second data; since the first data is the CT value density data of a region of interest in the first target CT image, the first data and the second data can be compared. When the second data is the normal CT value density data of the corresponding part, the user can conveniently judge the severity of the disease by comparing the first graph with the second graph, so the scheme makes the expression of the severity of the disease more intuitive.
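For illustration only (not part of the claimed invention), the overlap determination described above can be sketched numerically. In the sketch below, the two graphs are per-bin density curves; the overlapping portion of the two graphs in each bin is the minimum of the two densities, and the remainder of each curve keeps its own color. All function and color names are assumptions.

```python
import numpy as np

def overlap_colors(density1, density2, c1="red", c2="blue", c3="purple"):
    """Split two CT-value density curves into color layers.

    The overlapping part of the first and second graphs is
    min(density1, density2) per bin and is drawn in the third color.
    """
    d1 = np.asarray(density1, dtype=float)
    d2 = np.asarray(density2, dtype=float)
    overlap = np.minimum(d1, d2)   # shared area -> third color
    only1 = d1 - overlap           # excess of the first graph -> first color
    only2 = d2 - overlap           # excess of the second graph -> second color
    return {c1: only1, c2: only2, c3: overlap}

layers = overlap_colors([0.1, 0.4, 0.3], [0.2, 0.2, 0.5])
```

The three returned layers could then be stacked by any plotting library to reproduce the two-graphs-plus-overlap display.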
In one embodiment, said forming a first pattern based on the first data comprises:
the first data is determined in response to acquiring CT value density data for a region of interest in the first target CT image.
In one embodiment, the second data is reference data of a region of interest in a CT image.
In one embodiment, the second data is region of interest CT value density data in a second target CT image acquired at a different time than the first target CT image.
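The "CT value density data" of a region of interest can be understood as a normalized histogram of the CT values (in Hounsfield units) inside that region. A minimal sketch, assuming the CT image is a numpy array and the region of interest is given as a boolean mask (the function name, bin count, and HU range are illustrative assumptions):

```python
import numpy as np

def ct_value_density(ct_image, roi_mask, bins=50, value_range=(-1024, 400)):
    """Normalized histogram of the CT values inside a region of interest."""
    values = ct_image[roi_mask]  # HU values of voxels in the ROI
    density, edges = np.histogram(values, bins=bins,
                                  range=value_range, density=True)
    return density, edges

rng = np.random.default_rng(0)
image = rng.integers(-1024, 400, size=(8, 64, 64))  # toy CT volume
mask = np.zeros_like(image, dtype=bool)
mask[2:6, 10:40, 10:40] = True                      # toy region of interest
density, edges = ct_value_density(image, mask)
```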
In one embodiment, the region of interest comprises at least one of:
the human lung organ, the left lung, the right lung, the right upper lobe, the right middle lobe, the right lower lobe, the left upper lobe, and the left lower lobe.
The invention also provides a diagnostic information interaction method based on the medical image, which comprises the following steps:
acquiring a first lung medical image of a detected object;
acquiring image parameters of an affected part in the first lung medical image;
and outputting the disease grade of the lung of the detected object corresponding to the first lung medical image information according to the image parameters of the affected part.
The invention has the beneficial effects that: the image parameters of the affected part in the first lung medical image can be acquired, and then the disease grade of the lung of the detected object corresponding to the first lung medical image information is output according to the image parameters of the affected part, so that the disease can be classified based on the medical image.
In one embodiment, the acquiring image parameters of the affected part in the first pulmonary medical image includes:
acquiring a normal CT value distribution interval and an affected part CT value distribution interval in the lung;
inputting at least one first lung medical image into a neural network to determine the volume of the affected part in the first lung medical image.
In one embodiment, the neural network comprises:
the device comprises a first detection model for detecting candidate patch shadows, a cutting model, a second detection model for detecting patch shadow intervals and a volume calculation model for calculating the volume of an affected part;
inputting at least one first pulmonary medical image into a neural network to determine a volume of an affected site in the first pulmonary medical image, comprising:
passing the at least one first lung medical image through N consecutive convolution feature extraction modules in the first detection model, so that the N consecutive convolution feature extraction modules obtain image features of patch shadows in the first lung medical image, wherein N is a positive integer;
inputting the image features of the affected part in the first lung medical image into a fully-connected layer in the first detection model, so that the fully-connected layer outputs candidate patch shadows based on the image features;
passing the candidate patch shadow through the cutting model, so that the cutting model cuts the candidate patch shadow multiple times along different spatial directions to obtain multiple section images of the candidate patch shadow in multiple spatial directions;
passing the multiple consecutive section images through M consecutive convolution feature extraction modules in the second detection model, so that the M consecutive convolution feature extraction modules obtain image features of the section images, wherein M is a positive integer;
inputting the image features of the section images into a fully-connected layer in the second detection model, so that the fully-connected layer outputs patch shadow information based on the image features;
and passing the patch shadow information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
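Purely as a structural sketch (not the patented network), the data flow through the four models can be chained as below. The two detection models are replaced by threshold stubs where a real implementation would use N and M convolution feature extraction modules followed by fully-connected layers; all names and thresholds are assumptions.

```python
import numpy as np

def cutting_model(patch, n_cuts=3):
    """Cut a 3D candidate patch along the three spatial axes to obtain
    section (cross-sectional) images in multiple spatial directions."""
    sections = []
    for axis, size in enumerate(patch.shape):
        for i in np.linspace(0, size - 1, n_cuts, dtype=int):
            sections.append(np.take(patch, i, axis=axis))
    return sections

def volume_model(lesion_mask, voxel_volume_mm3=1.0):
    """Affected-part volume = number of lesion voxels x voxel volume."""
    return float(lesion_mask.sum() * voxel_volume_mm3)

def first_detection_model(image):      # stub for N conv blocks + FC layer
    return image > 200                 # candidate patch-shadow mask

def second_detection_model(sections):  # stub for M conv blocks + FC layer
    return len(sections) > 0           # "patch shadow confirmed"

image = np.zeros((16, 16, 16))
image[4:8, 4:8, 4:8] = 300             # toy lesion
candidate = first_detection_model(image)
sections = cutting_model(image * candidate)
if second_detection_model(sections):
    volume = volume_model(candidate, voxel_volume_mm3=0.5)
```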
The beneficial effect of this embodiment lies in: the neural network formed by connecting a plurality of models can realize both patch shadow detection and volume calculation, thereby simplifying the method for determining the volume of the affected part.
In one embodiment, outputting a disease level of a lung of the subject corresponding to the first pulmonary medical image information according to the image parameters of the affected part comprises:
comparing the volume of the affected part with a target relation table, wherein the target relation table stores the corresponding relation between the volume of the affected part and the disease grade;
and determining and outputting the disease grade of the lung of the detected object according to the comparison result.
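The table comparison above can be sketched as a threshold lookup. The volume thresholds and grade labels below are hypothetical illustrations, not values from the patent:

```python
import bisect

# Hypothetical target relation table: upper volume bound (ml) -> disease grade.
TARGET_RELATION_TABLE = [
    (10.0, "mild"),
    (50.0, "moderate"),
    (float("inf"), "severe"),
]

def disease_grade(affected_volume_ml):
    """Compare the affected-part volume against the relation table."""
    bounds = [upper for upper, _ in TARGET_RELATION_TABLE]
    idx = bisect.bisect_right(bounds, affected_volume_ml)
    return TARGET_RELATION_TABLE[idx][1]
```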
In one embodiment, outputting a disease level of a lung of the subject corresponding to the first pulmonary medical image information according to the image parameters of the affected part comprises:
calculating the volume ratio of the affected part in the lung;
inputting the volume of the affected part and the volume ratio of the affected part in the lung into a disease grade calculation model to obtain a disease grade of the lung of the detected object, which is obtained by the disease grade calculation model based on the comprehensive calculation of the volume of the affected part and the volume ratio of the affected part in the lung.
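The comprehensive calculation from the absolute volume and the volume ratio could take many forms; one minimal sketch is a weighted score with grade cut-offs. The weights and thresholds below are illustrative assumptions, not the patent's disease grade calculation model:

```python
def composite_disease_grade(affected_volume_ml, lung_volume_ml):
    """Grade from the affected-part volume and its ratio in the lung.

    Weights and thresholds are illustrative assumptions only.
    """
    ratio = affected_volume_ml / lung_volume_ml        # proportion in the lung
    score = 0.6 * (affected_volume_ml / 100.0) + 0.4 * ratio
    if score < 0.1:
        return "grade I"
    if score < 0.3:
        return "grade II"
    return "grade III"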
In one embodiment, the method further comprises:
acquiring a second lung medical image of the detected object;
acquiring the volume of the affected part in the second lung medical image;
comparing the volume of the affected part in the second pulmonary medical image with the volume of the affected part in the first pulmonary medical image to determine the volume change trend of the affected part;
and determining the development trend information of the lung diseases of the detected object according to the volume change trend of the affected part.
The beneficial effect of this embodiment lies in: the volume change trend of the affected part can be judged based on different lung medical images of the same examined object, so that the development trend information of the lung diseases of the examined object is automatically determined according to the volume change trend of the affected part.
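As an illustration of the volume comparison above, the trend of the affected part between two examinations can be sketched as a signed difference with a small tolerance band. The tolerance value and trend labels are assumptions:

```python
def volume_trend(volume_first_ml, volume_second_ml, tolerance_ml=0.5):
    """Trend of the affected-part volume between two lung medical images."""
    delta = volume_second_ml - volume_first_ml
    if delta > tolerance_ml:
        return "progressing"   # first development trend -> first diagnosis
    if delta < -tolerance_ml:
        return "resolving"     # second development trend -> second diagnosis
    return "stable"
```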
In one embodiment, determining the trend of the lung disease of the subject according to the trend of the volume change of the affected part comprises the following steps:
when the volume of the affected part accords with a first development trend, determining a first diagnosis result of the detected object;
and when the volume of the affected part accords with a second development trend, determining a second diagnosis result of the detected object.
In one embodiment, the method further comprises:
acquiring the generation time of the first lung medical image and the second lung medical image;
and calculating the disease development speed of the detected object according to the generation time and the volume change trend of the affected part.
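Given the generation times of the two images, the development speed can be sketched as the volume change per unit time. The unit (ml per day) and function name are assumptions:

```python
from datetime import datetime

def disease_development_speed(vol1_ml, time1, vol2_ml, time2):
    """Rate of change of the affected-part volume (ml/day),
    computed from the two images' generation times."""
    days = (time2 - time1).total_seconds() / 86400.0
    return (vol2_ml - vol1_ml) / days

speed = disease_development_speed(
    20.0, datetime(2020, 2, 1), 35.0, datetime(2020, 2, 6))
```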
In one embodiment, the method further comprises:
rendering the first pulmonary medical image based on a single color to generate a third pulmonary medical image, wherein the rendered color depth is positively correlated with the CT value; and/or
Rendering the first pulmonary medical image based on a plurality of colors to generate a fourth pulmonary medical image, wherein different CT values are rendered by different types of colors;
outputting the first, third and/or fourth pulmonary medical images.
In one embodiment, the method further comprises:
rendering the plurality of lung medical images through a plurality of colors, wherein parts of different CT values and/or CT value ranges in the rendered lung medical images correspond to different colors;
and outputting the rendered plurality of lung medical images.
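The two rendering schemes above can be sketched as CT-value-to-color mappings: a single-color rendering whose color depth is positively correlated with the CT value, and a multi-color rendering where different CT value ranges map to different colors. The HU ranges and color names are illustrative assumptions:

```python
import numpy as np

# Hypothetical CT value ranges (HU) and the colors used to render them.
COLOR_RANGES = [(-1024, -500, "blue"), (-500, 0, "green"), (0, 400, "red")]

def render_single_color(ct_slice):
    """Single-color depth in [0, 1], positively correlated with the CT value."""
    lo, hi = -1024.0, 400.0
    return np.clip((np.asarray(ct_slice, float) - lo) / (hi - lo), 0.0, 1.0)

def render_multi_color(ct_slice):
    """Different CT value ranges rendered with different types of colors."""
    arr = np.asarray(ct_slice, float)
    out = np.full(arr.shape, "none", dtype=object)
    for lo, hi, color in COLOR_RANGES:
        out[(arr >= lo) & (arr < hi)] = color
    return out
```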
The invention also provides a diagnostic information interaction device based on the medical image, which comprises:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a first lung medical image of a detected object;
the second acquisition module is used for acquiring image parameters of an affected part in the first lung medical image;
and the determining module is used for outputting the disease grade of the lung of the detected object corresponding to the first lung medical image information according to the image parameters of the affected part.
In one embodiment, the second obtaining module includes:
and the input submodule is used for inputting at least one first lung medical image into the neural network so as to determine the volume of the affected part in the first lung medical image.
In one embodiment, the neural network comprises:
a first detection model for detecting candidate patch shadows, a cutting model, a second detection model for detecting patch shadow intervals, and a volume calculation model for calculating the volume of the affected part;
an input submodule for:
passing the at least one first lung medical image through N consecutive convolution feature extraction modules in the first detection model, so that the N consecutive convolution feature extraction modules obtain image features of patch shadows in the first lung medical image, wherein N is a positive integer;
inputting the image features of the affected part in the first lung medical image into a fully-connected layer in the first detection model, so that the fully-connected layer outputs candidate patch shadows based on the image features;
passing the candidate patch shadow through the cutting model, so that the cutting model cuts the candidate patch shadow multiple times along different spatial directions to obtain multiple section images of the candidate patch shadow in multiple spatial directions;
passing the multiple consecutive section images through M consecutive convolution feature extraction modules in the second detection model, so that the M consecutive convolution feature extraction modules obtain image features of the section images, wherein M is a positive integer;
inputting the image features of the section images into a fully-connected layer in the second detection model, so that the fully-connected layer outputs patch shadow information based on the image features;
and passing the patch shadow information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
In one embodiment, the determining module includes:
the comparison submodule is used for comparing the volume of the affected part with a target relation table, wherein the target relation table stores the corresponding relation between the volume of the affected part and the disease grade;
and the first determining submodule is used for determining and outputting the disease grade of the lung of the detected object according to the comparison result.
In one embodiment, the determining module includes:
the calculation submodule is used for calculating the volume proportion of the affected part in the lung;
and the input submodule is used for inputting the volume of the affected part and the volume ratio of the affected part in the lung into a disease grade calculation model so as to obtain a disease grade of the lung of the detected object, which is obtained by the disease grade calculation model based on comprehensive calculation of the volume of the affected part and the volume ratio of the affected part in the lung.
In one embodiment, the apparatus further comprises:
the third acquisition module is used for acquiring a second lung medical image of the detected object;
a fourth obtaining module, configured to obtain a volume of an affected part in the second pulmonary medical image;
the comparison module is used for comparing the volume of the affected part in the second pulmonary medical image with the volume of the affected part in the first pulmonary medical image so as to determine the volume change trend of the affected part;
and the change trend determining module is used for determining the development trend information of the lung diseases of the detected object according to the volume change trend of the affected part.
In one embodiment, the trend of change determination module includes:
the second determination submodule is used for determining a first diagnosis result of the detected object when the volume of the affected part accords with the first development trend;
and the third determining submodule is used for determining a second diagnosis result of the detected object when the volume of the affected part accords with a second development trend.
In one embodiment, the apparatus further comprises:
a fifth acquiring module, configured to acquire generation times of the first pulmonary medical image and the second pulmonary medical image;
and the calculation module is used for calculating the disease development speed of the detected object according to the generation time and the volume change trend of the affected part.
In one embodiment, the apparatus further comprises:
a first rendering module, configured to render the first pulmonary medical image based on a single color to generate a third pulmonary medical image, where a rendered color depth is positively correlated with a CT value;
a second rendering module, configured to render the first pulmonary medical image based on multiple colors to generate a fourth pulmonary medical image, where different CT values are rendered by different types of colors;
a first output module for outputting the first, third and/or fourth pulmonary medical images.
In one embodiment, the apparatus further comprises:
the third rendering module is used for rendering the plurality of lung medical images through a plurality of colors, and parts of different CT values and/or CT value ranges in the rendered lung medical images correspond to different colors;
and the second output module is used for outputting the rendered plurality of lung medical images.
The present invention also provides a non-transitory readable storage medium, in which instructions, when executed by a processor in a device, enable the device to perform a medical image-based diagnostic information interaction method, the method comprising:
acquiring a first lung medical image of a detected object;
acquiring image parameters of an affected part in the first lung medical image;
and outputting the disease grade of the lung of the detected object corresponding to the first lung medical image information according to the image parameters of the affected part.
The instructions in the storage medium may be further executable to:
the acquiring of the image parameters of the affected part in the first pulmonary medical image includes:
inputting at least one first lung medical image into a neural network to determine the volume of the affected part in the first lung medical image.
The instructions in the storage medium may be further executable to:
the neuron network includes:
a first detection model for detecting candidate patch shadows, a cutting model, a second detection model for detecting patch shadow intervals, and a volume calculation model for calculating the volume of the affected part;
inputting at least one first pulmonary medical image into a neural network to determine a volume of an affected site in the first pulmonary medical image, comprising:
passing the at least one first lung medical image through N consecutive convolution feature extraction modules in the first detection model, so that the N consecutive convolution feature extraction modules obtain image features of patch shadows in the first lung medical image, wherein N is a positive integer;
inputting the image features of the affected part in the first lung medical image into a fully-connected layer in the first detection model, so that the fully-connected layer outputs candidate patch shadows based on the image features;
passing the candidate patch shadow through the cutting model, so that the cutting model cuts the candidate patch shadow multiple times along different spatial directions to obtain multiple section images of the candidate patch shadow in multiple spatial directions;
passing the multiple consecutive section images through M consecutive convolution feature extraction modules in the second detection model, so that the M consecutive convolution feature extraction modules obtain image features of the section images, wherein M is a positive integer;
inputting the image features of the section images into a fully-connected layer in the second detection model, so that the fully-connected layer outputs patch shadow information based on the image features;
and passing the patch shadow information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
The instructions in the storage medium may be further executable to:
outputting the disease grade of the lung of the detected object corresponding to the first lung medical image information according to the image parameters of the affected part, wherein the step comprises the following steps:
comparing the volume of the affected part with a target relation table, wherein the target relation table stores the corresponding relation between the volume of the affected part and the disease grade;
and determining and outputting the disease grade of the lung of the detected object according to the comparison result.
The instructions in the storage medium may be further executable to:
outputting the disease grade of the lung of the detected object corresponding to the first lung medical image information according to the image parameters of the affected part, wherein the step comprises the following steps:
calculating the volume ratio of the affected part in the lung;
inputting the volume of the affected part and the volume ratio of the affected part in the lung into a disease grade calculation model to obtain a disease grade of the lung of the detected object, which is obtained by the disease grade calculation model based on the comprehensive calculation of the volume of the affected part and the volume ratio of the affected part in the lung.
The instructions in the storage medium may be further executable to:
acquiring a second lung medical image of the detected object;
acquiring the volume of the affected part in the second lung medical image;
comparing the volume of the affected part in the second pulmonary medical image with the volume of the affected part in the first pulmonary medical image to determine the volume change trend of the affected part;
and determining the development trend information of the lung diseases of the detected object according to the volume change trend of the affected part.
The instructions in the storage medium may be further executable to:
determining the development trend of the lung diseases of the detected object according to the volume change trend of the affected part, comprising the following steps:
when the volume of the affected part accords with a first development trend, determining a first diagnosis result of the detected object;
and when the volume of the affected part accords with a second development trend, determining a second diagnosis result of the detected object.
The instructions in the storage medium may be further executable to:
acquiring the generation time of the first lung medical image and the second lung medical image;
and calculating the disease development speed of the detected object according to the generation time and the volume change trend of the affected part.
The instructions in the storage medium may be further executable to:
rendering the first pulmonary medical image based on a single color to generate a third pulmonary medical image, wherein the rendered color depth is positively correlated with the CT value; and/or
Rendering the first pulmonary medical image based on a plurality of colors to generate a fourth pulmonary medical image, wherein different CT values are rendered by different types of colors;
outputting the first, third and/or fourth pulmonary medical images.
The instructions in the storage medium may be further executable to:
rendering the plurality of lung medical images through a plurality of colors, wherein parts of different CT values and/or CT value ranges in the rendered lung medical images correspond to different colors;
and outputting the rendered plurality of lung medical images.
The present invention also provides a non-transitory readable storage medium having instructions that, when executed by a processor within a device, enable the device to perform a method of displaying a diagnostic information interface, the method comprising:
forming a first graph based on first data, wherein the first graph is represented by a first color, and the first data is CT value density data of a region of interest in a first target CT image;
forming a second graph based on the second data; wherein the second graphic is represented in a second color;
and determining the overlapping part of the first graph and the second graph, and representing the overlapping part by a third color.
The instructions in the storage medium may be further executable to:
the forming a first pattern based on the first data includes:
the first data is determined in response to acquiring CT value density data for a region of interest in the first target CT image.
The second data is reference data of a region of interest in the CT image.
The second data is region-of-interest CT value density data in a second target CT image acquired at a different time than the first target CT image.
The region of interest comprises at least one of:
the human lung organ, the left lung, the right lung, the right upper lobe, the right middle lobe, the right lower lobe, the left upper lobe, and the left lower lobe.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1A is a flowchart illustrating a method for displaying a diagnostic information interface according to an embodiment of the present invention;
FIG. 1B is a schematic diagram of a human lung organ as a region of interest, or a schematic diagram of a lung region in a medical image labeled by a dividing line;
FIG. 2 is a flowchart of a method for medical image-based diagnostic information interaction according to another embodiment of the present invention;
FIG. 3A is a flowchart of a method for medical image-based diagnostic information interaction according to another embodiment of the present invention;
FIG. 3B is a schematic interface diagram of a system for implementing aspects of the present invention.
FIG. 4A is a flowchart of a method for medical image-based diagnostic information interaction according to another embodiment of the present invention;
FIG. 4B is a schematic diagram showing the evaluation of the development trend of novel coronavirus pneumonia in different disease courses;
FIG. 4C is a comparison graph of the first pulmonary medical image and the pulmonary medical images rendered in different manners;
FIG. 4D is a graph showing the distribution of CT values in normal lung versus specific disease lung;
FIG. 4E is a schematic illustration of a comparison comprising a first graphic and a second graphic;
FIG. 5 is a block diagram of a medical image-based diagnostic information interaction device according to an embodiment of the invention.
Detailed Description
Various aspects and features of the present application are described herein with reference to the drawings.
It will be understood that various modifications may be made to the embodiments of the present application. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the application.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present application has been described with reference to some specific examples, a person skilled in the art will certainly be able to achieve many other equivalent forms of the application, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present application will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely exemplary of the application, which can be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the application in unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the application.
Fig. 1A is a flowchart illustrating a method for displaying a diagnostic information interface according to an embodiment of the present invention, and as shown in fig. 1A, the method may be implemented as the following steps S11-S13:
in step S11, forming a first graph based on first data, wherein the first graph is represented in a first color, and the first data is CT value density data of the region of interest in the first target CT image;
in step S12, a second graph is formed based on the second data, wherein the second graph is represented in a second color;
in step S13, an overlapping portion of the first graphic and the second graphic is determined, and the overlapping portion is represented by a third color.
In this embodiment, the first graph is formed based on the first data. Specifically, the first data may be obtained as follows: the first data is determined in response to acquiring CT value density data of a region of interest in a first target CT image. A second graph is then formed based on the second data.
When the first graph and the second graph are histograms corresponding to CT value density data, as shown in fig. 4D or fig. 4E, the two graphs may be placed in the same coordinate system after they are formed, so as to form a comparison graph of the CT value probability distribution of the first target CT image against other data; the severity of a disease in the first target CT image can thus be represented more intuitively by the first graph and the second graph. The histogram in the embodiments of the present disclosure may be constructed based on a 3D medical image, for example, based on each voxel of the corresponding part in a three-dimensional chest CT image, and the histogram of the present disclosure may therefore be defined as a 3D-CT value histogram.
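As a rough sketch of how such a 3D-CT value histogram could be constructed, assuming the CT volume is available as a NumPy array of Hounsfield-unit values together with a boolean region-of-interest mask (the function name, bin count, and HU range below are illustrative assumptions, not values from the specification):

```python
import numpy as np

def ct_value_histogram(volume, mask, bins=80, hu_range=(-1000, 100)):
    """Build a normalized CT-value (HU) density histogram over a region of interest.

    volume : 3-D array of CT values (HU), one entry per voxel
    mask   : boolean array of the same shape selecting the region of interest
    Returns (density, bin_edges); density sums to 1 over the selected voxels.
    """
    voxels = volume[mask]
    counts, edges = np.histogram(voxels, bins=bins, range=hu_range)
    density = counts / max(counts.sum(), 1)  # guard against an empty mask
    return density, edges

# Example: a toy 3-D "lung" volume with mostly air-like values
volume = np.full((4, 4, 4), -800.0)
volume[0, 0, 0] = -100.0          # one denser voxel (e.g. a consolidation)
mask = np.ones_like(volume, dtype=bool)
density, edges = ct_value_histogram(volume, mask)
```

Two such density arrays, drawn in two colors over the same bin edges, give the comparison graph described above.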
The beneficial effects of the present application lie in: a first graph is formed based on the first data, and a second graph is formed based on the second data, the first data being CT value density data of the region of interest in the first target CT image, so that the first data and the second data can be compared. When the second data is normal CT value density data of the corresponding part of a CT image, a user can conveniently judge the severity of the disease based on the comparison of the first graph and the second graph; the scheme therefore makes the expression of the severity of the disease more intuitive.
In one embodiment, forming the first graphic based on the first data includes:
the first data is determined in response to acquiring CT value density data for a region of interest in a first target CT image.
In one embodiment, the second data is reference data for a region of interest in the CT image.
In this embodiment, the second data is reference data of a region of interest in a CT image. The reference data may be data defined by a doctor, standard data in the industry, or average data of normal persons. For example, if the first data is lung CT value density data of a certain lung disease patient (such as a novel coronavirus pneumonia patient), the second data may be user-defined data, standard data in the industry, average data of normal persons, or lung CT value density data of the same patient in another period (such as before the disease was contracted or after the disease was fully cured).
Assuming that the second data is average data of normal persons, the lower the similarity between the first graph and the second graph, the higher the severity of the disease of the subject corresponding to the first target CT image; the higher the similarity, the lower the severity. When the similarity between the first graph and the second graph is greater than a certain value (e.g., 95%), the subject may be considered not diseased or healed.
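The similarity comparison above could, for instance, be sketched as a histogram-overlap measure (the metric, the function names, and the threshold handling are illustrative assumptions; the specification does not fix a particular similarity measure):

```python
import numpy as np

def histogram_similarity(p, q):
    """Similarity of two normalized histograms as their overlap (intersection), in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.minimum(p, q).sum())

def is_recovered(subject_hist, reference_hist, threshold=0.95):
    """Treat the subject as not diseased / healed when similarity exceeds the threshold."""
    return histogram_similarity(subject_hist, reference_hist) > threshold

p = np.array([0.5, 0.3, 0.2])    # subject's CT value density histogram (toy)
q = np.array([0.49, 0.3, 0.21])  # normal-population reference histogram (toy)
```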
In one embodiment, the second data is region of interest CT value density data in a second target CT image acquired at a different time than the first target CT image.
In this embodiment, the second data is CT value density data of a region of interest in a second target CT image acquired at a different time from the first target CT image, for example, CT value density data of the same subject in different periods are acquired, so that a development trend of a lung disease of the subject can be more intuitively represented.
In one embodiment, the region of interest is included in at least one of:
human pulmonary organs, left lung, right lung, upper right lobe of the right lung, middle right lobe of the right lung, lower right lobe of the right lung, upper left lobe of the left lung, and lower left lobe of the left lung.
In the fields of machine vision and image processing, a region to be processed, called a region of interest, is delineated from the processed image by a box, a circle, an ellipse, an irregular polygon, or the like. In this embodiment, the region of interest may be included in at least one of the following regions:
human pulmonary organs, left lung, right lung, upper right lobe of the right lung, middle right lobe of the right lung, lower right lobe of the right lung, upper left lobe of the left lung, and lower left lobe of the left lung.
For example, the human lung organ can be outlined by a shape that completely fits it; in fig. 1B, the human lung organ outlined by the black irregular polygon is the region of interest. Subsequent algorithms can thereby focus on the region of interest, reducing the amount of calculation in the subsequent processing steps.
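A minimal sketch of restricting later computation to the delineated region, assuming a 2-D image and mask as NumPy arrays (the bounding-box crop is an illustrative choice, not the patent's method):

```python
import numpy as np

def crop_to_roi(image, mask):
    """Crop an image to the bounding box of a region-of-interest mask,
    so that later processing only touches the delineated region."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return image[r0:r1 + 1, c0:c1 + 1], mask[r0:r1 + 1, c0:c1 + 1]

image = np.arange(36).reshape(6, 6)
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:5] = True   # irregular region marked as "lung"
cropped, cropped_mask = crop_to_roi(image, mask)
```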
Fig. 2 is a flowchart illustrating a method for medical image-based diagnostic information interaction according to an embodiment of the present invention, and as shown in fig. 2, the method may be implemented as the following steps S21-S23:
in step S21, acquiring a first lung medical image of the subject;
in step S22, acquiring image parameters of an affected part in the first lung medical image;
in step S23, a disease level of the lung of the subject corresponding to the first lung medical image information is output according to the image parameters of the affected part. It should be understood that the interaction method of the embodiments of the present disclosure may be based on a necessary diagnostic information processing method, including determining the disease level of the lung of the subject corresponding to the first pulmonary medical image information.
In this embodiment, a first lung medical image of the subject is obtained. The first lung medical image may be a chest CT image of the subject in which the lung region has been marked, which may be implemented by manual marking. Of course, before step S21, a step of segmenting the lung region may further be included: the chest medical image is input into a pre-trained neural network for segmenting the lung region, so that the lung region in the chest medical image is identified and labeled by the neural network. Specifically, after the lung is identified by the neural network, it is labeled with a segmentation line; as shown in fig. 1B, the lung is labeled with a black segmentation line, although it is understood that the segmentation line may be of another color. Through this segmentation step, the labeling of the lung region in the chest medical image is implemented to obtain the first lung medical image, and the user can also verify the accuracy of the segmentation result.
The CT value of the affected part area in the medical image is different from the CT value of the normal lung area. In the medical field, "affected" refers to a functional or organic change of an organ or tissue caused by a disease, and the affected part refers to the part where such a change occurs. In the clinic, chest CT images can display the affected part and thereby characterize the corresponding lesion site, such as lungs infected with a coronavirus, e.g., the novel coronavirus 2019-nCoV. As will be appreciated from the detailed description below, the present application may be specifically applied to lesion information processing, lesion image display, and output of corresponding diagnostic information for all lobes contained within the lung.
The image parameters of the affected part in the first pulmonary medical image are obtained, specifically, at least one first pulmonary medical image may be input into the neural network to determine the image parameters of the affected part in the first pulmonary medical image, and in general, the image parameters include the volume of the affected part.
Determining a disease grade of the lung of the object to be examined corresponding to the first lung medical image information according to the image parameters of the affected part, specifically, determining the disease grade of the lung of the object to be examined corresponding to the first lung medical image information by the following method:
in a first mode
A relation table is created in advance, and the relation table comprises the corresponding relation between the affected part volume and the disease grade. The volume of the affected part can be compared with a target relation table, wherein the target relation table stores the corresponding relation between the volume of the affected part and the disease grade; and determining and outputting the disease grade of the lung of the detected object according to the comparison result.
Mode two
Calculating the volume ratio of the affected part in the lung; and inputting the volume of the affected part and the volume ratio of the affected part in the lung into a disease grade calculation model to obtain the disease grade of the lung of the detected object, which is comprehensively calculated by the disease grade calculation model based on the volume of the affected part and the volume ratio of the affected part in the lung.
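The two modes above could be sketched roughly as follows; the relation-table contents, thresholds, and model weights are purely illustrative placeholders, since the specification does not disclose concrete values:

```python
def grade_from_table(affected_volume_ml, table):
    """Mode one: look up the disease grade in a (threshold, grade) relation table.
    table is sorted ascending by threshold; the grade of the largest threshold
    not exceeding the affected-part volume is returned."""
    grade = table[0][1]
    for threshold, g in table:
        if affected_volume_ml >= threshold:
            grade = g
    return grade

def grade_from_model(affected_volume_ml, lung_volume_ml):
    """Mode two: a hypothetical grade calculation model combining the absolute
    volume and the volume ratio of the affected part in the lung
    (the weights and cutoffs are illustrative, not from the specification)."""
    ratio = affected_volume_ml / lung_volume_ml
    score = 0.5 * min(affected_volume_ml / 1000.0, 1.0) + 0.5 * ratio
    if score < 0.1:
        return "mild"
    if score < 0.3:
        return "moderate"
    return "severe"

# Hypothetical relation table: affected-part volume (ml) -> disease grade
TABLE = [(0, "mild"), (200, "moderate"), (600, "severe")]
```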
It should be noted that the first medical image of the lung in this embodiment may be the first target CT image in the foregoing embodiment.
The invention has the beneficial effects that: the image parameters of the affected part in the first lung medical image can be acquired, and then the disease grade of the lung of the detected object corresponding to the first lung medical image information is determined according to the image parameters of the affected part, so that the disease can be graded based on the medical image.
In one embodiment, the step S22 can be implemented as the following steps:
at least one first pulmonary medical image is input into the neural network to determine the volume of the affected part in the first pulmonary medical image.
In one embodiment, a neural network comprises:
the device comprises a first detection model for detecting candidate patch shadows, a cutting model, a second detection model for detecting patch shadow intervals and a volume calculation model for calculating the volume of an affected part;
the above step of inputting the normal CT value distribution interval in the lung, the CT value distribution interval of the affected part and at least one first pulmonary medical image into the neural network to determine the volume of the affected part in the first pulmonary medical image can be implemented as the following steps a1-a 6:
in step a1, passing at least one first lung medical image through N consecutive convolution feature extraction modules in the first detection model, so that the N consecutive convolution feature extraction modules obtain image features of patch images in the first lung medical image, where N is a positive integer;
in step a2, inputting the image features of the patch images in the first lung medical image into the fully connected layer in the first detection model, so that the fully connected layer outputs candidate patch images based on the image features;
in step a3, the candidate patch image is cut by a cutting model for multiple times in different directions in space to obtain multiple section images of the candidate patch image in multiple directions in space;
in step a4, passing a plurality of consecutive section images through M consecutive convolution feature extraction modules in the second detection model, so that the M consecutive convolution feature extraction modules obtain image features of the section images, where M is a positive integer;
in step a5, inputting the image features of the section images into the fully connected layer in the second detection model, so that the fully connected layer outputs patch image information based on the image features;
in step a6, processing the patch image information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
In this embodiment, the neural network is formed by connecting a plurality of models, wherein the neural network includes a first detection model for detecting candidate patch shadows, a cutting model, a second detection model for detecting patch shadow intervals, and a volume calculation model for calculating the volume of the affected part.
The first detection model comprises an input layer, N continuous convolution feature extraction modules, a fully connected layer and an output layer, wherein each convolution feature extraction module comprises a plurality of convolution modules, and each convolution module comprises a convolution layer, a BN (batch normalization) layer and an activation layer.
The second detection model and the first detection model have the same structure, and are not described herein again.
When at least one first lung medical image passes through the N continuous convolution feature extraction modules in the first detection model, for any three continuous convolution feature extraction modules among the N modules, the image features output by the first and second modules are added to serve as the input of the third module. Similarly, when a plurality of continuous section images pass through the M continuous convolution feature extraction modules in the second detection model, for any three continuous convolution feature extraction modules among the M modules, the image features output by the first and second modules are added to serve as the input of the third module.
In addition, in the above steps, the number M of the convolution feature extraction modules in the second detection model may be equal to the number N of the convolution feature extraction modules in the first detection model, or may not be equal to N.
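The skip pattern described above (for any three consecutive modules, the summed outputs of the first two form the input of the third) could be sketched with stand-in modules; real convolution/BN/activation blocks are replaced here by plain callables for illustration only:

```python
import numpy as np

def run_extraction_chain(x, modules):
    """Chain N feature extraction modules so that, for any three consecutive
    modules, the summed outputs of the first two form the input of the third.
    Modules are stand-ins: callables mapping an array to a same-shaped array."""
    outputs = []
    for i, module in enumerate(modules):
        if i >= 2:
            # input of module i is the sum of the outputs of modules i-2 and i-1
            x = outputs[i - 2] + outputs[i - 1]
        out = module(x)
        outputs.append(out)
        x = out
    return outputs[-1]

# Toy modules standing in for conv + BN + activation blocks
modules = [lambda a: a + 1, lambda a: a * 2, lambda a: a - 3]
result = run_extraction_chain(np.zeros(4), modules)
```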
The beneficial effect of this embodiment lies in: the neural network formed by connecting a plurality of models can simultaneously realize patch shadow detection and volume calculation, thereby simplifying the method for determining the volume of the affected part.
In one embodiment, as shown in FIG. 3A, the above step S23 can be implemented as the following steps S31-S32:
in step S31, comparing the volume of the affected part with a target relationship table, wherein the target relationship table stores the corresponding relationship between the volume of the affected part and the disease level;
in step S32, a disease level of the lung of the subject is determined and output according to the comparison result.
In this embodiment, a relationship table is created in advance, and the relationship table includes a correspondence between the affected part volume and the disease level. The volume of the affected part can be compared with a target relation table, wherein the target relation table stores the corresponding relation between the volume of the affected part and the disease grade; and determining and outputting the disease grade of the lung of the detected object according to the comparison result.
In one embodiment, the above step S23 can be implemented as the following steps B1-B2:
in step B1, calculating the volume fraction of the affected site in the lung;
in step B2, the volume of the affected part and the volume ratio of the affected part in the lung are inputted into the disease grade calculation model to obtain the disease grade of the lung of the subject, which is obtained by the disease grade calculation model based on the comprehensive calculation of the volume of the affected part and the volume ratio of the affected part in the lung.
In this embodiment, the volume fraction of the affected part in the lung is calculated; and inputting the volume of the affected part and the volume ratio of the affected part in the lung into a disease grade calculation model to obtain the disease grade of the lung of the detected object, which is comprehensively calculated by the disease grade calculation model based on the volume of the affected part and the volume ratio of the affected part in the lung.
In this embodiment, the volume ratio of the specific affected part in the lung may also be calculated by a pre-trained volume ratio calculation model; after the medical image is input into the volume ratio calculation model, the model automatically gives the volume ratio of each CT interval. Fig. 3B is an interface schematic diagram of a system for executing the scheme provided by the present invention; as shown in fig. 3B, the affected-area volumes calculated by the volume ratio calculation model are displayed in the two lung-volume analysis columns of the interface.
In one embodiment, as shown in FIG. 4A, the method may also be implemented as steps S41-S44 as follows:
in step S41, acquiring a second lung medical image of the subject;
in step S42, acquiring a volume of the affected part in the second lung medical image;
in step S43, comparing the volume of the affected part in the second pulmonary medical image with the volume of the affected part in the first pulmonary medical image to determine the trend of the volume change of the affected part;
in step S44, the information of the trend of the lung disease of the subject is determined according to the trend of the volume change of the affected part.
In this embodiment, a second pulmonary medical image of the subject is obtained, where the second pulmonary medical image and the first pulmonary medical image in the foregoing embodiment are pulmonary medical images of the same subject at different periods, and the volume of the affected part in the second pulmonary medical image is compared with the volume of the affected part in the first pulmonary medical image to determine the trend of the change in the volume of the affected part; and determining the development trend information of the lung diseases of the detected object according to the volume change trend of the affected part.
For example, the disease condition of the subject may be aggravated or alleviated over time; therefore, the development trend information of the lung disease of the subject may be determined based on lung medical images of different periods. Specifically, the ID of the subject is first obtained, and a second pulmonary medical image of the subject is obtained from the ID. The second pulmonary medical image may be generated earlier or later than the first pulmonary medical image, as long as the generation times of the two images differ; in addition, considering that too small a time span makes the change of the medical condition inconspicuous, the interval between the generation times of the two images should be not less than a certain value, such as 48 hours. Fig. 4B is a schematic diagram illustrating the evaluation of novel coronavirus pneumonia, which includes a comparison result between the first and second pulmonary medical images. As shown in fig. 4B, after the second pulmonary medical image of the subject is obtained, the volume of the affected part in the second pulmonary medical image is obtained and compared with the volume of the affected part in the first pulmonary medical image to determine the trend of change in the volume of the affected part, and the development trend information of the lung disease of the subject is determined according to this trend. For example, in fig. 4B, as can be seen from the assessment interface on the right side of the figure, the volume of the affected part of the right lung decreases from 20% to 10%, and the volume of the affected part of the left lung decreases from 30% to 20%; that is, the volume of the affected part decreases with time, and the lung disease of the subject is determined to be alleviated. It will be appreciated that if the volume of the affected area increases over time, the lung disease of the subject is determined to be aggravated. Furthermore, the trend of the volume of the affected area can be represented in a more intuitive manner, for example, by an arrow, or by an arrow combined with specific numerical values.
The beneficial effect of this embodiment lies in: the volume change trend of the affected part can be judged based on different lung medical images of the same examined object, so that the development trend information of lung diseases of the examined object is automatically determined according to the volume change trend of the affected part.
In one embodiment, the above step S44 can be implemented as the following steps C1-C2:
in step C1, determining a first diagnosis result of the subject when the volume of the affected part conforms to the first development trend;
in step C2, a second diagnostic result of the subject is determined when the volume of the affected site corresponds to the second trend.
When the volume of the affected part accords with the first development trend, determining a first diagnosis result of the detected object;
For example, assuming that the first pulmonary medical image is generated later than the second pulmonary medical image, the volume of the affected part is reduced when the volume of the affected part in the first pulmonary medical image is smaller than that in the second pulmonary medical image. Assuming that the first pulmonary medical image is generated earlier than the second pulmonary medical image, the volume of the affected part is reduced when the volume of the affected part in the first pulmonary medical image is larger than that in the second pulmonary medical image. When the volume of the affected part is reduced, the first diagnosis result of the subject is determined, namely that the disease condition of the subject is alleviated.
When the volume of the affected part accords with a second development trend, determining a second diagnosis result of the detected object;
assuming that the first pulmonary medical image is generated later in time than the second pulmonary medical image, the volume of the affected site increases when the volume of the affected site in the first pulmonary medical image is larger than the volume of the affected site in the second pulmonary medical image. Assuming that the first pulmonary medical image is generated earlier in time than the second pulmonary medical image, the volume of the affected part is increased when the volume of the affected part in the first pulmonary medical image is smaller than the volume of the affected part in the second pulmonary medical image. When the volume of the affected part is increased, a second diagnosis result of the detected object is determined, namely the disease condition of the detected object is increased.
In one embodiment, the method may also be implemented as the following steps D1-D2:
in step D1, acquiring generation times of the first lung medical image and the second lung medical image;
in step D2, the disease progression rate of the subject is calculated from the generation time and the trend of the volume change of the affected part.
In this embodiment, generation time of the first pulmonary medical image and the second pulmonary medical image may be obtained, a generation time interval of the first pulmonary medical image and the second pulmonary medical image is determined according to the generation time, and then a volume variation amplitude of the affected part in unit time is calculated based on the time interval and the volume variation amplitude of the affected part, so as to obtain a disease development rate of the object to be examined.
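The disease development rate computation might be sketched as follows, assuming volumes and acquisition times as plain numbers and using the 48-hour minimum interval mentioned earlier as an illustrative guard:

```python
def progression_rate(vol_first, vol_second, hours_first, hours_second,
                     min_interval_hours=48):
    """Disease development rate: affected-part volume change per hour.
    Returns None when the acquisition interval is below the minimum, since
    the change of the medical condition would not be meaningful."""
    interval = abs(hours_second - hours_first)
    if interval < min_interval_hours:
        return None
    return (vol_second - vol_first) / interval
```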
In one embodiment, the method may also be implemented as steps E1 and/or E2-E3 as follows:
in step E1, rendering the first pulmonary medical image based on the single color to generate a third pulmonary medical image, wherein the rendered color depth is positively correlated with the CT value;
rendering the first pulmonary medical image based on a plurality of colors to generate a fourth pulmonary medical image, wherein different CT values are rendered by different types of colors in step E2;
in step E3, the first lung medical image, the third lung medical image and/or the fourth lung medical image are output.
In this embodiment, in order to verify the accuracy of the CT value interval segmentation, the volume of a lesion may be displayed according to the CT value interval selected by the user and visualized in a "rendering" manner. Specifically, the first pulmonary medical image is rendered based on a single color to generate a third pulmonary medical image, where the rendered color depth is positively correlated with the CT value; the first pulmonary medical image is then rendered based on a plurality of colors to generate a fourth pulmonary medical image, where different CT values are rendered in different types of colors; the first, third and fourth pulmonary medical images are then output. A specific output image format is shown in fig. 4C: the left side is the first lung medical image of the subject, in this example a chest CT image containing the lungs; in the middle cross-sectional view, the first lung medical image is rendered in one color, with different depths for different CT values, for example, the higher the CT value, the darker the color (although it is understood that the higher the CT value, the lighter the color can also be set); the cross-sectional view on the right side is marked with changing colors. For example, a plurality of CT value intervals may be provided, with regions falling in a low-CT-value interval rendered in blue and regions falling in a high-CT-value interval rendered in red.
It is to be understood that, in step E3, only the first and third lung medical images may be output, only the first and fourth lung medical images may be output, or the first, third and fourth lung medical images may be output simultaneously.
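The single-color and multi-color renderings of steps E1-E2 could be sketched as value-to-color mappings; the HU range, intervals, and color names below are illustrative assumptions:

```python
import numpy as np

def render_single_color(ct_slice, hu_min=-1000, hu_max=100):
    """Single-color rendering: map CT values to a color depth in [0, 1],
    with the depth positively correlated with the CT value."""
    clipped = np.clip(ct_slice, hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)

def render_intervals(ct_slice, intervals):
    """Multi-color rendering: each CT value interval gets its own color label.
    intervals: list of (low, high, color); values outside all intervals stay 'none'."""
    out = np.full(ct_slice.shape, "none", dtype=object)
    for low, high, color in intervals:
        out[(ct_slice >= low) & (ct_slice < high)] = color
    return out

slice_ = np.array([[-900.0, -400.0], [-100.0, 50.0]])
depth = render_single_color(slice_)
colors = render_intervals(slice_, [(-1000, -500, "blue"), (-500, 100, "red")])
```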
In one embodiment, the method may also be implemented as steps F1-F2:
in step F1, rendering the plurality of lung medical images by a plurality of colors, wherein different CT values and/or CT value ranges in the rendered lung medical images correspond to different colors;
in step F2, the rendered plurality of lung medical images are output.
In this embodiment, the lung medical images of the same patient at different stages of the disease course can be rendered to strengthen the comparison. For example, lung medical images of the same subject on three consecutive days are rendered in multiple colors, with the portions of different CT values and/or CT value ranges in each rendered image corresponding to different colors, and the rendered images are then output. In this way a CT image whose dominant colors are black and white is rendered into a color image, the visual effect is enhanced, and rendered lung medical images of the same subject at different stages of the disease course are obtained, making it convenient to compare the condition across the course of the disease.
In addition, for different diseases, a schematic comparison between the CT value distribution of normal lungs and that of lungs with a specific disease may be given. For novel coronavirus pneumonia, for example, chest CT images of a large number of healthy people may be analyzed, the lung CT value data of the healthy population given as a baseline reference, a histogram drawn, and metrics such as the histogram intersection and the Hellinger coefficient between the healthy-population and patient CT value distributions provided for the doctor's comparison; a specific comparison schematic is shown in fig. 4D. The CT histogram with the larger spread corresponds to novel coronavirus pneumonia, and the severity of the current case can be evaluated accurately and rapidly from the histogram.
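The histogram intersection and Hellinger coefficient mentioned above can be computed directly from the two normalized CT value histograms. A minimal sketch, in which the bin count and HU range are illustrative assumptions:

```python
import numpy as np

def ct_histogram(hu_values, bins=80, lo=-1000, hi=100):
    """Normalized histogram of lung CT values over a fixed HU range."""
    counts, _ = np.histogram(hu_values, bins=bins, range=(lo, hi))
    return counts / counts.sum()

def histogram_intersection(p, q):
    """Overlap of two normalized histograms: 1.0 = identical, 0.0 = disjoint."""
    return float(np.minimum(p, q).sum())

def hellinger_distance(p, q):
    """Hellinger distance between two normalized histograms (0 = identical, 1 = disjoint)."""
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))
```

A doctor-facing view would compute `ct_histogram` once for the healthy baseline and once for the patient, then report both metrics alongside the overlaid plots of fig. 4D.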
Fig. 5 is a block diagram of a medical image-based diagnostic information interaction apparatus according to an embodiment of the present invention, as shown in fig. 5, the apparatus includes:
a first acquiring module 51, configured to acquire a first pulmonary medical image of a subject;
a second obtaining module 52, configured to obtain image parameters of an affected part in the first lung medical image;
and the determining module 53 is configured to determine a disease level of the lung of the object to be examined corresponding to the first lung medical image information according to the image parameter of the affected part.
In one embodiment, the second obtaining module includes:
and the input submodule is used for inputting at least one first lung medical image into the neural network so as to determine the volume of the affected part in the first lung medical image.
In one embodiment, the neural network comprises:
the device comprises a first detection model for detecting candidate patch shadows, a cutting model, a second detection model for detecting patch shadow intervals and a volume calculation model for calculating the volume of an affected part;
an input submodule for:
enabling the at least one first lung medical image to pass through N continuous convolution feature extraction modules in a first detection model, so that the N continuous convolution feature extraction modules obtain image features of patch images in the first lung medical image, wherein N is a positive integer;
inputting the image features of the affected part in the first lung medical image into a fully connected layer in the first detection model, so that the fully connected layer outputs candidate patch images based on the image features;
enabling the candidate patch image to pass through a cutting model so that the cutting model performs multiple cutting on the candidate patch image in different directions in space to obtain multiple section images of the candidate patch image in multiple directions in space;
enabling a plurality of continuous section images to pass through M continuous convolution feature extraction modules in a second detection model, so that the M continuous convolution feature extraction modules obtain the image features of the section images, wherein M is a positive integer;
inputting the image features of the section images into a fully connected layer in the second detection model, so that the fully connected layer outputs patch image information based on the image features;
and passing the patch image information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
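The disclosure does not give the internal form of the volume calculation model. Assuming the patch image information includes a binary voxel mask and the scan's voxel spacing (both assumptions for illustration), the volume step reduces to:

```python
import numpy as np

def lesion_volume_ml(mask, spacing_mm=(1.0, 0.7, 0.7)):
    """Volume of an affected part from a binary voxel mask and the CT voxel spacing.

    mask: 3-D boolean array (slices, rows, cols), True inside the patch shadow.
    spacing_mm: voxel edge lengths in millimeters (slice thickness, row, col).
    Returns the volume in milliliters (1 mL = 1000 mm^3).
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0
```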
In one embodiment, the determining module includes:
the comparison submodule is used for comparing the volume of the affected part with a target relation table, wherein the target relation table stores the corresponding relation between the volume of the affected part and the disease grade;
and the first determining submodule is used for determining and outputting the disease grade of the lung of the detected object according to the comparison result.
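A minimal sketch of the target relation table lookup performed by the comparison and first determining submodules; the volume thresholds and grade labels are hypothetical placeholders, since the disclosure leaves the actual correspondence to be configured:

```python
# Illustrative target relation table: (upper volume bound in mL, disease grade).
TARGET_RELATION_TABLE = [
    (10.0, "mild"),
    (50.0, "moderate"),
    (float("inf"), "severe"),
]

def disease_grade(lesion_volume_ml):
    """Compare the lesion volume against the table and return the matching grade."""
    for upper_bound, grade in TARGET_RELATION_TABLE:
        if lesion_volume_ml < upper_bound:
            return grade
```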
In one embodiment, the determining module includes:
the calculation submodule is used for calculating the volume proportion of the affected part in the lung;
and the input submodule is used for inputting the volume of the affected part and the volume proportion of the affected part in the lung into a disease grade calculation model, so as to obtain the disease grade of the lung of the detected object, which the model calculates comprehensively from the volume of the affected part and its volume proportion in the lung.
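The disease grade calculation model itself is not specified. One hypothetical form is a weighted score over the two inputs; the weights and cut-offs below are invented purely for illustration:

```python
def grade_from_volume_and_ratio(volume_ml, ratio, w_volume=0.02, w_ratio=5.0):
    """Hypothetical comprehensive score from lesion volume (mL) and lung volume ratio (0-1)."""
    score = w_volume * volume_ml + w_ratio * ratio
    if score < 1.0:
        return "mild"
    if score < 3.0:
        return "moderate"
    return "severe"
```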
In one embodiment, the apparatus further comprises:
the third acquisition module is used for acquiring a second lung medical image of the detected object;
a fourth obtaining module, configured to obtain a volume of an affected part in the second pulmonary medical image;
the comparison module is used for comparing the volume of the affected part in the second pulmonary medical image with the volume of the affected part in the first pulmonary medical image so as to determine the volume change trend of the affected part;
and the change trend determining module is used for determining the development trend information of the lung diseases of the detected object according to the volume change trend of the affected part.
In one embodiment, the trend of change determination module includes:
the second determination submodule is used for determining a first diagnosis result of the detected object when the volume of the affected part accords with the first development trend;
and the third determining submodule is used for determining a second diagnosis result of the detected object when the volume of the affected part accords with a second development trend.
In one embodiment, the apparatus further comprises:
a fifth acquiring module, configured to acquire generation times of the first pulmonary medical image and the second pulmonary medical image;
and the calculation module is used for calculating the disease development speed of the detected object according to the generation time and the volume change trend of the affected part.
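Combining the generation times with the volume change gives the disease development speed. A sketch assuming the speed is expressed as volume change per day (the disclosure does not fix a unit):

```python
from datetime import datetime

def progression_rate(vol1_ml, time1, vol2_ml, time2):
    """Volume change of the affected part per day between two image generation times.

    Positive rate: the lesion is growing; negative rate: it is shrinking.
    """
    days = (time2 - time1).total_seconds() / 86400.0
    return (vol2_ml - vol1_ml) / days
```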
In one embodiment, the apparatus further comprises:
a first rendering module, configured to render the first pulmonary medical image based on a single color to generate a third pulmonary medical image, where a rendered color depth is positively correlated with a CT value;
a second rendering module, configured to render the first pulmonary medical image based on multiple colors to generate a fourth pulmonary medical image, where different CT values are rendered by different types of colors;
a first output module for outputting the first, third and/or fourth pulmonary medical images.
In one embodiment, the apparatus further comprises:
the third rendering module is used for rendering the plurality of lung medical images through a plurality of colors, and parts of different CT values and/or CT value ranges in the rendered lung medical images correspond to different colors;
and the second output module is used for outputting the rendered plurality of lung medical images.
The present invention also provides a non-transitory readable storage medium, in which instructions, when executed by a processor in a device, enable the device to perform a medical image-based diagnostic information interaction method, the method comprising:
acquiring a first lung medical image of a detected object;
acquiring image parameters of an affected part in the first lung medical image;
and outputting the disease grade of the lung of the detected object corresponding to the first lung medical image information according to the image parameters of the affected part.
The instructions in the storage medium may be further executable to:
the acquiring of the image parameters of the affected part in the first pulmonary medical image includes:
at least one first pulmonary medical image is input into the neural network to determine the volume of the affected part in the first pulmonary medical image.
The instructions in the storage medium may be further executable to:
the neural network includes:
the device comprises a first detection model for detecting candidate patch shadows, a cutting model, a second detection model for detecting patch shadow intervals and a volume calculation model for calculating the volume of an affected part;
inputting at least one first pulmonary medical image into a neural network to determine a volume of an affected site in the first pulmonary medical image, comprising:
enabling the at least one first lung medical image to pass through N continuous convolution feature extraction modules in a first detection model, so that the N continuous convolution feature extraction modules obtain image features of patch images in the first lung medical image, wherein N is a positive integer;
inputting the image features of the affected part in the first lung medical image into a fully connected layer in the first detection model, so that the fully connected layer outputs candidate patch images based on the image features;
enabling the candidate patch image to pass through a cutting model so that the cutting model performs multiple cutting on the candidate patch image in different directions in space to obtain multiple section images of the candidate patch image in multiple directions in space;
enabling a plurality of continuous section images to pass through M continuous convolution feature extraction modules in a second detection model, so that the M continuous convolution feature extraction modules obtain the image features of the section images, wherein M is a positive integer;
inputting the image features of the section images into a fully connected layer in the second detection model, so that the fully connected layer outputs patch image information based on the image features;
and passing the patch image information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
The instructions in the storage medium may be further executable to:
outputting the disease grade of the lung of the detected object corresponding to the first lung medical image information according to the image parameters of the affected part, wherein the step comprises the following steps:
comparing the volume of the affected part with a target relation table, wherein the target relation table stores the corresponding relation between the volume of the affected part and the disease grade;
and determining and outputting the disease grade of the lung of the detected object according to the comparison result.
The instructions in the storage medium may be further executable to:
outputting the disease grade of the lung of the detected object corresponding to the first lung medical image information according to the image parameters of the affected part, wherein the step comprises the following steps:
calculating the volume ratio of the affected part in the lung;
inputting the volume of the affected part and the volume proportion of the affected part in the lung into a disease grade calculation model, so as to obtain the disease grade of the lung of the detected object, which the model calculates comprehensively from the volume of the affected part and its volume proportion in the lung.
The instructions in the storage medium may be further executable to:
acquiring a second lung medical image of the detected object;
acquiring the volume of the affected part in the second lung medical image;
comparing the volume of the affected part in the second pulmonary medical image with the volume of the affected part in the first pulmonary medical image to determine the volume change trend of the affected part;
and determining the development trend information of the lung diseases of the detected object according to the volume change trend of the affected part.
The instructions in the storage medium may be further executable to:
determining the development trend of the lung diseases of the detected object according to the volume change trend of the affected part, comprising the following steps:
when the volume of the affected part accords with a first development trend, determining a first diagnosis result of the detected object;
and when the volume of the affected part accords with a second development trend, determining a second diagnosis result of the detected object.
The instructions in the storage medium may be further executable to:
acquiring the generation time of the first lung medical image and the second lung medical image;
and calculating the disease development speed of the detected object according to the generation time and the volume change trend of the affected part.
The instructions in the storage medium may be further executable to:
rendering the first pulmonary medical image based on a single color to generate a third pulmonary medical image, wherein the rendered color depth is positively correlated with the CT value; and/or
Rendering the first pulmonary medical image based on a plurality of colors to generate a fourth pulmonary medical image, wherein different CT values are rendered by different types of colors;
outputting the first, third and/or fourth pulmonary medical images.
The instructions in the storage medium may be further executable to:
rendering the plurality of lung medical images through a plurality of colors, wherein parts of different CT values and/or CT value ranges in the rendered lung medical images correspond to different colors;
and outputting the rendered plurality of lung medical images.
The present invention also provides a non-transitory readable storage medium having instructions that, when executed by a processor within a device, enable the device to perform a method of displaying a diagnostic information interface, the method comprising:
forming a first graph based on first data, wherein the first graph is represented by a first color, and the first data is CT value density data of a region of interest in a first target CT image;
forming a second graph based on the second data; wherein the second graphic is represented in a second color;
and determining the overlapping part of the first graph and the second graph, and representing the overlapping part by a third color.
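The three-step composition above can be sketched as painting two graphic masks onto a canvas, with the overlap repainted in a third color; the specific RGB values for the first, second, and third colors are arbitrary choices for illustration:

```python
import numpy as np

FIRST_COLOR  = np.array([0.2, 0.4, 1.0])   # first graphic (e.g. baseline histogram)
SECOND_COLOR = np.array([1.0, 0.3, 0.2])   # second graphic (e.g. patient histogram)
THIRD_COLOR  = np.array([0.6, 0.2, 0.8])   # overlapping part of the two

def compose(first_mask, second_mask):
    """Paint two graphic masks onto a white canvas; their overlap gets a third color."""
    h, w = first_mask.shape
    canvas = np.ones((h, w, 3))
    canvas[first_mask] = FIRST_COLOR
    canvas[second_mask] = SECOND_COLOR
    canvas[first_mask & second_mask] = THIRD_COLOR
    return canvas
```

Marking the overlap in its own color lets the doctor see at a glance where the two CT value density curves coincide and where they diverge.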
The instructions in the storage medium may be further executable to:
the forming a first pattern based on the first data includes:
the first data is determined in response to acquiring CT value density data for a region of interest in the first target CT image.
The second data is reference data of a region of interest in the CT image.
The second data is region-of-interest CT value density data in a second target CT image acquired at a different time than the first target CT image.
The region of interest is comprised in at least one of:
a human lung organ, the left lung, the right lung, the upper lobe of the right lung, the middle lobe of the right lung, the lower lobe of the right lung, the upper lobe of the left lung, and the lower lobe of the left lung.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (11)

1. A display method of a diagnostic information interface comprises the following steps:
forming a first graph based on first data, wherein the first graph is represented by a first color, and the first data is CT value density data of a region of interest in a first target CT image;
forming a second graph based on the second data; wherein the second graphic is represented in a second color;
and determining the overlapping part of the first graph and the second graph, and representing the overlapping part by a third color.
2. The method of claim 1, the forming a first graphic based on first data, comprising:
the first data is determined in response to acquiring CT value density data for a region of interest in the first target CT image.
3. The method of claim 1, wherein the second data is reference data for a region of interest in a CT image.
4. The method of claim 1, wherein the second data is region of interest CT value density data in a second target CT image acquired at a different time than the first target CT image.
5. The method of claim 3 or 4, wherein the region of interest is comprised in at least one of:
a human lung organ, the left lung, the right lung, the upper lobe of the right lung, the middle lobe of the right lung, the lower lobe of the right lung, the upper lobe of the left lung, and the lower lobe of the left lung.
6. A diagnostic information interaction method based on medical images is characterized by comprising the following steps:
acquiring a first lung medical image of a detected object;
acquiring image parameters of an affected part in the first lung medical image;
and outputting the disease grade of the lung of the detected object corresponding to the first lung medical image information according to the image parameters of the affected part.
7. The method of claim 6, wherein said obtaining image parameters of an affected site in said first pulmonary medical image comprises:
enabling the at least one first lung medical image to pass through N continuous convolution feature extraction modules in a first detection model in a neural network, so that the N continuous convolution feature extraction modules obtain image features of patch images in the first lung medical image, wherein N is a positive integer;
inputting the image features of the affected part in the first lung medical image into a fully connected layer in the first detection model, so that the fully connected layer outputs candidate patch images based on the image features;
enabling the candidate patch image to pass through a cutting model so that the cutting model performs multiple cutting on the candidate patch image in different directions in space to obtain multiple section images of the candidate patch image in multiple directions in space;
enabling a plurality of continuous section images to pass through M continuous convolution feature extraction modules in a second detection model, so that the M continuous convolution feature extraction modules obtain the image features of the section images, wherein M is a positive integer;
inputting the image features of the section images into a fully connected layer in the second detection model, so that the fully connected layer outputs patch image information based on the image features;
and passing the patch image information through the volume calculation model, so that the volume calculation model calculates the volume of the affected part in the first lung medical image.
8. The method of claim 6, wherein outputting the disease level of the lung of the subject corresponding to the first pulmonary medical image information according to the image parameter of the affected part comprises:
comparing the volume of the affected part with a target relation table, wherein the target relation table stores the corresponding relation between the volume of the affected part and the disease grade;
determining and outputting the disease grade of the lung of the detected object according to the comparison result;
or
Calculating the volume ratio of the affected part in the lung;
inputting the volume of the affected part and the volume proportion of the affected part in the lung into a disease grade calculation model, so as to obtain the disease grade of the lung of the detected object, which the model calculates comprehensively from the volume of the affected part and its volume proportion in the lung.
9. The method of claim 7, wherein the method further comprises:
acquiring a second lung medical image of the detected object;
acquiring the volume of the affected part in the second lung medical image;
comparing the volume of the affected part in the second pulmonary medical image with the volume of the affected part in the first pulmonary medical image to determine the volume change trend of the affected part;
and determining the development trend information of the lung diseases of the detected object according to the volume change trend of the affected part.
10. The method of claim 6, wherein the method further comprises:
rendering the first pulmonary medical image based on a single color to generate a third pulmonary medical image, wherein the rendered color depth is positively correlated with the CT value; and/or
Rendering the first pulmonary medical image based on a plurality of colors to generate a fourth pulmonary medical image, wherein different CT values are rendered by different types of colors;
outputting the first, third and/or fourth pulmonary medical images.
11. A non-transitory readable storage medium in which instructions, when executed by a processor within a device, enable the device to perform a method of displaying a diagnostic information interface or a method of medical image-based diagnostic information interaction, the method comprising:
the method of any one of claims 1 to 5; or
The method of any one of claims 6 to 10.
CN202010083597.6A 2020-02-05 2020-02-07 Display method, interaction method and storage medium of diagnostic information interface Pending CN111261285A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010083597.6A CN111261285A (en) 2020-02-07 2020-02-07 Display method, interaction method and storage medium of diagnostic information interface
EP21751295.3A EP4089688A4 (en) 2020-02-05 2021-02-05 Medical imaging-based method and device for diagnostic information processing, and storage medium
PCT/CN2021/075379 WO2021155829A1 (en) 2020-02-05 2021-02-05 Medical imaging-based method and device for diagnostic information processing, and storage medium
US17/760,185 US20230070249A1 (en) 2020-02-05 2021-02-05 Medical imaging-based method and device for diagnostic information processing, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010083597.6A CN111261285A (en) 2020-02-07 2020-02-07 Display method, interaction method and storage medium of diagnostic information interface

Publications (1)

Publication Number Publication Date
CN111261285A true CN111261285A (en) 2020-06-09

Family

ID=70952701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010083597.6A Pending CN111261285A (en) 2020-02-05 2020-02-07 Display method, interaction method and storage medium of diagnostic information interface

Country Status (1)

Country Link
CN (1) CN111261285A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021155829A1 (en) * 2020-02-05 2021-08-12 杭州依图医疗技术有限公司 Medical imaging-based method and device for diagnostic information processing, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040101176A1 (en) * 2002-11-27 2004-05-27 General Electric Company Method and system for measuring disease relevant tissue changes
CN1639739A (en) * 2002-03-04 2005-07-13 西门子共同研究公司 A graphical user interface of object consistency in CT volume image sets
CN105956386A (en) * 2016-04-27 2016-09-21 深圳市智影医疗科技有限公司 Health indicator index classification system and method based on chest radiography of healthy people
CN106909778A (en) * 2017-02-09 2017-06-30 北京市计算中心 A kind of Multimodal medical image recognition methods and device based on deep learning
CN107392893A (en) * 2017-06-30 2017-11-24 上海联影医疗科技有限公司 Tissue density's analysis method and system
CN108615237A (en) * 2018-05-08 2018-10-02 上海商汤智能科技有限公司 A kind of method for processing lung images and image processing equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shanghai Public Health Clinical Center: «"Scientific Prevention and Control, AI Assistance": Novel Coronavirus Pneumonia Intelligent Evaluation System Launched at the Public Health Center» *

Similar Documents

Publication Publication Date Title
JP6877868B2 (en) Image processing equipment, image processing method and image processing program
CN104217418B (en) The segmentation of calcification blood vessel
CN108615237A (en) A kind of method for processing lung images and image processing equipment
CN111261284A (en) Medical image-based diagnostic information processing method and device and storage medium
US11810243B2 (en) Method of rendering a volume and a surface embedded in the volume
JP2015528372A (en) System and method for automatically detecting pulmonary nodules in medical images
JP2002503861A (en) Automatic drawing method and system of lung region and rib diaphragm angle in chest radiograph
CN110189307B (en) Pulmonary nodule detection method and system based on multi-model fusion
CN111340756A (en) Medical image lesion detection and combination method, system, terminal and storage medium
EP4022562A1 (en) Image processing for stroke characterization
CN113139948A (en) Organ contour line quality evaluation method, device and system
CN111160812B (en) Diagnostic information evaluation method, display method, and storage medium
US20230070249A1 (en) Medical imaging-based method and device for diagnostic information processing, and storage medium
CN113168537A (en) Segmentation of deep neural networks
CN108399354A (en) The method and apparatus of Computer Vision Recognition tumour
CN111261285A (en) Display method, interaction method and storage medium of diagnostic information interface
WO2020235461A1 (en) Abnormality detection method, abnormality detection program, abnormality detection device, server device, and information processing method
CN114375461A (en) Inspiratory metrics for chest X-ray images
Wei et al. Automatic recognition of major fissures in human lungs
CN115690556A (en) Image recognition method and system based on multi-modal iconography characteristics
JP5954846B2 (en) Shape data generation program, shape data generation method, and shape data generation apparatus
CN114387380A (en) Method for generating a computer-based visualization of 3D medical image data
JP2020113275A (en) Lung analysis and reporting system
CN111383218B (en) Medical image-based diagnostic information processing method and storage medium
CN108765415A (en) There is one kind shade management to monitor system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination